bugfix for openEuler 20.03 @20210329
Dan Carpenter (2):
  net/x25: prevent a couple of overflows
  staging: rtl8188eu: prevent ->ssid overflow in rtw_wx_set_scan()

Dave Airlie (1):
  drm/ttm/nouveau: don't call tt destroy callback on alloc failure.

Filipe Manana (1):
  btrfs: fix race when cloning extent buffer during rewind of an old root

Kan Liang (1):
  perf/x86/intel: Fix a crash caused by zero PEBS status

Li ZhiGang (1):
  staging: TCM: add GMJS(Nationz Tech) TCM driver.

Liu Shixin (1):
  mm/vmscan: fix uncleaned mem_cgroup_uncharge

Lu Jialin (1):
  cgroup: Fix kabi broken by files_cgroup introduced

Piotr Krysiuk (2):
  bpf: Prohibit alu ops for pointer types not defining ptr_limit
  bpf: Fix off-by-one for area size in creating mask to left

Tyrel Datwyler (1):
  PCI: rpadlpar: Fix potential drc_name corruption in store functions

Yang Yingliang (1):
  config: enable config TXGBE by default

Zhang Ming (1):
  arm64/mpam: fix a possible deadlock in mpam_enable

Zhen Lei (1):
  config: arm64: build TCM driver to modules by default

zhenpengzheng (2):
  net: txgbe: Add support for Netswift 10G NIC
  x86/config: Set CONFIG_TXGBE=m by default
 arch/arm64/configs/hulk_defconfig             |    2 +
 arch/arm64/configs/openeuler_defconfig        |    3 +
 arch/arm64/kernel/mpam/mpam_device.c          |    4 +-
 arch/x86/configs/openeuler_defconfig          |    2 +
 arch/x86/events/intel/ds.c                    |    2 +-
 drivers/gpu/drm/nouveau/nouveau_sgdma.c       |    9 +-
 drivers/gpu/drm/ttm/ttm_tt.c                  |    3 -
 drivers/net/ethernet/Kconfig                  |    1 +
 drivers/net/ethernet/Makefile                 |    1 +
 drivers/net/ethernet/netswift/Kconfig         |   20 +
 drivers/net/ethernet/netswift/Makefile        |    6 +
 drivers/net/ethernet/netswift/txgbe/Kconfig   |   13 +
 drivers/net/ethernet/netswift/txgbe/Makefile  |   11 +
 drivers/net/ethernet/netswift/txgbe/txgbe.h   | 1260 +++
 .../net/ethernet/netswift/txgbe/txgbe_bp.c    |  875 ++
 .../net/ethernet/netswift/txgbe/txgbe_bp.h    |   41 +
 .../net/ethernet/netswift/txgbe/txgbe_dcb.h   |   30 +
 .../ethernet/netswift/txgbe/txgbe_ethtool.c   | 3381 +++++++
 .../net/ethernet/netswift/txgbe/txgbe_hw.c    | 7072 +++++++++++++++
 .../net/ethernet/netswift/txgbe/txgbe_hw.h    |  264 +
 .../net/ethernet/netswift/txgbe/txgbe_lib.c   |  959 ++
 .../net/ethernet/netswift/txgbe/txgbe_main.c  | 8045 +++++++++++++++++
 .../net/ethernet/netswift/txgbe/txgbe_mbx.c   |  399 +
 .../net/ethernet/netswift/txgbe/txgbe_mbx.h   |  171 +
 .../net/ethernet/netswift/txgbe/txgbe_mtd.c   | 1366 +++
 .../net/ethernet/netswift/txgbe/txgbe_mtd.h   | 1540 ++++
 .../net/ethernet/netswift/txgbe/txgbe_param.c | 1191 +++
 .../net/ethernet/netswift/txgbe/txgbe_phy.c   | 1014 +++
 .../net/ethernet/netswift/txgbe/txgbe_phy.h   |  190 +
 .../net/ethernet/netswift/txgbe/txgbe_ptp.c   |  884 ++
 .../net/ethernet/netswift/txgbe/txgbe_type.h  | 3213 +++++++
 drivers/pci/hotplug/rpadlpar_sysfs.c          |   14 +-
 drivers/staging/Kconfig                       |    2 +
 drivers/staging/Makefile                      |    1 +
 drivers/staging/gmjstcm/Kconfig               |   21 +
 drivers/staging/gmjstcm/Makefile              |    3 +
 drivers/staging/gmjstcm/tcm.c                 |  949 ++
 drivers/staging/gmjstcm/tcm.h                 |  122 +
 drivers/staging/gmjstcm/tcm_tis_spi.c         |  847 ++
 .../staging/rtl8188eu/os_dep/ioctl_linux.c    |    6 +-
 fs/btrfs/ctree.c                              |    2 +
 include/linux/cgroup_subsys.h                 |    2 +
 kernel/bpf/verifier.c                         |   20 +-
 kernel/cgroup/cgroup.c                        |    6 +
 mm/vmscan.c                                   |    1 -
 net/x25/af_x25.c                              |    6 +-
 46 files changed, 33942 insertions(+), 32 deletions(-)
 create mode 100644 drivers/net/ethernet/netswift/Kconfig
 create mode 100644 drivers/net/ethernet/netswift/Makefile
 create mode 100644 drivers/net/ethernet/netswift/txgbe/Kconfig
 create mode 100644 drivers/net/ethernet/netswift/txgbe/Makefile
 create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe.h
 create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_bp.c
 create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_bp.h
 create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_dcb.h
 create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_ethtool.c
 create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_hw.c
 create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_hw.h
 create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_lib.c
 create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_main.c
 create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_mbx.c
 create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_mbx.h
 create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_mtd.c
 create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_mtd.h
 create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_param.c
 create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_phy.c
 create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_phy.h
 create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_ptp.c
 create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_type.h
 create mode 100644 drivers/staging/gmjstcm/Kconfig
 create mode 100644 drivers/staging/gmjstcm/Makefile
 create mode 100644 drivers/staging/gmjstcm/tcm.c
 create mode 100644 drivers/staging/gmjstcm/tcm.h
 create mode 100644 drivers/staging/gmjstcm/tcm_tis_spi.c
From: zhenpengzheng <zhenpengzheng@net-swift.com>

driver inclusion
category: feature
bugzilla: 50777
CVE: NA

-------------------------------------------------------------------------

This driver is based on drivers/net/ethernet/intel/ixgbe/.

Signed-off-by: zhenpengzheng <zhenpengzheng@net-swift.com>
Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
---
 drivers/net/ethernet/Kconfig                  |    1 +
 drivers/net/ethernet/Makefile                 |    1 +
 drivers/net/ethernet/netswift/Kconfig         |   20 +
 drivers/net/ethernet/netswift/Makefile        |    6 +
 drivers/net/ethernet/netswift/txgbe/Kconfig   |   13 +
 drivers/net/ethernet/netswift/txgbe/Makefile  |   11 +
 drivers/net/ethernet/netswift/txgbe/txgbe.h   | 1260 +++
 .../net/ethernet/netswift/txgbe/txgbe_bp.c    |  875 ++
 .../net/ethernet/netswift/txgbe/txgbe_bp.h    |   41 +
 .../net/ethernet/netswift/txgbe/txgbe_dcb.h   |   30 +
 .../ethernet/netswift/txgbe/txgbe_ethtool.c   | 3381 +++++++
 .../net/ethernet/netswift/txgbe/txgbe_hw.c    | 7072 +++++++++++++++
 .../net/ethernet/netswift/txgbe/txgbe_hw.h    |  264 +
 .../net/ethernet/netswift/txgbe/txgbe_lib.c   |  959 ++
 .../net/ethernet/netswift/txgbe/txgbe_main.c  | 8045 +++++++++++++++++
 .../net/ethernet/netswift/txgbe/txgbe_mbx.c   |  399 +
 .../net/ethernet/netswift/txgbe/txgbe_mbx.h   |  171 +
 .../net/ethernet/netswift/txgbe/txgbe_mtd.c   | 1366 +++
 .../net/ethernet/netswift/txgbe/txgbe_mtd.h   | 1540 ++++
 .../net/ethernet/netswift/txgbe/txgbe_param.c | 1191 +++
 .../net/ethernet/netswift/txgbe/txgbe_phy.c   | 1014 +++
 .../net/ethernet/netswift/txgbe/txgbe_phy.h   |  190 +
 .../net/ethernet/netswift/txgbe/txgbe_ptp.c   |  884 ++
 .../net/ethernet/netswift/txgbe/txgbe_type.h  | 3213 +++++++
 24 files changed, 31947 insertions(+)
 create mode 100644 drivers/net/ethernet/netswift/Kconfig
 create mode 100644 drivers/net/ethernet/netswift/Makefile
 create mode 100644 drivers/net/ethernet/netswift/txgbe/Kconfig
 create mode 100644 drivers/net/ethernet/netswift/txgbe/Makefile
 create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe.h
 create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_bp.c
 create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_bp.h
 create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_dcb.h
 create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_ethtool.c
 create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_hw.c
 create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_hw.h
 create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_lib.c
 create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_main.c
 create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_mbx.c
 create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_mbx.h
 create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_mtd.c
 create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_mtd.h
 create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_param.c
 create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_phy.c
 create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_phy.h
 create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_ptp.c
 create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_type.h
diff --git a/drivers/net/ethernet/Kconfig b/drivers/net/ethernet/Kconfig
index 6fde68aa13a4..208c2cee14d6 100644
--- a/drivers/net/ethernet/Kconfig
+++ b/drivers/net/ethernet/Kconfig
@@ -82,6 +82,7 @@ source "drivers/net/ethernet/i825xx/Kconfig"
 source "drivers/net/ethernet/ibm/Kconfig"
 source "drivers/net/ethernet/intel/Kconfig"
 source "drivers/net/ethernet/xscale/Kconfig"
+source "drivers/net/ethernet/netswift/Kconfig"
 config JME
 	tristate "JMicron(R) PCI-Express Gigabit Ethernet support"
diff --git a/drivers/net/ethernet/Makefile b/drivers/net/ethernet/Makefile
index b45d5f626b59..bd2235ac6a97 100644
--- a/drivers/net/ethernet/Makefile
+++ b/drivers/net/ethernet/Makefile
@@ -95,3 +95,4 @@ obj-$(CONFIG_NET_VENDOR_WIZNET) += wiznet/
 obj-$(CONFIG_NET_VENDOR_XILINX) += xilinx/
 obj-$(CONFIG_NET_VENDOR_XIRCOM) += xircom/
 obj-$(CONFIG_NET_VENDOR_SYNOPSYS) += synopsys/
+obj-$(CONFIG_NET_VENDOR_NETSWIFT) += netswift/
diff --git a/drivers/net/ethernet/netswift/Kconfig b/drivers/net/ethernet/netswift/Kconfig
new file mode 100644
index 000000000000..c4b510b659ae
--- /dev/null
+++ b/drivers/net/ethernet/netswift/Kconfig
@@ -0,0 +1,20 @@
+#
+# Netswift network device configuration
+#
+
+config NET_VENDOR_NETSWIFT
+	bool "netswift devices"
+	default y
+	---help---
+	  If you have a network (Ethernet) card belonging to this class, say Y.
+
+	  Note that the answer to this question doesn't directly affect the
+	  kernel: saying N will just cause the configurator to skip all
+	  the questions about Netswift NICs. If you say Y, you will be asked for
+	  your specific card in the following questions.
+
+if NET_VENDOR_NETSWIFT
+
+source "drivers/net/ethernet/netswift/txgbe/Kconfig"
+
+endif # NET_VENDOR_NETSWIFT
diff --git a/drivers/net/ethernet/netswift/Makefile b/drivers/net/ethernet/netswift/Makefile
new file mode 100644
index 000000000000..0845d08600be
--- /dev/null
+++ b/drivers/net/ethernet/netswift/Makefile
@@ -0,0 +1,6 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Makefile for the Netswift network device drivers.
+#
+
+obj-$(CONFIG_TXGBE) += txgbe/
diff --git a/drivers/net/ethernet/netswift/txgbe/Kconfig b/drivers/net/ethernet/netswift/txgbe/Kconfig
new file mode 100644
index 000000000000..5aba1985d83f
--- /dev/null
+++ b/drivers/net/ethernet/netswift/txgbe/Kconfig
@@ -0,0 +1,13 @@
+#
+# Netswift driver configuration
+#
+
+config TXGBE
+	tristate "Netswift 10G Network Interface Card"
+	default n
+	depends on PCI_MSI && NUMA && PCI_IOV && DCB
+	---help---
+	  This driver supports Netswift 10G Ethernet cards.
+	  To compile this driver as part of the kernel, choose Y here.
+	  If unsure, choose N.
+	  The default is N.
diff --git a/drivers/net/ethernet/netswift/txgbe/Makefile b/drivers/net/ethernet/netswift/txgbe/Makefile
new file mode 100644
index 000000000000..f8531f3356a8
--- /dev/null
+++ b/drivers/net/ethernet/netswift/txgbe/Makefile
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd.
+#
+# Makefile for the Netswift 10GbE PCI Express ethernet driver
+#
+
+obj-$(CONFIG_TXGBE) += txgbe.o
+
+txgbe-objs := txgbe_main.o txgbe_ethtool.o \
+	txgbe_hw.o txgbe_phy.o txgbe_bp.o \
+	txgbe_mbx.o txgbe_mtd.o txgbe_param.o txgbe_lib.o txgbe_ptp.o
diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe.h b/drivers/net/ethernet/netswift/txgbe/txgbe.h
new file mode 100644
index 000000000000..40bb86dbf3ae
--- /dev/null
+++ b/drivers/net/ethernet/netswift/txgbe/txgbe.h
@@ -0,0 +1,1260 @@
+/*
+ * WangXun 10 Gigabit PCI Express Linux driver
+ * Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ */
+
+
+#ifndef _TXGBE_H_
+#define _TXGBE_H_
+
+#include <net/ip.h>
+#include <linux/pci.h>
+#include <linux/vmalloc.h>
+#include <linux/ethtool.h>
+#include <linux/if_vlan.h>
+#include <net/busy_poll.h>
+#include <linux/sctp.h>
+
+#include <linux/timecounter.h>
+#include <linux/clocksource.h>
+#include <linux/net_tstamp.h>
+#include <linux/ptp_clock_kernel.h>
+#include <linux/aer.h>
+
+#include "txgbe_type.h"
+
+#ifndef KR_POLLING
+#define KR_POLLING 0
+#endif
+
+#ifndef KR_MODE
+#define KR_MODE 0
+#endif
+
+#ifndef AUTO
+#define AUTO 1
+#endif
+
+#ifndef DEFAULT_FCPAUSE
+#define DEFAULT_FCPAUSE 0xFFFF /* kylinft/kylinlx : 0x3FFF default to 0xFFFF*/
+#endif
+
+#ifndef MAX_REQUEST_SIZE
+#define MAX_REQUEST_SIZE 256 /* kylinft : 512 default to 256*/
+#endif
+
+#ifndef DEFAULT_TXD
+#define DEFAULT_TXD 512 /*deepinsw : 1024 default to 512*/
+#endif
+
+#ifndef DEFAULT_TX_WORK
+#define DEFAULT_TX_WORK 256 /*deepinsw : 512 default to 256*/
+#endif
+
+#ifndef CL72_KRTR_PRBS_MODE_EN
+#define CL72_KRTR_PRBS_MODE_EN 0x2fff /*deepinsw : 512 default to 256*/
+#endif
+
+#ifndef SFI_SET
+#define SFI_SET 0
+#define SFI_MAIN 24
+#define SFI_PRE 4
+#define SFI_POST 16
+#endif
+
+#ifndef KR_SET
+#define KR_SET 0
+#define KR_MAIN 27
+#define KR_PRE 8
+#define KR_POST 44
+#endif
+
+#ifndef KX4_SET
+#define KX4_SET 0
+#define KX4_MAIN 40
+#define KX4_PRE 0
+#define KX4_POST 0
+#endif
+
+#ifndef KX_SET
+#define KX_SET 0
+#define KX_MAIN 24
+#define KX_PRE 4
+#define KX_POST 16
+#endif
+
+
+#ifndef KX4_TXRX_PIN
+#define KX4_TXRX_PIN 0 /*rx : 0xf tx : 0xf0 */
+#endif
+#ifndef KR_TXRX_PIN
+#define KR_TXRX_PIN 0 /*rx : 0xf tx : 0xf0 */
+#endif
+#ifndef SFI_TXRX_PIN
+#define SFI_TXRX_PIN 0 /*rx : 0xf tx : 0xf0 */
+#endif
+
+#ifndef KX_SGMII
+#define KX_SGMII 0 /* 1 0x18090 :0xcf00 */
+#endif
+
+#ifndef KR_NORESET
+#define KR_NORESET 0
+#endif
+
+#ifndef KR_CL72_TRAINING
+#define KR_CL72_TRAINING 1
+#endif
+
+#ifndef KR_REINITED
+#define KR_REINITED 1
+#endif
+
+#ifndef KR_AN73_PRESET
+#define KR_AN73_PRESET 1
+#endif
+
+#ifndef BOND_CHECK_LINK_MODE
+#define BOND_CHECK_LINK_MODE 0
+#endif
+
+/* Ether Types */
+#define TXGBE_ETH_P_LLDP 0x88CC
+#define TXGBE_ETH_P_CNM 0x22E7
+
+/* TX/RX descriptor defines */
+#if defined(DEFAULT_TXD) || defined(DEFAULT_TX_WORK)
+#define TXGBE_DEFAULT_TXD DEFAULT_TXD
+#define TXGBE_DEFAULT_TX_WORK DEFAULT_TX_WORK
+#else
+#define TXGBE_DEFAULT_TXD 512
+#define TXGBE_DEFAULT_TX_WORK 256
+#endif
+#define TXGBE_MAX_TXD 8192
+#define TXGBE_MIN_TXD 128
+
+#if (PAGE_SIZE < 8192)
+#define TXGBE_DEFAULT_RXD 512
+#define TXGBE_DEFAULT_RX_WORK 256
+#else
+#define TXGBE_DEFAULT_RXD 256
+#define TXGBE_DEFAULT_RX_WORK 128
+#endif
+
+#define TXGBE_MAX_RXD 8192
+#define TXGBE_MIN_RXD 128
+
+#define TXGBE_ETH_P_LLDP 0x88CC
+
+/* flow control */
+#define TXGBE_MIN_FCRTL 0x40
+#define TXGBE_MAX_FCRTL 0x7FF80
+#define TXGBE_MIN_FCRTH 0x600
+#define TXGBE_MAX_FCRTH 0x7FFF0
+#if defined(DEFAULT_FCPAUSE)
+#define TXGBE_DEFAULT_FCPAUSE DEFAULT_FCPAUSE /*0x3800*/
+#else
+#define TXGBE_DEFAULT_FCPAUSE 0xFFFF
+#endif
+#define TXGBE_MIN_FCPAUSE 0
+#define TXGBE_MAX_FCPAUSE 0xFFFF
+
+/* Supported Rx Buffer Sizes */
+#define TXGBE_RXBUFFER_256 256 /* Used for skb receive header */
+#define TXGBE_RXBUFFER_2K 2048
+#define TXGBE_RXBUFFER_3K 3072
+#define TXGBE_RXBUFFER_4K 4096
+#define TXGBE_MAX_RXBUFFER 16384 /* largest size for single descriptor */
+
+#define TXGBE_BP_M_NULL 0
+#define TXGBE_BP_M_SFI 1
+#define TXGBE_BP_M_KR 2
+#define TXGBE_BP_M_KX4 3
+#define TXGBE_BP_M_KX 4
+#define TXGBE_BP_M_NAUTO 0
+#define TXGBE_BP_M_AUTO 1
+
+/*
+ * NOTE: netdev_alloc_skb reserves up to 64 bytes, NET_IP_ALIGN means we
+ * reserve 64 more, and skb_shared_info adds an additional 320 bytes more,
+ * this adds up to 448 bytes of extra data.
+ *
+ * Since netdev_alloc_skb now allocates a page fragment we can use a value
+ * of 256 and the resultant skb will have a truesize of 960 or less.
+ */
+#define TXGBE_RX_HDR_SIZE TXGBE_RXBUFFER_256
+
+#define MAXIMUM_ETHERNET_VLAN_SIZE (VLAN_ETH_FRAME_LEN + ETH_FCS_LEN)
+
+/* How many Rx Buffers do we bundle into one write to the hardware ? */
+#define TXGBE_RX_BUFFER_WRITE 16 /* Must be power of 2 */
+#define TXGBE_RX_DMA_ATTR \
+	(DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING)
+
+/* assume the kernel supports 8021p to avoid stripping vlan tags */
+#ifndef HAVE_8021P_SUPPORT
+#define HAVE_8021P_SUPPORT
+#endif
+
+enum txgbe_tx_flags {
+	/* cmd_type flags */
+	TXGBE_TX_FLAGS_HW_VLAN = 0x01,
+	TXGBE_TX_FLAGS_TSO = 0x02,
+	TXGBE_TX_FLAGS_TSTAMP = 0x04,
+
+	/* olinfo flags */
+	TXGBE_TX_FLAGS_CC = 0x08,
+	TXGBE_TX_FLAGS_IPV4 = 0x10,
+	TXGBE_TX_FLAGS_CSUM = 0x20,
+	TXGBE_TX_FLAGS_OUTER_IPV4 = 0x100,
+	TXGBE_TX_FLAGS_LINKSEC = 0x200,
+	TXGBE_TX_FLAGS_IPSEC = 0x400,
+
+	/* software defined flags */
+	TXGBE_TX_FLAGS_SW_VLAN = 0x40,
+	TXGBE_TX_FLAGS_FCOE = 0x80,
+};
+
+/* VLAN info */
+#define TXGBE_TX_FLAGS_VLAN_MASK 0xffff0000
+#define TXGBE_TX_FLAGS_VLAN_PRIO_MASK 0xe0000000
+#define TXGBE_TX_FLAGS_VLAN_PRIO_SHIFT 29
+#define TXGBE_TX_FLAGS_VLAN_SHIFT 16
+
+#define TXGBE_MAX_RX_DESC_POLL 10
+
+#define TXGBE_MAX_VF_MC_ENTRIES 30
+#define TXGBE_MAX_VF_FUNCTIONS 64
+#define MAX_EMULATION_MAC_ADDRS 16
+#define TXGBE_MAX_PF_MACVLANS 15
+#define TXGBE_VF_DEVICE_ID 0x1000
+
+/* must account for pools assigned to VFs. */
+#define VMDQ_P(p) (p)
+
+
+#define UPDATE_VF_COUNTER_32bit(reg, last_counter, counter) \
+	{ \
+		u32 current_counter = rd32(hw, reg); \
+		if (current_counter < last_counter) \
+			counter += 0x100000000LL; \
+		last_counter = current_counter; \
+		counter &= 0xFFFFFFFF00000000LL; \
+		counter |= current_counter; \
+	}
+
+#define UPDATE_VF_COUNTER_36bit(reg_lsb, reg_msb, last_counter, counter) \
+	{ \
+		u64 current_counter_lsb = rd32(hw, reg_lsb); \
+		u64 current_counter_msb = rd32(hw, reg_msb); \
+		u64 current_counter = (current_counter_msb << 32) | \
+			current_counter_lsb; \
+		if (current_counter < last_counter) \
+			counter += 0x1000000000LL; \
+		last_counter = current_counter; \
+		counter &= 0xFFFFFFF000000000LL; \
+		counter |= current_counter; \
+	}
+
+struct vf_stats {
+	u64 gprc;
+	u64 gorc;
+	u64 gptc;
+	u64 gotc;
+	u64 mprc;
+};
+
+struct vf_data_storage {
+	struct pci_dev *vfdev;
+	u8 __iomem *b4_addr;
+	u32 b4_buf[16];
+	unsigned char vf_mac_addresses[ETH_ALEN];
+	u16 vf_mc_hashes[TXGBE_MAX_VF_MC_ENTRIES];
+	u16 num_vf_mc_hashes;
+	u16 default_vf_vlan_id;
+	u16 vlans_enabled;
+	bool clear_to_send;
+	struct vf_stats vfstats;
+	struct vf_stats last_vfstats;
+	struct vf_stats saved_rst_vfstats;
+	bool pf_set_mac;
+	u16 pf_vlan; /* When set, guest VLAN config not allowed. */
+	u16 pf_qos;
+	u16 min_tx_rate;
+	u16 max_tx_rate;
+	u16 vlan_count;
+	u8 spoofchk_enabled;
+	u8 trusted;
+	int xcast_mode;
+	unsigned int vf_api;
+};
+
+struct vf_macvlans {
+	struct list_head l;
+	int vf;
+	bool free;
+	bool is_macvlan;
+	u8 vf_macvlan[ETH_ALEN];
+};
+
+#define TXGBE_MAX_TXD_PWR 14
+#define TXGBE_MAX_DATA_PER_TXD (1 << TXGBE_MAX_TXD_PWR)
+
+/* Tx Descriptors needed, worst case */
+#define TXD_USE_COUNT(S) DIV_ROUND_UP((S), TXGBE_MAX_DATA_PER_TXD)
+#ifndef MAX_SKB_FRAGS
+#define DESC_NEEDED 4
+#elif (MAX_SKB_FRAGS < 16)
+#define DESC_NEEDED ((MAX_SKB_FRAGS * TXD_USE_COUNT(PAGE_SIZE)) + 4)
+#else
+#define DESC_NEEDED (MAX_SKB_FRAGS + 4)
+#endif
+
+/* wrapper around a pointer to a socket buffer,
+ * so a DMA handle can be stored along with the buffer */
+struct txgbe_tx_buffer {
+	union txgbe_tx_desc *next_to_watch;
+	unsigned long time_stamp;
+	struct sk_buff *skb;
+	unsigned int bytecount;
+	unsigned short gso_segs;
+	__be16 protocol;
+	DEFINE_DMA_UNMAP_ADDR(dma);
+	DEFINE_DMA_UNMAP_LEN(len);
+	u32 tx_flags;
+};
+
+struct txgbe_rx_buffer {
+	struct sk_buff *skb;
+	dma_addr_t dma;
+	dma_addr_t page_dma;
+	struct page *page;
+	unsigned int page_offset;
+};
+
+struct txgbe_queue_stats {
+	u64 packets;
+	u64 bytes;
+#ifdef BP_EXTENDED_STATS
+	u64 yields;
+	u64 misses;
+	u64 cleaned;
+#endif /* BP_EXTENDED_STATS */
+};
+
+struct txgbe_tx_queue_stats {
+	u64 restart_queue;
+	u64 tx_busy;
+	u64 tx_done_old;
+};
+
+struct txgbe_rx_queue_stats {
+	u64 rsc_count;
+	u64 rsc_flush;
+	u64 non_eop_descs;
+	u64 alloc_rx_page_failed;
+	u64 alloc_rx_buff_failed;
+	u64 csum_good_cnt;
+	u64 csum_err;
+};
+
+#define TXGBE_TS_HDR_LEN 8
+enum txgbe_ring_state_t {
+	__TXGBE_RX_3K_BUFFER,
+	__TXGBE_RX_BUILD_SKB_ENABLED,
+	__TXGBE_TX_FDIR_INIT_DONE,
+	__TXGBE_TX_XPS_INIT_DONE,
+	__TXGBE_TX_DETECT_HANG,
+	__TXGBE_HANG_CHECK_ARMED,
+	__TXGBE_RX_HS_ENABLED,
+	__TXGBE_RX_RSC_ENABLED,
+};
+
+struct txgbe_fwd_adapter {
+	unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)];
+	struct net_device *vdev;
+	struct txgbe_adapter *adapter;
+	unsigned int tx_base_queue;
+	unsigned int rx_base_queue;
+	int index; /* pool index on PF */
+};
+
+#define ring_uses_build_skb(ring) \
+	test_bit(__TXGBE_RX_BUILD_SKB_ENABLED, &(ring)->state)
+
+#define ring_is_hs_enabled(ring) \
+	test_bit(__TXGBE_RX_HS_ENABLED, &(ring)->state)
+#define set_ring_hs_enabled(ring) \
+	set_bit(__TXGBE_RX_HS_ENABLED, &(ring)->state)
+#define clear_ring_hs_enabled(ring) \
+	clear_bit(__TXGBE_RX_HS_ENABLED, &(ring)->state)
+#define check_for_tx_hang(ring) \
+	test_bit(__TXGBE_TX_DETECT_HANG, &(ring)->state)
+#define set_check_for_tx_hang(ring) \
+	set_bit(__TXGBE_TX_DETECT_HANG, &(ring)->state)
+#define clear_check_for_tx_hang(ring) \
+	clear_bit(__TXGBE_TX_DETECT_HANG, &(ring)->state)
+#define ring_is_rsc_enabled(ring) \
+	test_bit(__TXGBE_RX_RSC_ENABLED, &(ring)->state)
+#define set_ring_rsc_enabled(ring) \
+	set_bit(__TXGBE_RX_RSC_ENABLED, &(ring)->state)
+#define clear_ring_rsc_enabled(ring) \
+	clear_bit(__TXGBE_RX_RSC_ENABLED, &(ring)->state)
+
+struct txgbe_ring {
+	struct txgbe_ring *next; /* pointer to next ring in q_vector */
+	struct txgbe_q_vector *q_vector; /* backpointer to host q_vector */
+	struct net_device *netdev; /* netdev ring belongs to */
+	struct device *dev; /* device for DMA mapping */
+	struct txgbe_fwd_adapter *accel;
+	void *desc; /* descriptor ring memory */
+	union {
+		struct txgbe_tx_buffer *tx_buffer_info;
+		struct txgbe_rx_buffer *rx_buffer_info;
+	};
+	unsigned long state;
+	u8 __iomem *tail;
+	dma_addr_t dma; /* phys. address of descriptor ring */
+	unsigned int size; /* length in bytes */
+
+	u16 count; /* amount of descriptors */
+
+	u8 queue_index; /* needed for multiqueue queue management */
+	u8 reg_idx; /* holds the special value that gets
+		     * the hardware register offset
+		     * associated with this ring, which is
+		     * different for DCB and RSS modes
+		     */
+	u16 next_to_use;
+	u16 next_to_clean;
+	unsigned long last_rx_timestamp;
+	u16 rx_buf_len;
+	union {
+		u16 next_to_alloc;
+		struct {
+			u8 atr_sample_rate;
+			u8 atr_count;
+		};
+	};
+
+	u8 dcb_tc;
+	struct txgbe_queue_stats stats;
+	struct u64_stats_sync syncp;
+
+	union {
+		struct txgbe_tx_queue_stats tx_stats;
+		struct txgbe_rx_queue_stats rx_stats;
+	};
+} ____cacheline_internodealigned_in_smp;
+
+enum txgbe_ring_f_enum {
+	RING_F_NONE = 0,
+	RING_F_VMDQ, /* SR-IOV uses the same ring feature */
+	RING_F_RSS,
+	RING_F_FDIR,
+	RING_F_ARRAY_SIZE /* must be last in enum set */
+};
+
+#define TXGBE_MAX_DCB_INDICES 8
+#define TXGBE_MAX_RSS_INDICES 63
+#define TXGBE_MAX_VMDQ_INDICES 64
+#define TXGBE_MAX_FDIR_INDICES 63
+
+#define MAX_RX_QUEUES (TXGBE_MAX_FDIR_INDICES + 1)
+#define MAX_TX_QUEUES (TXGBE_MAX_FDIR_INDICES + 1)
+
+#define TXGBE_MAX_L2A_QUEUES 4
+#define TXGBE_BAD_L2A_QUEUE 3
+
+#define TXGBE_MAX_MACVLANS 32
+#define TXGBE_MAX_DCBMACVLANS 8
+
+struct txgbe_ring_feature {
+	u16 limit; /* upper limit on feature indices */
+	u16 indices; /* current value of indices */
+	u16 mask; /* Mask used for feature to ring mapping */
+	u16 offset; /* offset to start of feature */
+};
+
+#define TXGBE_VMDQ_8Q_MASK 0x78
+#define TXGBE_VMDQ_4Q_MASK 0x7C
+#define TXGBE_VMDQ_2Q_MASK 0x7E
+
+/*
+ * FCoE requires that all Rx buffers be over 2200 bytes in length. Since
+ * this is twice the size of a half page we need to double the page order
+ * for FCoE enabled Rx queues.
+ */
+static inline unsigned int txgbe_rx_bufsz(struct txgbe_ring __maybe_unused *ring)
+{
+#if MAX_SKB_FRAGS < 8
+	return ALIGN(TXGBE_MAX_RXBUFFER / MAX_SKB_FRAGS, 1024);
+#else
+	return TXGBE_RXBUFFER_2K;
+#endif
+}
+
+static inline unsigned int txgbe_rx_pg_order(struct txgbe_ring __maybe_unused *ring)
+{
+	return 0;
+}
+#define txgbe_rx_pg_size(_ring) (PAGE_SIZE << txgbe_rx_pg_order(_ring))
+
+struct txgbe_ring_container {
+	struct txgbe_ring *ring; /* pointer to linked list of rings */
+	unsigned int total_bytes; /* total bytes processed this int */
+	unsigned int total_packets; /* total packets processed this int */
+	u16 work_limit; /* total work allowed per interrupt */
+	u8 count; /* total number of rings in vector */
+	u8 itr; /* current ITR setting for ring */
+};
+
+/* iterator for handling rings in ring container */
+#define txgbe_for_each_ring(pos, head) \
+	for (pos = (head).ring; pos != NULL; pos = pos->next)
+
+#define MAX_RX_PACKET_BUFFERS ((adapter->flags & TXGBE_FLAG_DCB_ENABLED) \
+			       ? 8 : 1)
+#define MAX_TX_PACKET_BUFFERS MAX_RX_PACKET_BUFFERS
+
+/* MAX_MSIX_Q_VECTORS of these are allocated,
+ * but we only use one per queue-specific vector.
+ */
+struct txgbe_q_vector {
+	struct txgbe_adapter *adapter;
+	int cpu; /* CPU for DCA */
+	u16 v_idx; /* index of q_vector within array, also used for
+		    * finding the bit in EICR and friends that
+		    * represents the vector for this ring */
+	u16 itr; /* Interrupt throttle rate written to EITR */
+	struct txgbe_ring_container rx, tx;
+
+	struct napi_struct napi;
+	cpumask_t affinity_mask;
+	int numa_node;
+	struct rcu_head rcu; /* to avoid race with update stats on free */
+	char name[IFNAMSIZ + 17];
+	bool netpoll_rx;
+
+	/* for dynamic allocation of rings associated with this q_vector */
+	struct txgbe_ring ring[0] ____cacheline_internodealigned_in_smp;
+};
+
+/*
+ * microsecond values for various ITR rates shifted by 2 to fit itr register
+ * with the first 3 bits reserved 0
+ */
+#define TXGBE_MIN_RSC_ITR 24
+#define TXGBE_100K_ITR 40
+#define TXGBE_20K_ITR 200
+#define TXGBE_16K_ITR 248
+#define TXGBE_12K_ITR 336
+
+/* txgbe_test_staterr - tests bits in Rx descriptor status and error fields */
+static inline __le32 txgbe_test_staterr(union txgbe_rx_desc *rx_desc,
+					const u32 stat_err_bits)
+{
+	return rx_desc->wb.upper.status_error & cpu_to_le32(stat_err_bits);
+}
+
+/* txgbe_desc_unused - calculate if we have unused descriptors */
+static inline u16 txgbe_desc_unused(struct txgbe_ring *ring)
+{
+	u16 ntc = ring->next_to_clean;
+	u16 ntu = ring->next_to_use;
+
+	return ((ntc > ntu) ? 0 : ring->count) + ntc - ntu - 1;
+}
+
+#define TXGBE_RX_DESC(R, i) \
+	(&(((union txgbe_rx_desc *)((R)->desc))[i]))
+#define TXGBE_TX_DESC(R, i) \
+	(&(((union txgbe_tx_desc *)((R)->desc))[i]))
+#define TXGBE_TX_CTXTDESC(R, i) \
+	(&(((struct txgbe_tx_context_desc *)((R)->desc))[i]))
+
+#define TXGBE_MAX_JUMBO_FRAME_SIZE 9432 /* max payload 9414 */
+
+#define TCP_TIMER_VECTOR 0
+#define OTHER_VECTOR 1
+#define NON_Q_VECTORS (OTHER_VECTOR + TCP_TIMER_VECTOR)
+
+#define TXGBE_MAX_MSIX_Q_VECTORS_SAPPHIRE 64
+
+struct txgbe_mac_addr {
+	u8 addr[ETH_ALEN];
+	u16 state; /* bitmask */
+	u64 pools;
+};
+
+#define TXGBE_MAC_STATE_DEFAULT 0x1
+#define TXGBE_MAC_STATE_MODIFIED 0x2
+#define TXGBE_MAC_STATE_IN_USE 0x4
+
+/*
+ * Only for array allocations in our adapter struct.
+ * we can actually assign 64 queue vectors based on our extended-extended
+ * interrupt registers.
+ */
+#define MAX_MSIX_Q_VECTORS TXGBE_MAX_MSIX_Q_VECTORS_SAPPHIRE
+#define MAX_MSIX_COUNT TXGBE_MAX_MSIX_VECTORS_SAPPHIRE
+
+#define MIN_MSIX_Q_VECTORS 1
+#define MIN_MSIX_COUNT (MIN_MSIX_Q_VECTORS + NON_Q_VECTORS)
+
+/* default to trying for four seconds */
+#define TXGBE_TRY_LINK_TIMEOUT (4 * HZ)
+#define TXGBE_SFP_POLL_JIFFIES (2 * HZ) /* SFP poll every 2 seconds */
+
+/**
+ * txgbe_adapter.flag
+ **/
+#define TXGBE_FLAG_MSI_CAPABLE (u32)(1 << 0)
+#define TXGBE_FLAG_MSI_ENABLED (u32)(1 << 1)
+#define TXGBE_FLAG_MSIX_CAPABLE (u32)(1 << 2)
+#define TXGBE_FLAG_MSIX_ENABLED (u32)(1 << 3)
+#define TXGBE_FLAG_LLI_PUSH (u32)(1 << 4)
+
+#define TXGBE_FLAG_TPH_ENABLED (u32)(1 << 6)
+#define TXGBE_FLAG_TPH_CAPABLE (u32)(1 << 7)
+#define TXGBE_FLAG_TPH_ENABLED_DATA (u32)(1 << 8)
+
+#define TXGBE_FLAG_MQ_CAPABLE (u32)(1 << 9)
+#define TXGBE_FLAG_DCB_ENABLED (u32)(1 << 10)
+#define TXGBE_FLAG_VMDQ_ENABLED (u32)(1 << 11)
+#define TXGBE_FLAG_FAN_FAIL_CAPABLE (u32)(1 << 12)
+#define TXGBE_FLAG_NEED_LINK_UPDATE (u32)(1 << 13)
+#define TXGBE_FLAG_NEED_LINK_CONFIG (u32)(1 << 14)
+#define TXGBE_FLAG_FDIR_HASH_CAPABLE (u32)(1 << 15)
+#define TXGBE_FLAG_FDIR_PERFECT_CAPABLE (u32)(1 << 16)
+#define TXGBE_FLAG_SRIOV_CAPABLE (u32)(1 << 19)
+#define TXGBE_FLAG_SRIOV_ENABLED (u32)(1 << 20)
+#define TXGBE_FLAG_SRIOV_REPLICATION_ENABLE (u32)(1 << 21)
+#define TXGBE_FLAG_SRIOV_L2SWITCH_ENABLE (u32)(1 << 22)
+#define TXGBE_FLAG_SRIOV_VEPA_BRIDGE_MODE (u32)(1 << 23)
+#define TXGBE_FLAG_RX_HWTSTAMP_ENABLED (u32)(1 << 24)
+#define TXGBE_FLAG_VXLAN_OFFLOAD_CAPABLE (u32)(1 << 25)
+#define TXGBE_FLAG_VXLAN_OFFLOAD_ENABLE (u32)(1 << 26)
+#define TXGBE_FLAG_RX_HWTSTAMP_IN_REGISTER (u32)(1 << 27)
+#define TXGBE_FLAG_NEED_ETH_PHY_RESET (u32)(1 << 28)
+#define TXGBE_FLAG_RX_HS_ENABLED (u32)(1 << 30)
+#define TXGBE_FLAG_LINKSEC_ENABLED (u32)(1 << 31)
+#define TXGBE_FLAG_IPSEC_ENABLED (u32)(1 << 5)
+
+/* preset defaults */
+#define TXGBE_FLAGS_SP_INIT (TXGBE_FLAG_MSI_CAPABLE \
+			     | TXGBE_FLAG_MSIX_CAPABLE \
+			     | TXGBE_FLAG_MQ_CAPABLE \
+			     | TXGBE_FLAG_SRIOV_CAPABLE)
+
+/**
+ * txgbe_adapter.flag2
+ **/
+#define TXGBE_FLAG2_RSC_CAPABLE (1U << 0)
+#define TXGBE_FLAG2_RSC_ENABLED (1U << 1)
+#define TXGBE_FLAG2_TEMP_SENSOR_CAPABLE (1U << 3)
+#define TXGBE_FLAG2_TEMP_SENSOR_EVENT (1U << 4)
+#define TXGBE_FLAG2_SEARCH_FOR_SFP (1U << 5)
+#define TXGBE_FLAG2_SFP_NEEDS_RESET (1U << 6)
+#define TXGBE_FLAG2_PF_RESET_REQUESTED (1U << 7)
+#define TXGBE_FLAG2_FDIR_REQUIRES_REINIT (1U << 8)
+#define TXGBE_FLAG2_RSS_FIELD_IPV4_UDP (1U << 9)
+#define TXGBE_FLAG2_RSS_FIELD_IPV6_UDP (1U << 10)
+#define TXGBE_FLAG2_RSS_ENABLED (1U << 12)
+#define TXGBE_FLAG2_PTP_PPS_ENABLED (1U << 11)
+#define TXGBE_FLAG2_EEE_CAPABLE (1U << 14)
+#define TXGBE_FLAG2_EEE_ENABLED (1U << 15)
+#define TXGBE_FLAG2_VXLAN_REREG_NEEDED (1U << 16)
+#define TXGBE_FLAG2_DEV_RESET_REQUESTED (1U << 18)
+#define TXGBE_FLAG2_RESET_INTR_RECEIVED (1U << 19)
+#define TXGBE_FLAG2_GLOBAL_RESET_REQUESTED (1U << 20)
+#define TXGBE_FLAG2_CLOUD_SWITCH_ENABLED (1U << 21)
+#define TXGBE_FLAG2_MNG_REG_ACCESS_DISABLED (1U << 22)
+#define KR (1U << 23)
+#define TXGBE_FLAG2_KR_TRAINING (1U << 24)
+#define TXGBE_FLAG2_KR_AUTO (1U << 25)
+#define TXGBE_FLAG2_LINK_DOWN (1U << 26)
+#define TXGBE_FLAG2_KR_PRO_DOWN (1U << 27)
+#define TXGBE_FLAG2_KR_PRO_REINIT (1U << 28)
+#define TXGBE_FLAG2_PCIE_NEED_RECOVER (1U << 31)
+
+
+#define TXGBE_SET_FLAG(_input, _flag, _result) \
+	((_flag <= _result) ? \
+	 ((u32)(_input & _flag) * (_result / _flag)) : \
+	 ((u32)(_input & _flag) / (_flag / _result)))
+
+enum txgbe_isb_idx {
+	TXGBE_ISB_HEADER,
+	TXGBE_ISB_MISC,
+	TXGBE_ISB_VEC0,
+	TXGBE_ISB_VEC1,
+	TXGBE_ISB_MAX
+};
+
+/* board specific private data structure */
+struct txgbe_adapter {
+	unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)];
+	/* OS defined structs */
+	struct net_device *netdev;
+	struct pci_dev *pdev;
+
+	unsigned long state;
+
+	/* Some features need tri-state capability,
+	 * thus the additional *_CAPABLE flags.
+	 */
+	u32 flags;
+	u32 flags2;
+	u32 vf_mode;
+	u32 backplane_an;
+	u32 an73;
+	u32 an37;
+	u32 ffe_main;
+	u32 ffe_pre;
+	u32 ffe_post;
+	u32 ffe_set;
+	u32 backplane_mode;
+	u32 backplane_auto;
+
+	bool cloud_mode;
+
+	/* Tx fast path data */
+	int num_tx_queues;
+	u16 tx_itr_setting;
+	u16 tx_work_limit;
+
+	/* Rx fast path data */
+	int num_rx_queues;
+	u16 rx_itr_setting;
+	u16 rx_work_limit;
+
+	unsigned int num_vmdqs; /* does not include pools assigned to VFs */
+	unsigned int queues_per_pool;
+
+	/* TX */
+	struct txgbe_ring *tx_ring[MAX_TX_QUEUES] ____cacheline_aligned_in_smp;
+
+	u64 restart_queue;
+	u64 lsc_int;
+	u32 tx_timeout_count;
+
+	/* RX */
+	struct txgbe_ring *rx_ring[MAX_RX_QUEUES];
+	u64 hw_csum_rx_error;
+	u64 hw_csum_rx_good;
+	u64 hw_rx_no_dma_resources;
+	u64 rsc_total_count;
+	u64 rsc_total_flush;
+	u64 non_eop_descs;
+	u32 alloc_rx_page_failed;
+	u32 alloc_rx_buff_failed;
+
+	struct txgbe_q_vector *q_vector[MAX_MSIX_Q_VECTORS];
+
+	u8 dcb_set_bitmap;
+	u8 dcbx_cap;
+	enum txgbe_fc_mode last_lfc_mode;
+
+	int num_q_vectors; /* current number of q_vectors for device */
+	int max_q_vectors; /* upper limit of q_vectors for device */
+	struct txgbe_ring_feature ring_feature[RING_F_ARRAY_SIZE];
+	struct msix_entry *msix_entries;
+
+	u64 test_icr;
+	struct txgbe_ring test_tx_ring;
+	struct txgbe_ring test_rx_ring;
+
+	/* structs defined in txgbe_hw.h */
+	struct txgbe_hw hw;
+	u16 msg_enable;
+	struct txgbe_hw_stats stats;
+	u32 lli_port;
+	u32 lli_size;
+	u32 lli_etype;
+	u32 lli_vlan_pri;
+
+	u32 *config_space;
+	u64 tx_busy;
+	unsigned int tx_ring_count;
+	unsigned int rx_ring_count;
+
+	u32 link_speed;
+	bool link_up;
+	unsigned long sfp_poll_time;
+	unsigned long link_check_timeout;
+
+	struct timer_list service_timer;
+	struct work_struct service_task;
+	struct hlist_head fdir_filter_list;
+	unsigned long fdir_overflow; /* number of times ATR was backed off */
+	union txgbe_atr_input fdir_mask;
+	int fdir_filter_count;
+	u32 fdir_pballoc;
+	u32 atr_sample_rate;
+	spinlock_t fdir_perfect_lock;
+
+	u8 __iomem *io_addr; /* Mainly for iounmap use */
+	u32 wol;
+
+	u16 bd_number;
+	u16 bridge_mode;
+
+	char eeprom_id[32];
+	u16 eeprom_cap;
+	bool netdev_registered;
+	u32 interrupt_event;
+	u32 led_reg;
+
+	struct ptp_clock *ptp_clock;
+	struct ptp_clock_info ptp_caps;
+	struct work_struct ptp_tx_work;
+	struct sk_buff *ptp_tx_skb;
+	struct hwtstamp_config tstamp_config;
+	unsigned long ptp_tx_start;
+	unsigned long last_overflow_check;
+	unsigned long last_rx_ptp_check;
+	spinlock_t tmreg_lock;
+	struct cyclecounter hw_cc;
+	struct timecounter hw_tc;
+	u32 base_incval;
+	u32 tx_hwtstamp_timeouts;
+	u32 tx_hwtstamp_skipped;
+	u32 rx_hwtstamp_cleared;
+	void (*ptp_setup_sdp) (struct txgbe_adapter *);
+
+	DECLARE_BITMAP(active_vfs, TXGBE_MAX_VF_FUNCTIONS);
+	unsigned int num_vfs;
+	struct vf_data_storage *vfinfo;
+	struct vf_macvlans vf_mvs;
+	struct vf_macvlans *mv_list;
+	struct txgbe_mac_addr *mac_table;
+
+	__le16 vxlan_port;
+	__le16 geneve_port;
+
+	u8 default_up;
+
+	unsigned long fwd_bitmask; /* bitmask indicating in use pools */
+	unsigned long tx_timeout_last_recovery;
+	u32 tx_timeout_recovery_level;
+
+#define TXGBE_MAX_RETA_ENTRIES 128
+	u8 rss_indir_tbl[TXGBE_MAX_RETA_ENTRIES];
+#define TXGBE_RSS_KEY_SIZE 40
+	u32 rss_key[TXGBE_RSS_KEY_SIZE / sizeof(u32)];
+
+	void *ipsec;
+
+	/* misc interrupt status block */
+	dma_addr_t isb_dma;
+	u32 *isb_mem;
+	u32 isb_tag[TXGBE_ISB_MAX];
+};
+
+static inline u32 txgbe_misc_isb(struct txgbe_adapter *adapter,
+				 enum txgbe_isb_idx idx)
+{
+	u32 cur_tag = 0;
+	u32 cur_diff = 0;
+
+	cur_tag = adapter->isb_mem[TXGBE_ISB_HEADER];
+	cur_diff = cur_tag - adapter->isb_tag[idx];
+
+	adapter->isb_tag[idx] = cur_tag;
+
+	return adapter->isb_mem[idx];
+}
+
+static inline u8 txgbe_max_rss_indices(struct txgbe_adapter *adapter)
+{
+	return TXGBE_MAX_RSS_INDICES;
+}
+
+struct txgbe_fdir_filter {
+	struct hlist_node fdir_node;
+	union txgbe_atr_input filter;
+	u16 sw_idx;
+	u16 action;
+};
+
+enum txgbe_state_t {
+	__TXGBE_TESTING,
+	__TXGBE_RESETTING,
+	__TXGBE_DOWN,
+	__TXGBE_HANGING,
+	__TXGBE_DISABLED,
+	__TXGBE_REMOVING,
+	__TXGBE_SERVICE_SCHED,
+	__TXGBE_SERVICE_INITED,
+	__TXGBE_IN_SFP_INIT,
+	__TXGBE_PTP_RUNNING,
+	__TXGBE_PTP_TX_IN_PROGRESS,
+};
+
+struct txgbe_cb {
+	dma_addr_t dma;
+	u16 append_cnt; /* number of skb's appended */
+	bool page_released;
+	bool dma_released;
+};
+#define TXGBE_CB(skb) ((struct txgbe_cb *)(skb)->cb)
+
+/* ESX txgbe CIM IOCTL definition */
+
+extern struct dcbnl_rtnl_ops dcbnl_ops;
+int txgbe_copy_dcb_cfg(struct txgbe_adapter *adapter, int tc_max);
+
+u8 txgbe_dcb_txq_to_tc(struct txgbe_adapter *adapter, u8 index);
+
+/* needed by txgbe_main.c */
+int txgbe_validate_mac_addr(u8 *mc_addr);
+void txgbe_check_options(struct txgbe_adapter *adapter);
+void txgbe_assign_netdev_ops(struct net_device *netdev);
+
+/* needed by txgbe_ethtool.c */
+extern char txgbe_driver_name[];
+extern const char txgbe_driver_version[];
+
+void txgbe_irq_disable(struct txgbe_adapter *adapter);
+void txgbe_irq_enable(struct txgbe_adapter *adapter,
bool queues, bool flush); +int txgbe_open(struct net_device *netdev); +int txgbe_close(struct net_device *netdev); +void txgbe_up(struct txgbe_adapter *adapter); +void txgbe_down(struct txgbe_adapter *adapter); +void txgbe_reinit_locked(struct txgbe_adapter *adapter); +void txgbe_reset(struct txgbe_adapter *adapter); +void txgbe_set_ethtool_ops(struct net_device *netdev); +int txgbe_setup_rx_resources(struct txgbe_ring *); +int txgbe_setup_tx_resources(struct txgbe_ring *); +void txgbe_free_rx_resources(struct txgbe_ring *); +void txgbe_free_tx_resources(struct txgbe_ring *); +void txgbe_configure_rx_ring(struct txgbe_adapter *, + struct txgbe_ring *); +void txgbe_configure_tx_ring(struct txgbe_adapter *, + struct txgbe_ring *); +void txgbe_update_stats(struct txgbe_adapter *adapter); +int txgbe_init_interrupt_scheme(struct txgbe_adapter *adapter); +void txgbe_reset_interrupt_capability(struct txgbe_adapter *adapter); +void txgbe_set_interrupt_capability(struct txgbe_adapter *adapter); +void txgbe_clear_interrupt_scheme(struct txgbe_adapter *adapter); +bool txgbe_is_txgbe(struct pci_dev *pcidev); +netdev_tx_t txgbe_xmit_frame_ring(struct sk_buff *, + struct txgbe_adapter *, + struct txgbe_ring *); +void txgbe_unmap_and_free_tx_resource(struct txgbe_ring *, + struct txgbe_tx_buffer *); +void txgbe_alloc_rx_buffers(struct txgbe_ring *, u16); +void txgbe_configure_rscctl(struct txgbe_adapter *adapter, + struct txgbe_ring *); +void txgbe_clear_rscctl(struct txgbe_adapter *adapter, + struct txgbe_ring *); +void txgbe_clear_vxlan_port(struct txgbe_adapter *); +void txgbe_set_rx_mode(struct net_device *netdev); +int txgbe_write_mc_addr_list(struct net_device *netdev); +int txgbe_setup_tc(struct net_device *dev, u8 tc); +void txgbe_tx_ctxtdesc(struct txgbe_ring *, u32, u32, u32, u32); +void txgbe_do_reset(struct net_device *netdev); +void txgbe_write_eitr(struct txgbe_q_vector *q_vector); +int txgbe_poll(struct napi_struct *napi, int budget); +void 
txgbe_disable_rx_queue(struct txgbe_adapter *adapter, + struct txgbe_ring *); +void txgbe_vlan_strip_enable(struct txgbe_adapter *adapter); +void txgbe_vlan_strip_disable(struct txgbe_adapter *adapter); + +void txgbe_dump(struct txgbe_adapter *adapter); + +static inline struct netdev_queue *txring_txq(const struct txgbe_ring *ring) +{ + return netdev_get_tx_queue(ring->netdev, ring->queue_index); +} + +int txgbe_wol_supported(struct txgbe_adapter *adapter); +int txgbe_get_settings(struct net_device *netdev, + struct ethtool_cmd *ecmd); +int txgbe_write_uc_addr_list(struct net_device *netdev, int pool); +void txgbe_full_sync_mac_table(struct txgbe_adapter *adapter); +int txgbe_add_mac_filter(struct txgbe_adapter *adapter, + u8 *addr, u16 pool); +int txgbe_del_mac_filter(struct txgbe_adapter *adapter, + u8 *addr, u16 pool); +int txgbe_available_rars(struct txgbe_adapter *adapter); +void txgbe_vlan_mode(struct net_device *, u32); + +void txgbe_ptp_init(struct txgbe_adapter *adapter); +void txgbe_ptp_stop(struct txgbe_adapter *adapter); +void txgbe_ptp_suspend(struct txgbe_adapter *adapter); +void txgbe_ptp_overflow_check(struct txgbe_adapter *adapter); +void txgbe_ptp_rx_hang(struct txgbe_adapter *adapter); +void txgbe_ptp_rx_hwtstamp(struct txgbe_adapter *adapter, struct sk_buff *skb); +int txgbe_ptp_set_ts_config(struct txgbe_adapter *adapter, struct ifreq *ifr); +int txgbe_ptp_get_ts_config(struct txgbe_adapter *adapter, struct ifreq *ifr); +void txgbe_ptp_start_cyclecounter(struct txgbe_adapter *adapter); +void txgbe_ptp_reset(struct txgbe_adapter *adapter); +void txgbe_ptp_check_pps_event(struct txgbe_adapter *adapter); + +void txgbe_set_rx_drop_en(struct txgbe_adapter *adapter); + +u32 txgbe_rss_indir_tbl_entries(struct txgbe_adapter *adapter); +void txgbe_store_reta(struct txgbe_adapter *adapter); + +/** + * Interrupt masking operations. Each bit in PX_ICn corresponds to an interrupt.
+ * disable an interrupt by writing to PX_IMS with the corresponding bit=1 + * enable an interrupt by writing to PX_IMC with the corresponding bit=1 + * trigger an interrupt by writing to PX_ICS with the corresponding bit=1 + **/ +#define TXGBE_INTR_ALL (~0ULL) +#define TXGBE_INTR_MISC(A) (1ULL << (A)->num_q_vectors) +#define TXGBE_INTR_QALL(A) (TXGBE_INTR_MISC(A) - 1) +#define TXGBE_INTR_Q(i) (1ULL << (i)) +static inline void txgbe_intr_enable(struct txgbe_hw *hw, u64 qmask) +{ + u32 mask; + + mask = (qmask & 0xFFFFFFFF); + if (mask) + wr32(hw, TXGBE_PX_IMC(0), mask); + mask = (qmask >> 32); + if (mask) + wr32(hw, TXGBE_PX_IMC(1), mask); + + /* skip the flush */ +} + +static inline void txgbe_intr_disable(struct txgbe_hw *hw, u64 qmask) +{ + u32 mask; + + mask = (qmask & 0xFFFFFFFF); + if (mask) + wr32(hw, TXGBE_PX_IMS(0), mask); + mask = (qmask >> 32); + if (mask) + wr32(hw, TXGBE_PX_IMS(1), mask); + + /* skip the flush */ +} + +static inline void txgbe_intr_trigger(struct txgbe_hw *hw, u64 qmask) +{ + u32 mask; + + mask = (qmask & 0xFFFFFFFF); + if (mask) + wr32(hw, TXGBE_PX_ICS(0), mask); + mask = (qmask >> 32); + if (mask) + wr32(hw, TXGBE_PX_ICS(1), mask); + + /* skip the flush */ +} + +#define TXGBE_RING_SIZE(R) ((R)->count < TXGBE_MAX_TXD ? (R)->count / 128 : 0) + +/* move from txgbe_osdep.h */ +#define TXGBE_CPU_TO_BE16(_x) cpu_to_be16(_x) +#define TXGBE_BE16_TO_CPU(_x) be16_to_cpu(_x) +#define TXGBE_CPU_TO_BE32(_x) cpu_to_be32(_x) +#define TXGBE_BE32_TO_CPU(_x) be32_to_cpu(_x) + +#define msec_delay(_x) msleep(_x) + +#define usec_delay(_x) udelay(_x) + +#define STATIC static + +#define TXGBE_NAME "txgbe" + +#define DPRINTK(nlevel, klevel, fmt, args...) \ + ((void)((NETIF_MSG_##nlevel & adapter->msg_enable) && \ + printk(KERN_##klevel TXGBE_NAME ": %s: %s: " fmt, \ + adapter->netdev->name, \ + __func__, ## args))) + +#ifndef _WIN32 +#define txgbe_emerg(fmt, ...) printk(KERN_EMERG fmt, ## __VA_ARGS__) +#define txgbe_alert(fmt, ...)
printk(KERN_ALERT fmt, ## __VA_ARGS__) +#define txgbe_crit(fmt, ...) printk(KERN_CRIT fmt, ## __VA_ARGS__) +#define txgbe_error(fmt, ...) printk(KERN_ERR fmt, ## __VA_ARGS__) +#define txgbe_warn(fmt, ...) printk(KERN_WARNING fmt, ## __VA_ARGS__) +#define txgbe_notice(fmt, ...) printk(KERN_NOTICE fmt, ## __VA_ARGS__) +#define txgbe_info(fmt, ...) printk(KERN_INFO fmt, ## __VA_ARGS__) +#define txgbe_print(fmt, ...) printk(KERN_DEBUG fmt, ## __VA_ARGS__) +#define txgbe_trace(fmt, ...) printk(KERN_INFO fmt, ## __VA_ARGS__) +#else /* _WIN32 */ +#define txgbe_error(lvl, fmt, ...) \ + DbgPrintEx(DPFLTR_IHVNETWORK_ID, DPFLTR_ERROR_LEVEL, \ + "%s-error: %s@%d, " fmt, \ + "txgbe", __FUNCTION__, __LINE__, ## __VA_ARGS__) +#endif /* !_WIN32 */ + +#ifdef DBG +#ifndef _WIN32 +#define txgbe_debug(fmt, ...) \ + printk(KERN_DEBUG \ + "%s-debug: %s@%d, " fmt, \ + "txgbe", __FUNCTION__, __LINE__, ## __VA_ARGS__) +#else /* _WIN32 */ +#define txgbe_debug(fmt, ...) \ + DbgPrintEx(DPFLTR_IHVNETWORK_ID, DPFLTR_ERROR_LEVEL, \ + "%s-debug: %s@%d, " fmt, \ + "txgbe", __FUNCTION__, __LINE__, ## __VA_ARGS__) +#endif /* _WIN32 */ +#else /* DBG */ +#define txgbe_debug(fmt, ...) do {} while (0) +#endif /* DBG */ + + +#ifdef DBG +#define ASSERT(_x) BUG_ON(!(_x)) +#define DEBUGOUT(S) printk(KERN_DEBUG S) +#define DEBUGOUT1(S, A...) printk(KERN_DEBUG S, ## A) +#define DEBUGOUT2(S, A...) printk(KERN_DEBUG S, ## A) +#define DEBUGOUT3(S, A...) printk(KERN_DEBUG S, ## A) +#define DEBUGOUT4(S, A...) printk(KERN_DEBUG S, ## A) +#define DEBUGOUT5(S, A...) printk(KERN_DEBUG S, ## A) +#define DEBUGOUT6(S, A...) printk(KERN_DEBUG S, ## A) +#define DEBUGFUNC(fmt, ...) txgbe_debug(fmt, ## __VA_ARGS__) +#else +#define ASSERT(_x) do {} while (0) +#define DEBUGOUT(S) do {} while (0) +#define DEBUGOUT1(S, A...) do {} while (0) +#define DEBUGOUT2(S, A...) do {} while (0) +#define DEBUGOUT3(S, A...) do {} while (0) +#define DEBUGOUT4(S, A...) do {} while (0) +#define DEBUGOUT5(S, A...) 
do {} while (0) +#define DEBUGOUT6(S, A...) do {} while (0) +#define DEBUGFUNC(fmt, ...) do {} while (0) +#endif + + +struct txgbe_msg { + u16 msg_enable; +}; + +__attribute__((unused)) static struct net_device *txgbe_hw_to_netdev(const struct txgbe_hw *hw) +{ + return ((struct txgbe_adapter *)hw->back)->netdev; +} + +__attribute__((unused)) static struct txgbe_msg *txgbe_hw_to_msg(const struct txgbe_hw *hw) +{ + struct txgbe_adapter *adapter = + container_of(hw, struct txgbe_adapter, hw); + return (struct txgbe_msg *)&adapter->msg_enable; +} + +static inline struct device *pci_dev_to_dev(struct pci_dev *pdev) +{ + return &pdev->dev; +} + +#define hw_dbg(hw, format, arg...) \ + netdev_dbg(txgbe_hw_to_netdev(hw), format, ## arg) +#define hw_err(hw, format, arg...) \ + netdev_err(txgbe_hw_to_netdev(hw), format, ## arg) +#define e_dev_info(format, arg...) \ + dev_info(pci_dev_to_dev(adapter->pdev), format, ## arg) +#define e_dev_warn(format, arg...) \ + dev_warn(pci_dev_to_dev(adapter->pdev), format, ## arg) +#define e_dev_err(format, arg...) \ + dev_err(pci_dev_to_dev(adapter->pdev), format, ## arg) +#define e_dev_notice(format, arg...) \ + dev_notice(pci_dev_to_dev(adapter->pdev), format, ## arg) +#define e_dbg(msglvl, format, arg...) \ + netif_dbg(adapter, msglvl, adapter->netdev, format, ## arg) +#define e_info(msglvl, format, arg...) \ + netif_info(adapter, msglvl, adapter->netdev, format, ## arg) +#define e_err(msglvl, format, arg...) \ + netif_err(adapter, msglvl, adapter->netdev, format, ## arg) +#define e_warn(msglvl, format, arg...) \ + netif_warn(adapter, msglvl, adapter->netdev, format, ## arg) +#define e_crit(msglvl, format, arg...) 
\ + netif_crit(adapter, msglvl, adapter->netdev, format, ## arg) + +#define TXGBE_FAILED_READ_CFG_DWORD 0xffffffffU +#define TXGBE_FAILED_READ_CFG_WORD 0xffffU +#define TXGBE_FAILED_READ_CFG_BYTE 0xffU + +extern u32 txgbe_read_reg(struct txgbe_hw *hw, u32 reg, bool quiet); +extern u16 txgbe_read_pci_cfg_word(struct txgbe_hw *hw, u32 reg); +extern void txgbe_write_pci_cfg_word(struct txgbe_hw *hw, u32 reg, u16 value); + +#define TXGBE_R32_Q(h, r) txgbe_read_reg(h, r, true) + +#define TXGBE_EEPROM_GRANT_ATTEMPS 100 +#define TXGBE_HTONL(_i) htonl(_i) +#define TXGBE_NTOHL(_i) ntohl(_i) +#define TXGBE_NTOHS(_i) ntohs(_i) +#define TXGBE_CPU_TO_LE32(_i) cpu_to_le32(_i) +#define TXGBE_LE32_TO_CPUS(_i) le32_to_cpus(_i) + +enum { + TXGBE_ERROR_SOFTWARE, + TXGBE_ERROR_POLLING, + TXGBE_ERROR_INVALID_STATE, + TXGBE_ERROR_UNSUPPORTED, + TXGBE_ERROR_ARGUMENT, + TXGBE_ERROR_CAUTION, +}; + +#define ERROR_REPORT(level, format, arg...) do { \ + switch (level) { \ + case TXGBE_ERROR_SOFTWARE: \ + case TXGBE_ERROR_CAUTION: \ + case TXGBE_ERROR_POLLING: \ + netif_warn(txgbe_hw_to_msg(hw), drv, txgbe_hw_to_netdev(hw), \ + format, ## arg); \ + break; \ + case TXGBE_ERROR_INVALID_STATE: \ + case TXGBE_ERROR_UNSUPPORTED: \ + case TXGBE_ERROR_ARGUMENT: \ + netif_err(txgbe_hw_to_msg(hw), hw, txgbe_hw_to_netdev(hw), \ + format, ## arg); \ + break; \ + default: \ + break; \ + } \ +} while (0) + +#define ERROR_REPORT1 ERROR_REPORT +#define ERROR_REPORT2 ERROR_REPORT +#define ERROR_REPORT3 ERROR_REPORT + +#define UNREFERENCED_XPARAMETER +#define UNREFERENCED_1PARAMETER(_p) do { \ + uninitialized_var(_p); \ +} while (0) +#define UNREFERENCED_2PARAMETER(_p, _q) do { \ + uninitialized_var(_p); \ + uninitialized_var(_q); \ +} while (0) +#define UNREFERENCED_3PARAMETER(_p, _q, _r) do { \ + uninitialized_var(_p); \ + uninitialized_var(_q); \ + uninitialized_var(_r); \ +} while (0) +#define UNREFERENCED_4PARAMETER(_p, _q, _r, _s) do { \ + uninitialized_var(_p); \ + uninitialized_var(_q); \ + 
uninitialized_var(_r); \ + uninitialized_var(_s); \ +} while (0) +#define UNREFERENCED_PARAMETER(_p) UNREFERENCED_1PARAMETER(_p) + +/* end of txgbe_osdep.h */ + +#endif /* _TXGBE_H_ */ diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe_bp.c b/drivers/net/ethernet/netswift/txgbe/txgbe_bp.c new file mode 100644 index 000000000000..68d465da2eee --- /dev/null +++ b/drivers/net/ethernet/netswift/txgbe/txgbe_bp.c @@ -0,0 +1,875 @@ +/* + * WangXun 10 Gigabit PCI Express Linux driver + * Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * The full GNU General Public License is included in this distribution in + * the file called "COPYING". 
+ */ + + +#include "txgbe_bp.h" + +int Handle_bkp_an73_flow(unsigned char byLinkMode, struct txgbe_adapter *adapter); +int WaitBkpAn73XnpDone(struct txgbe_adapter *adapter); +int GetBkpAn73Ability(bkpan73ability *ptBkpAn73Ability, unsigned char byLinkPartner, + struct txgbe_adapter *adapter); +int Get_bkp_an73_ability(bkpan73ability *ptBkpAn73Ability, unsigned char byLinkPartner, + struct txgbe_adapter *adapter); +int ClearBkpAn73Interrupt(unsigned int intIndex, unsigned int intIndexHi, struct txgbe_adapter *adapter); +int CheckBkpAn73Interrupt(unsigned int intIndex, struct txgbe_adapter *adapter); +int Check_bkp_an73_ability(bkpan73ability tBkpAn73Ability, bkpan73ability tLpBkpAn73Ability, + struct txgbe_adapter *adapter); + +void txgbe_bp_close_protect(struct txgbe_adapter *adapter) +{ + adapter->flags2 |= TXGBE_FLAG2_KR_PRO_DOWN; + if (adapter->flags2 & TXGBE_FLAG2_KR_PRO_REINIT) { + msleep(100); + e_dev_info("waiting for KR reinit to finish, flags2 = 0x%x\n", adapter->flags2); + } +} + +int txgbe_bp_mode_setting(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + + /* default to enabling AN73 */ + + adapter->backplane_an = AUTO ? 1 : 0; + adapter->an37 = AUTO ?
1 : 0; + + if (adapter->backplane_mode == TXGBE_BP_M_KR) { + hw->subsystem_device_id = TXGBE_ID_WX1820_KR_KX_KX4; + hw->subsystem_id = TXGBE_ID_WX1820_KR_KX_KX4; + } else if (adapter->backplane_mode == TXGBE_BP_M_KX4) { + hw->subsystem_device_id = TXGBE_ID_WX1820_MAC_XAUI; + hw->subsystem_id = TXGBE_ID_WX1820_MAC_XAUI; + } else if (adapter->backplane_mode == TXGBE_BP_M_KX) { + hw->subsystem_device_id = TXGBE_ID_WX1820_MAC_SGMII; + hw->subsystem_id = TXGBE_ID_WX1820_MAC_SGMII; + } else if (adapter->backplane_mode == TXGBE_BP_M_SFI) { + hw->subsystem_device_id = TXGBE_ID_WX1820_SFP; + hw->subsystem_id = TXGBE_ID_WX1820_SFP; + } + + if (adapter->backplane_auto == TXGBE_BP_M_AUTO) { + adapter->backplane_an = 1; + adapter->an37 = 1; + } else if (adapter->backplane_auto == TXGBE_BP_M_NAUTO) { + adapter->backplane_an = 0; + adapter->an37 = 0; + } + + if (adapter->ffe_set == TXGBE_BP_M_KR || + adapter->ffe_set == TXGBE_BP_M_KX4 || + adapter->ffe_set == TXGBE_BP_M_KX || + adapter->ffe_set == TXGBE_BP_M_SFI) { + goto out; + } + + if (KR_SET == 1) { + adapter->ffe_main = KR_MAIN; + adapter->ffe_pre = KR_PRE; + adapter->ffe_post = KR_POST; + } else if (KX4_SET == 1) { + adapter->ffe_main = KX4_MAIN; + adapter->ffe_pre = KX4_PRE; + adapter->ffe_post = KX4_POST; + } else if (KX_SET == 1) { + adapter->ffe_main = KX_MAIN; + adapter->ffe_pre = KX_PRE; + adapter->ffe_post = KX_POST; + } else if (SFI_SET == 1) { + adapter->ffe_main = SFI_MAIN; + adapter->ffe_pre = SFI_PRE; + adapter->ffe_post = SFI_POST; + } +out: + return 0; +} + +static int txgbe_kr_subtask(struct txgbe_adapter *adapter) +{ + Handle_bkp_an73_flow(0, adapter); + return 0; +} + +void txgbe_bp_watchdog_event(struct txgbe_adapter *adapter) +{ + u32 value = 0; + struct txgbe_hw *hw = &adapter->hw; + + if (KR_POLLING == 1) { + value = txgbe_rd32_epcs(hw, 0x78002); + value = value & 0x4; + if (value == 0x4) { + e_dev_info("Enter training\n"); + txgbe_kr_subtask(adapter); + } + } else { + if (adapter->flags2 & 
TXGBE_FLAG2_KR_TRAINING) { + e_dev_info("Enter training\n"); + txgbe_kr_subtask(adapter); + adapter->flags2 &= ~TXGBE_FLAG2_KR_TRAINING; + } + } +} + +void txgbe_bp_down_event(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + if (adapter->backplane_an == 1) { + if (KR_NORESET == 1) { + txgbe_wr32_epcs(hw, 0x78003, 0x0000); + txgbe_wr32_epcs(hw, 0x70000, 0x0000); + txgbe_wr32_epcs(hw, 0x78001, 0x0000); + msleep(1050); + txgbe_set_link_to_kr(hw, 1); + } else if (KR_REINITED == 1) { + txgbe_wr32_epcs(hw, 0x78003, 0x0000); + txgbe_wr32_epcs(hw, 0x70000, 0x0000); + txgbe_wr32_epcs(hw, 0x78001, 0x0000); + txgbe_wr32_epcs(hw, 0x18035, 0x00FF); + txgbe_wr32_epcs(hw, 0x18055, 0x00FF); + msleep(1050); + txgbe_wr32_epcs(hw, 0x78003, 0x0001); + txgbe_wr32_epcs(hw, 0x70000, 0x3200); + txgbe_wr32_epcs(hw, 0x78001, 0x0007); + txgbe_wr32_epcs(hw, 0x18035, 0x00FC); + txgbe_wr32_epcs(hw, 0x18055, 0x00FC); + } else { + msleep(1000); + if (!(adapter->flags2&TXGBE_FLAG2_KR_PRO_DOWN)) { + adapter->flags2 |= TXGBE_FLAG2_KR_PRO_REINIT; + txgbe_reinit_locked(adapter); + adapter->flags2 &= ~TXGBE_FLAG2_KR_PRO_REINIT; + } + } + } +} + +int txgbe_kr_intr_handle(struct txgbe_adapter *adapter) +{ + bkpan73ability tBkpAn73Ability, tLpBkpAn73Ability; + tBkpAn73Ability.currentLinkMode = 0; + + if (KR_MODE) { + e_dev_info("HandleBkpAn73Flow() \n"); + e_dev_info("---------------------------------\n"); + } + + /*1. Get the local AN73 Base Page Ability*/ + if (KR_MODE) + e_dev_info("<1>. Get the local AN73 Base Page Ability ...\n"); + GetBkpAn73Ability(&tBkpAn73Ability, 0, adapter); + + /*2. Check the AN73 Interrupt Status*/ + if (KR_MODE) + e_dev_info("<2>. Check the AN73 Interrupt Status ...\n"); + /*3.Clear the AN_PG_RCV interrupt*/ + ClearBkpAn73Interrupt(2, 0x0, adapter); + + /*3.1. Get the link partner AN73 Base Page Ability*/ + if (KR_MODE) + e_dev_info("<3.1>. 
Get the link partner AN73 Base Page Ability ...\n"); + Get_bkp_an73_ability(&tLpBkpAn73Ability, 1, adapter); + + /*3.2. Check the AN73 Link Ability with Link Partner*/ + if (KR_MODE) { + e_dev_info("<3.2>. Check the AN73 Link Ability with Link Partner ...\n"); + e_dev_info(" Local Link Ability: 0x%x\n", tBkpAn73Ability.linkAbility); + e_dev_info(" Link Partner Link Ability: 0x%x\n", tLpBkpAn73Ability.linkAbility); + } + Check_bkp_an73_ability(tBkpAn73Ability, tLpBkpAn73Ability, adapter); + + return 0; +} + +/*Check Ethernet Backplane AN73 Base Page Ability +**return value: +** -1 : none link mode matched, exit +** 0 : current link mode matched, wait AN73 to be completed +** 1 : current link mode not matched, set to matched link mode, re-start AN73 external +*/ +int Check_bkp_an73_ability(bkpan73ability tBkpAn73Ability, bkpan73ability tLpBkpAn73Ability, + struct txgbe_adapter *adapter) +{ + unsigned int comLinkAbility; + struct txgbe_hw *hw = &adapter->hw; + + if (KR_MODE) { + e_dev_info("CheckBkpAn73Ability():\n"); + e_dev_info("------------------------\n"); + } + + /*-- Check the common link ability and take action based on the result*/ + comLinkAbility = tBkpAn73Ability.linkAbility & tLpBkpAn73Ability.linkAbility; + if (KR_MODE) + e_dev_info("comLinkAbility= 0x%x, linkAbility= 0x%x, lpLinkAbility= 0x%x\n", + comLinkAbility, tBkpAn73Ability.linkAbility, tLpBkpAn73Ability.linkAbility); + + if (comLinkAbility == 0) { + if (KR_MODE) + e_dev_info("WARNING: The Link Partner does not support any compatible speed mode!!!\n\n"); + return -1; + } else if (comLinkAbility & 0x80) { + if (tBkpAn73Ability.currentLinkMode == 0) { + if (KR_MODE) + e_dev_info("Link mode is matched with Link Partner: [LINK_KR].\n"); + return 0; + } else { + if (KR_MODE) { + e_dev_info("Link mode is not matched with Link Partner: [LINK_KR].\n"); + e_dev_info("Set the local link mode to [LINK_KR] ...\n"); + } + txgbe_set_link_to_kr(hw, 1); + return 1; + } + } else if (comLinkAbility & 0x40) { + if 
(tBkpAn73Ability.currentLinkMode == 0x10) { + if (KR_MODE) + e_dev_info("Link mode is matched with Link Partner: [LINK_KX4].\n"); + return 0; + } else { + if (KR_MODE) { + e_dev_info("Link mode is not matched with Link Partner: [LINK_KX4].\n"); + e_dev_info("Set the local link mode to [LINK_KX4] ...\n"); + } + txgbe_set_link_to_kx4(hw, 1); + return 1; + } + } else if (comLinkAbility & 0x20) { + if (tBkpAn73Ability.currentLinkMode == 0x1) { + if (KR_MODE) + e_dev_info("Link mode is matched with Link Partner: [LINK_KX].\n"); + return 0; + } else { + if (KR_MODE) { + e_dev_info("Link mode is not matched with Link Partner: [LINK_KX].\n"); + e_dev_info("Set the local link mode to [LINK_KX] ...\n"); + } + txgbe_set_link_to_kx(hw, 1, 1); + return 1; + } + } + return 0; +} + + +/*Get Ethernet Backplane AN73 Base Page Ability +**byLinkPartner: +**- 1: Get Link Partner Base Page +**- 2: Get Link Partner Next Page (only get NXP Ability Register 1 at the moment) +**- 0: Get Local Device Base Page +*/ +int Get_bkp_an73_ability(bkpan73ability *ptBkpAn73Ability, unsigned char byLinkPartner, + struct txgbe_adapter *adapter) +{ + int status = 0; + unsigned int rdata; + struct txgbe_hw *hw = &adapter->hw; + + if (KR_MODE) { + e_dev_info("GetBkpAn73Ability(): byLinkPartner = %d\n", byLinkPartner); + e_dev_info("----------------------------------------\n"); + } + + if (byLinkPartner == 1) { /*Link Partner Base Page*/ + /*Read the link partner AN73 Base Page Ability Registers*/ + if (KR_MODE) + e_dev_info("Read the link partner AN73 Base Page Ability Registers...\n"); + rdata = 0; + rdata = txgbe_rd32_epcs(hw, 0x70013); + if (KR_MODE) + e_dev_info("SR AN MMD LP Base Page Ability Register 1: 0x%x\n", rdata); + ptBkpAn73Ability->nextPage = (rdata >> 15) & 0x01; + if (KR_MODE) + e_dev_info(" Next Page (bit15): %d\n", ptBkpAn73Ability->nextPage); + + rdata = 0; + rdata = txgbe_rd32_epcs(hw, 0x70014); + if (KR_MODE) + e_dev_info("SR AN MMD LP Base Page Ability Register 2: 0x%x\n", rdata); + 
ptBkpAn73Ability->linkAbility = rdata & 0xE0; + if (KR_MODE) { + e_dev_info(" Link Ability (bit[15:0]): 0x%x\n", ptBkpAn73Ability->linkAbility); + e_dev_info(" (0x20- KX_ONLY, 0x40- KX4_ONLY, 0x60- KX4_KX\n"); + e_dev_info(" 0x80- KR_ONLY, 0xA0- KR_KX, 0xC0- KR_KX4, 0xE0- KR_KX4_KX)\n"); + } + + rdata = 0; + rdata = txgbe_rd32_epcs(hw, 0x70015); + if (KR_MODE) { + e_dev_info("SR AN MMD LP Base Page Ability Register 3: 0x%x\n", rdata); + e_dev_info(" FEC Request (bit15): %d\n", ((rdata >> 15) & 0x01)); + e_dev_info(" FEC Enable (bit14): %d\n", ((rdata >> 14) & 0x01)); + } + ptBkpAn73Ability->fecAbility = (rdata >> 14) & 0x03; + } else if (byLinkPartner == 2) {/*Link Partner Next Page*/ + /*Read the link partner AN73 Next Page Ability Registers*/ + if (KR_MODE) + e_dev_info("\nRead the link partner AN73 Next Page Ability Registers...\n"); + rdata = 0; + rdata = txgbe_rd32_epcs(hw, 0x70019); + if (KR_MODE) + e_dev_info(" SR AN MMD LP XNP Ability Register 1: 0x%x\n", rdata); + ptBkpAn73Ability->nextPage = (rdata >> 15) & 0x01; + if (KR_MODE) + e_dev_info(" Next Page (bit15): %d\n", ptBkpAn73Ability->nextPage); + } else { + /*Read the local AN73 Base Page Ability Registers*/ + if (KR_MODE) + e_dev_info("\nRead the local AN73 Base Page Ability Registers...\n"); + rdata = 0; + rdata = txgbe_rd32_epcs(hw, 0x70010); + if (KR_MODE) + e_dev_info("SR AN MMD Advertisement Register 1: 0x%x\n", rdata); + ptBkpAn73Ability->nextPage = (rdata >> 15) & 0x01; + if (KR_MODE) + e_dev_info(" Next Page (bit15): %d\n", ptBkpAn73Ability->nextPage); + + rdata = 0; + rdata = txgbe_rd32_epcs(hw, 0x70011); + if (KR_MODE) + e_dev_info("SR AN MMD Advertisement Register 2: 0x%x\n", rdata); + ptBkpAn73Ability->linkAbility = rdata & 0xE0; + if (KR_MODE) { + e_dev_info(" Link Ability (bit[15:0]): 0x%x\n", ptBkpAn73Ability->linkAbility); + e_dev_info(" (0x20- KX_ONLY, 0x40- KX4_ONLY, 0x60- KX4_KX\n"); + e_dev_info(" 0x80- KR_ONLY, 0xA0- KR_KX, 0xC0- KR_KX4, 0xE0- KR_KX4_KX)\n"); + } + rdata = 0; + 
rdata = txgbe_rd32_epcs(hw, 0x70012); + if (KR_MODE) { + e_dev_info("SR AN MMD Advertisement Register 3: 0x%x\n", rdata); + e_dev_info(" FEC Request (bit15): %d\n", ((rdata >> 15) & 0x01)); + e_dev_info(" FEC Enable (bit14): %d\n", ((rdata >> 14) & 0x01)); + } + ptBkpAn73Ability->fecAbility = (rdata >> 14) & 0x03; + } /*if (byLinkPartner == 1) Link Partner Base Page*/ + + if (KR_MODE) + e_dev_info("GetBkpAn73Ability() done.\n"); + + return status; +} + + +/*Get Ethernet Backplane AN73 Base Page Ability +**byLinkPartner: +**- 1: Get Link Partner Base Page +**- 2: Get Link Partner Next Page (only get NXP Ability Register 1 at the moment) +**- 0: Get Local Device Base Page +*/ +int GetBkpAn73Ability(bkpan73ability *ptBkpAn73Ability, unsigned char byLinkPartner, + struct txgbe_adapter *adapter) +{ + int status = 0; + unsigned int rdata; + struct txgbe_hw *hw = &adapter->hw; + + if (KR_MODE) { + e_dev_info("GetBkpAn73Ability(): byLinkPartner = %d\n", byLinkPartner); + e_dev_info("----------------------------------------\n"); + } + + if (byLinkPartner == 1) { //Link Partner Base Page + //Read the link partner AN73 Base Page Ability Registers + if (KR_MODE) + e_dev_info("Read the link partner AN73 Base Page Ability Registers...\n"); + rdata = 0; + rdata = txgbe_rd32_epcs(hw, 0x70013); + if (KR_MODE) + e_dev_info("SR AN MMD LP Base Page Ability Register 1: 0x%x\n", rdata); + ptBkpAn73Ability->nextPage = (rdata >> 15) & 0x01; + if (KR_MODE) + e_dev_info(" Next Page (bit15): %d\n", ptBkpAn73Ability->nextPage); + + rdata = 0; + rdata = txgbe_rd32_epcs(hw, 0x70014); + if (KR_MODE) + e_dev_info("SR AN MMD LP Base Page Ability Register 2: 0x%x\n", rdata); + ptBkpAn73Ability->linkAbility = rdata & 0xE0; + if (KR_MODE) { + e_dev_info(" Link Ability (bit[15:0]): 0x%x\n", ptBkpAn73Ability->linkAbility); + e_dev_info(" (0x20- KX_ONLY, 0x40- KX4_ONLY, 0x60- KX4_KX\n"); + e_dev_info(" 0x80- KR_ONLY, 0xA0- KR_KX, 0xC0- KR_KX4, 0xE0- KR_KX4_KX)\n"); + } + + rdata = 0; + rdata = 
txgbe_rd32_epcs(hw, 0x70015); + if (KR_MODE) { + e_dev_info("SR AN MMD LP Base Page Ability Register 3: 0x%x\n", rdata); + e_dev_info(" FEC Request (bit15): %d\n", ((rdata >> 15) & 0x01)); + e_dev_info(" FEC Enable (bit14): %d\n", ((rdata >> 14) & 0x01)); + } + ptBkpAn73Ability->fecAbility = (rdata >> 14) & 0x03; + } else if (byLinkPartner == 2) { /* Link Partner Next Page */ + /* Read the link partner AN73 Next Page Ability Registers */ + if (KR_MODE) + e_dev_info("Read the link partner AN73 Next Page Ability Registers...\n"); + rdata = 0; + rdata = txgbe_rd32_epcs(hw, 0x70019); + if (KR_MODE) + e_dev_info(" SR AN MMD LP XNP Ability Register 1: 0x%x\n", rdata); + ptBkpAn73Ability->nextPage = (rdata >> 15) & 0x01; + if (KR_MODE) + e_dev_info(" Next Page (bit15): %d\n", ptBkpAn73Ability->nextPage); + } else { + /* Read the local AN73 Base Page Ability Registers */ + if (KR_MODE) + e_dev_info("Read the local AN73 Base Page Ability Registers...\n"); + rdata = 0; + rdata = txgbe_rd32_epcs(hw, 0x70010); + if (KR_MODE) + e_dev_info("SR AN MMD Advertisement Register 1: 0x%x\n", rdata); + ptBkpAn73Ability->nextPage = (rdata >> 15) & 0x01; + if (KR_MODE) + e_dev_info(" Next Page (bit15): %d\n", ptBkpAn73Ability->nextPage); + + rdata = 0; + rdata = txgbe_rd32_epcs(hw, 0x70011); + if (KR_MODE) + e_dev_info("SR AN MMD Advertisement Register 2: 0x%x\n", rdata); + ptBkpAn73Ability->linkAbility = rdata & 0xE0; + if (KR_MODE) { + e_dev_info(" Link Ability (bit[15:0]): 0x%x\n", ptBkpAn73Ability->linkAbility); + e_dev_info(" (0x20- KX_ONLY, 0x40- KX4_ONLY, 0x60- KX4_KX\n"); + e_dev_info(" 0x80- KR_ONLY, 0xA0- KR_KX, 0xC0- KR_KX4, 0xE0- KR_KX4_KX)\n"); + } + + rdata = 0; + rdata = txgbe_rd32_epcs(hw, 0x70012); + if (KR_MODE) { + e_dev_info("SR AN MMD Advertisement Register 3: 0x%x\n", rdata); + e_dev_info(" FEC Request (bit15): %d\n", ((rdata >> 15) & 0x01)); + e_dev_info(" FEC Enable (bit14): %d\n", ((rdata >> 14) & 0x01)); + } + ptBkpAn73Ability->fecAbility = (rdata >> 14) & 0x03; + } + + if (KR_MODE) +
e_dev_info("GetBkpAn73Ability() done.\n"); + + return status; +} + +/* DESCRIPTION: Set the source data fields[bitHigh:bitLow] with setValue +** INPUTS: *pSrcData: Source data pointer +** bitHigh: High bit position of the fields +** bitLow : Low bit position of the fields +** setValue: Set value of the fields +** OUTPUTS: *pSrcData is updated in place +*/ +static void SetFields( + unsigned int *pSrcData, + unsigned int bitHigh, + unsigned int bitLow, + unsigned int setValue) +{ + int i; + + if (bitHigh == bitLow) { + if (setValue == 0) { + *pSrcData &= ~(1U << bitLow); + } else { + *pSrcData |= (1U << bitLow); + } + } else { + for (i = bitLow; i <= bitHigh; i++) { + *pSrcData &= ~(1U << i); + } + /* mask setValue to the field width so stray high bits cannot corrupt neighbouring fields */ + *pSrcData |= ((setValue & ((1U << (bitHigh - bitLow + 1)) - 1)) << bitLow); + } +} + +/*Check Ethernet Backplane AN73 Interrupt status +**- return the value of the selected interrupt index +*/ +int CheckBkpAn73Interrupt(unsigned int intIndex, struct txgbe_adapter *adapter) +{ + unsigned int rdata; + struct txgbe_hw *hw = &adapter->hw; + + if (KR_MODE) { + e_dev_info("CheckBkpAn73Interrupt(): intIndex = %d\n", intIndex); + e_dev_info("----------------------------------------\n"); + } + + rdata = 0x0000; + rdata = txgbe_rd32_epcs(hw, 0x78002); + if (KR_MODE) { + e_dev_info("Read VR AN MMD Interrupt Register: 0x%x\n", rdata); + e_dev_info("Interrupt: 0- AN_INT_CMPLT, 1- AN_INC_LINK, 2- AN_PG_RCV\n\n"); + } + + return ((rdata >> intIndex) & 0x01); +} + +/*Clear Ethernet Backplane AN73 Interrupt status +**- intIndexHi =0, only intIndex bit will be cleared +**- intIndexHi !=0, the [intIndexHi, intIndex] range will be cleared +*/ +int ClearBkpAn73Interrupt(unsigned int intIndex, unsigned int intIndexHi, struct txgbe_adapter *adapter) +{ + int status = 0; + unsigned int rdata, wdata; + struct txgbe_hw *hw = &adapter->hw; + + if (KR_MODE) { + e_dev_info("ClearBkpAn73Interrupt(): intIndex = %d\n", intIndex); + e_dev_info("----------------------------------------\n"); + } + + rdata = 0x0000; + rdata = txgbe_rd32_epcs(hw, 0x78002);
+ if (KR_MODE) + e_dev_info("[Before clear] Read VR AN MMD Interrupt Register: 0x%x\n", rdata); + + wdata = rdata; + if (intIndexHi) { + SetFields(&wdata, intIndexHi, intIndex, 0); + } else { + SetFields(&wdata, intIndex, intIndex, 0); + } + txgbe_wr32_epcs(hw, 0x78002, wdata); + + rdata = 0x0000; + rdata = txgbe_rd32_epcs(hw, 0x78002); + if (KR_MODE) { + e_dev_info("[After clear] Read VR AN MMD Interrupt Register: 0x%x\n", rdata); + e_dev_info("\n"); + } + + return status; +} + +int WaitBkpAn73XnpDone(struct txgbe_adapter *adapter) +{ + int status = 0; + unsigned int timer = 0; + bkpan73ability tLpBkpAn73Ability; + + /*while(timer++ < BKPAN73_TIMEOUT)*/ + while (timer++ < 20) { + if (CheckBkpAn73Interrupt(2, adapter)) { + /*Clear the AN_PG_RCV interrupt*/ + ClearBkpAn73Interrupt(2, 0, adapter); + + /*Get the link partner AN73 Next Page Ability*/ + Get_bkp_an73_ability(&tLpBkpAn73Ability, 2, adapter); + + /*Return when AN_LP_XNP_NP == 0, (bit[15]: Next Page)*/ + if (tLpBkpAn73Ability.nextPage == 0) { + return status; + } + } + msleep(200); + } /*while(timer++ < BKPAN73_TIMEOUT)*/ + if (KR_MODE) + e_dev_info("ERROR: Timed out waiting for all AN73 next pages to be exchanged!!!\n"); + + return -1; +} + +int ReadPhyLaneTxEq(unsigned short lane, struct txgbe_adapter *adapter, int post_t, int mode) +{ + int status = 0; + unsigned int addr, rdata; + struct txgbe_hw *hw = &adapter->hw; + u32 pre; + u32 post; + u32 lmain; + + /*LANEN_DIG_ASIC_TX_ASIC_IN_1[11:6]: TX_MAIN_CURSOR*/ + rdata = 0; + addr = 0x100E | (lane << 8); + rdata = rd32_ephy(hw, addr); + if (KR_MODE) { + e_dev_info("PHY LANE%0d TX EQ Read Value:\n", lane); + e_dev_info(" TX_MAIN_CURSOR: %d\n", ((rdata >> 6) & 0x3F)); + } + + /*LANEN_DIG_ASIC_TX_ASIC_IN_2[5 :0]: TX_PRE_CURSOR*/ + /*LANEN_DIG_ASIC_TX_ASIC_IN_2[11:6]: TX_POST_CURSOR*/ + rdata = 0; + addr = 0x100F | (lane << 8); + rdata = rd32_ephy(hw, addr); + if (KR_MODE) { + e_dev_info(" TX_PRE_CURSOR : %d\n", (rdata & 0x3F)); + e_dev_info(" TX_POST_CURSOR: 
%d\n", ((rdata >> 6) & 0x3F)); + e_dev_info("**********************************************\n"); + } + + if (mode == 1) { + pre = (rdata & 0x3F); + post = ((rdata >> 6) & 0x3F); + if ((160 - pre - post) < 88) + lmain = 88; + else + lmain = 160 - pre - post; + if (post_t != 0) + post = post_t; + txgbe_wr32_epcs(hw, 0x1803b, post); + txgbe_wr32_epcs(hw, 0x1803a, pre | (lmain << 8)); + txgbe_wr32_epcs(hw, 0x18037, txgbe_rd32_epcs(hw, 0x18037) & 0xff7f); + } + if (KR_MODE) + e_dev_info("**********************************************\n"); + + return status; +} + + +/*Enable Clause 72 KR training +** +**Note: +**<1>. The Clause 72 start-up protocol should be initiated when all pages are exchanged during Clause 73 auto- +**negotiation and when the auto-negotiation process is waiting for link status to be UP for 500 ms after +**exchanging all the pages. +** +**<2>. The local device and link partner should enable CL72 KR training +**within 500 ms +** +**enable: +**- bits[1:0] =2'b11: Enable the CL72 KR training +**- bits[1:0] =2'b01: Disable the CL72 KR training +*/ +int EnableCl72KrTr(unsigned int enable, struct txgbe_adapter *adapter) +{ + int status = 0; + unsigned int wdata = 0; + struct txgbe_hw *hw = &adapter->hw; + + if (enable == 1) { + if (KR_MODE) + e_dev_info("\nDisable Clause 72 KR Training ...\n"); + status |= ReadPhyLaneTxEq(0, adapter, 0, 0); + } else if (enable == 4) { + status |= ReadPhyLaneTxEq(0, adapter, 20, 1); + } else if (enable == 8) { + status |= ReadPhyLaneTxEq(0, adapter, 16, 1); + } else if (enable == 12) { + status |= ReadPhyLaneTxEq(0, adapter, 24, 1); + } else if (enable == 5) { + status |= ReadPhyLaneTxEq(0, adapter, 0, 1); + } else if (enable == 3) { + if (KR_MODE) + e_dev_info("\nEnable Clause 72 KR Training ...\n"); + + if (CL72_KRTR_PRBS_MODE_EN != 0xffff) { + /*Set PRBS Timer Duration Control to maximum 6.7ms in VR_PMA_KRTR_PRBS_CTRL1 Register*/ + wdata = CL72_KRTR_PRBS_MODE_EN; + txgbe_wr32_epcs(hw, 0x18005, wdata); + /*Set PRBS 
Timer Duration Control to maximum 6.7ms in VR_PMA_KRTR_PRBS_CTRL1 Register*/ + wdata = 0xFFFF; + txgbe_wr32_epcs(hw, 0x18004, wdata); + + /*Enable PRBS Mode to determine KR Training Status by setting Bit 0 of VR_PMA_KRTR_PRBS_CTRL0 Register*/ + wdata = 0; + SetFields(&wdata, 0, 0, 1); + } + +#ifdef CL72_KRTR_PRBS31_EN + /*Enable PRBS31 as the KR Training Pattern by setting Bit 1 of VR_PMA_KRTR_PRBS_CTRL0 Register*/ + SetFields(&wdata, 1, 1, 1); +#endif /*#ifdef CL72_KRTR_PRBS31_EN*/ + txgbe_wr32_epcs(hw, 0x18003, wdata); + status |= ReadPhyLaneTxEq(0, adapter, 0, 0); + } else { + if (KR_MODE) + e_dev_info("\nInvalid setting for Clause 72 KR Training!!!\n"); + return -1; + } + + /*Enable the Clause 72 start-up protocol by setting Bit 1 of SR_PMA_KR_PMD_CTRL Register. + **Restart the Clause 72 start-up protocol by setting Bit 0 of SR_PMA_KR_PMD_CTRL Register*/ + wdata = enable; + txgbe_wr32_epcs(hw, 0x10096, wdata); + return status; +} + +int CheckCl72KrTrStatus(struct txgbe_adapter *adapter) +{ + int status = 0; + unsigned int addr, rdata, rdata1; + unsigned int timer = 0, times = 0; + struct txgbe_hw *hw = &adapter->hw; + + times = KR_POLLING ? 
35 : 20; + + /*While loop to check clause 72 KR training status*/ + while (timer++ < times) { + //Get the latest received coefficient update or status + rdata = 0; + addr = 0x010098; + rdata = txgbe_rd32_epcs(hw, addr); + if (KR_MODE) + e_dev_info("SR PMA MMD 10GBASE-KR LP Coefficient Update Register: 0x%x\n", rdata); + + rdata = 0; + addr = 0x010099; + rdata = txgbe_rd32_epcs(hw, addr); + if (KR_MODE) + e_dev_info("SR PMA MMD 10GBASE-KR LP Coefficient Status Register: 0x%x\n", rdata); + + rdata = 0; + addr = 0x01009a; + rdata = txgbe_rd32_epcs(hw, addr); + if (KR_MODE) + e_dev_info("SR PMA MMD 10GBASE-KR LD Coefficient Update: 0x%x\n", rdata); + + rdata = 0; + addr = 0x01009b; + rdata = txgbe_rd32_epcs(hw, addr); + if (KR_MODE) + e_dev_info(" SR PMA MMD 10GBASE-KR LD Coefficient Status: 0x%x\n", rdata); + + rdata = 0; + addr = 0x010097; + rdata = txgbe_rd32_epcs(hw, addr); + if (KR_MODE) { + e_dev_info("SR PMA MMD 10GBASE-KR Status Register: 0x%x\n", rdata); + e_dev_info(" Training Failure (bit3): %d\n", ((rdata >> 3) & 0x01)); + e_dev_info(" Start-Up Protocol Status (bit2): %d\n", ((rdata >> 2) & 0x01)); + e_dev_info(" Frame Lock (bit1): %d\n", ((rdata >> 1) & 0x01)); + e_dev_info(" Receiver Status (bit0): %d\n", ((rdata >> 0) & 0x01)); + } + + rdata1 = txgbe_rd32_epcs(hw, 0x10099) & 0x8000; + if (rdata1 == 0x8000) { + adapter->flags2 |= KR; + if (KR_MODE) + e_dev_info("TEST Coefficient Status Register: 0x%x\n", rdata); + } + /*If bit3 is set, Training is completed with failure*/ + if ((rdata >> 3) & 0x01) { + if (KR_MODE) + e_dev_info("Training is completed with failure!!!\n"); + status |= ReadPhyLaneTxEq(0, adapter, 0, 0); + return status; + } + + /*If bit0 is set, Receiver trained and ready to receive data*/ + if ((rdata >> 0) & 0x01) { + if (KR_MODE) + e_dev_info("Receiver trained and ready to receive data ^_^\n"); + status |= ReadPhyLaneTxEq(0, adapter, 0, 0); + return status; + } + + msleep(20); + } + + if (KR_MODE) + e_dev_info("ERROR: Check Clause 72 KR 
Training Complete Timeout!!!\n"); + + return status; +} + +int Handle_bkp_an73_flow(unsigned char byLinkMode, struct txgbe_adapter *adapter) +{ + int status = 0; + unsigned int timer = 0; + unsigned int addr, data; + bkpan73ability tBkpAn73Ability, tLpBkpAn73Ability; + u32 i = 0; + u32 rdata = 0; + u32 rdata1 = 0; + struct txgbe_hw *hw = &adapter->hw; + tBkpAn73Ability.currentLinkMode = byLinkMode; + + if (KR_MODE) { + e_dev_info("HandleBkpAn73Flow() \n"); + e_dev_info("---------------------------------\n"); + } + + txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x0); + txgbe_wr32_epcs(hw, 0x78003, 0x0); + + /*Check the FEC and KR Training for KR mode*/ + if (1) { + //FEC handling + if (KR_MODE) + e_dev_info("<3.3>. Check the FEC for KR mode ...\n"); + tBkpAn73Ability.fecAbility = 0x03; + tLpBkpAn73Ability.fecAbility = 0x0; + if ((tBkpAn73Ability.fecAbility & tLpBkpAn73Ability.fecAbility) == 0x03) { + if (KR_MODE) + e_dev_info("Enable the Backplane KR FEC ...\n"); + //Write 1 to SR_PMA_KR_FEC_CTRL bit0 to enable the FEC + data = 1; + addr = 0x100ab; //SR_PMA_KR_FEC_CTRL + txgbe_wr32_epcs(hw, addr, data); + } else { + if (KR_MODE) + e_dev_info("Backplane KR FEC is disabled.\n"); + } +#ifdef CL72_KR_TRAINING_ON + for (i = 0; i < 2; i++) { + if (KR_MODE) { + e_dev_info("\n<3.4>. 
Check the CL72 KR Training for KR mode ...\n"); + printk("===================%d=======================\n", i); + } + + status |= EnableCl72KrTr(3, adapter); + + if (KR_MODE) + e_dev_info("\nCheck the Clause 72 KR Training status ...\n"); + status |= CheckCl72KrTrStatus(adapter); + + rdata = txgbe_rd32_epcs(hw, 0x10099) & 0x8000; + if (KR_MODE) + e_dev_info("SR PMA MMD 10GBASE-KR LP Coefficient Status Register: 0x%x\n", rdata); + rdata1 = txgbe_rd32_epcs(hw, 0x1009b) & 0x8000; + if (KR_MODE) + e_dev_info("SR PMA MMD 10GBASE-KR LD Coefficient Status Register: 0x%x\n", rdata1); + if (KR_POLLING == 0) { + if (adapter->flags2 & KR) { + rdata = 0x8000; + adapter->flags2 &= ~KR; + } + } + if ((rdata == 0x8000) && (rdata1 == 0x8000)) { + if (KR_MODE) + e_dev_info("====================out===========================\n"); + status |= EnableCl72KrTr(1, adapter); + txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x0000); + ClearBkpAn73Interrupt(2, 0, adapter); + ClearBkpAn73Interrupt(1, 0, adapter); + ClearBkpAn73Interrupt(0, 0, adapter); + while (timer++ < 10) { + rdata = txgbe_rd32_epcs(hw, 0x30020); + rdata = rdata & 0x1000; + if (rdata == 0x1000) { + if (KR_MODE) + e_dev_info("\nINT_AN_INT_CMPLT =1, AN73 Done Success.\n"); + e_dev_info("AN73 Done Success.\n"); + txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x0000); + return 0; + } + msleep(10); + } + msleep(1000); + txgbe_set_link_to_kr(hw, 1); + + return 0; + } + + status |= EnableCl72KrTr(1, adapter); + } +#endif + } + ClearBkpAn73Interrupt(0, 0, adapter); + ClearBkpAn73Interrupt(1, 0, adapter); + ClearBkpAn73Interrupt(2, 0, adapter); + + return status; +} diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe_bp.h b/drivers/net/ethernet/netswift/txgbe/txgbe_bp.h new file mode 100644 index 000000000000..c5f0dc507216 --- /dev/null +++ b/drivers/net/ethernet/netswift/txgbe/txgbe_bp.h @@ -0,0 +1,41 @@ +/* + * WangXun 10 Gigabit PCI Express Linux driver + * Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd. 
+ * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * The full GNU General Public License is included in this distribution in + * the file called "COPYING". + */ + + +#ifndef _TXGBE_BP_H_ +#define _TXGBE_BP_H_ + +#include "txgbe.h" +#include "txgbe_hw.h" + +#define CL72_KR_TRAINING_ON + +/* Backplane AN73 Base Page Ability struct*/ +typedef struct TBKPAN73ABILITY { + unsigned int nextPage; //Next Page (bit0) + unsigned int linkAbility; //Link Ability (bit[7:0]) + unsigned int fecAbility; //FEC Request (bit1), FEC Enable (bit0) + unsigned int currentLinkMode; //current link mode for local device +} bkpan73ability; + +int txgbe_kr_intr_handle(struct txgbe_adapter *adapter); +void txgbe_bp_down_event(struct txgbe_adapter *adapter); +void txgbe_bp_watchdog_event(struct txgbe_adapter *adapter); +int txgbe_bp_mode_setting(struct txgbe_adapter *adapter); +void txgbe_bp_close_protect(struct txgbe_adapter *adapter); + +#endif diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe_dcb.h b/drivers/net/ethernet/netswift/txgbe/txgbe_dcb.h new file mode 100644 index 000000000000..495460e1db8c --- /dev/null +++ b/drivers/net/ethernet/netswift/txgbe/txgbe_dcb.h @@ -0,0 +1,30 @@ +/* + * WangXun 10 Gigabit PCI Express Linux driver + * Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. 
+ * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * The full GNU General Public License is included in this distribution in + * the file called "COPYING". + * + * based on ixgbe_dcb.h, Copyright(c) 1999 - 2017 Intel Corporation. + * Contact Information: + * Linux NICS linux.nics@intel.com + * e1000-devel Mailing List e1000-devel@lists.sourceforge.net + * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497 + */ + + +#ifndef _TXGBE_DCB_H_ +#define _TXGBE_DCB_H_ + +#include "txgbe_type.h" + +#endif /* _TXGBE_DCB_H_ */ diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe_ethtool.c b/drivers/net/ethernet/netswift/txgbe/txgbe_ethtool.c new file mode 100644 index 000000000000..5cb8ef61e04b --- /dev/null +++ b/drivers/net/ethernet/netswift/txgbe/txgbe_ethtool.c @@ -0,0 +1,3381 @@ +/* + * WangXun 10 Gigabit PCI Express Linux driver + * Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * The full GNU General Public License is included in this distribution in + * the file called "COPYING". + * + * based on ixgbe_ethtool.c, Copyright(c) 1999 - 2017 Intel Corporation. + * Contact Information: + * Linux NICS linux.nics@intel.com + * e1000-devel Mailing List e1000-devel@lists.sourceforge.net + * Intel Corporation, 5200 N.E. 
Elam Young Parkway, Hillsboro, OR 97124-6497 + */ + +/* ethtool support for txgbe */ + +#include <linux/types.h> +#include <linux/module.h> +#include <linux/pci.h> +#include <linux/netdevice.h> +#include <linux/ethtool.h> +#include <linux/vmalloc.h> +#include <linux/highmem.h> +#include <linux/firmware.h> +#include <linux/net_tstamp.h> +#include <asm/uaccess.h> + +#include "txgbe.h" +#include "txgbe_hw.h" +#include "txgbe_phy.h" + +#define TXGBE_ALL_RAR_ENTRIES 16 + +struct txgbe_stats { + char stat_string[ETH_GSTRING_LEN]; + int sizeof_stat; + int stat_offset; +}; + +#define TXGBE_NETDEV_STAT(_net_stat) { \ + .stat_string = #_net_stat, \ + .sizeof_stat = FIELD_SIZEOF(struct net_device_stats, _net_stat), \ + .stat_offset = offsetof(struct net_device_stats, _net_stat) \ +} +static const struct txgbe_stats txgbe_gstrings_net_stats[] = { + TXGBE_NETDEV_STAT(rx_packets), + TXGBE_NETDEV_STAT(tx_packets), + TXGBE_NETDEV_STAT(rx_bytes), + TXGBE_NETDEV_STAT(tx_bytes), + TXGBE_NETDEV_STAT(rx_errors), + TXGBE_NETDEV_STAT(tx_errors), + TXGBE_NETDEV_STAT(rx_dropped), + TXGBE_NETDEV_STAT(tx_dropped), + TXGBE_NETDEV_STAT(multicast), + TXGBE_NETDEV_STAT(collisions), + TXGBE_NETDEV_STAT(rx_over_errors), + TXGBE_NETDEV_STAT(rx_crc_errors), + TXGBE_NETDEV_STAT(rx_frame_errors), + TXGBE_NETDEV_STAT(rx_fifo_errors), + TXGBE_NETDEV_STAT(rx_missed_errors), + TXGBE_NETDEV_STAT(tx_aborted_errors), + TXGBE_NETDEV_STAT(tx_carrier_errors), + TXGBE_NETDEV_STAT(tx_fifo_errors), + TXGBE_NETDEV_STAT(tx_heartbeat_errors), +}; + +#define TXGBE_STAT(_name, _stat) { \ + .stat_string = _name, \ + .sizeof_stat = FIELD_SIZEOF(struct txgbe_adapter, _stat), \ + .stat_offset = offsetof(struct txgbe_adapter, _stat) \ +} +static struct txgbe_stats txgbe_gstrings_stats[] = { + TXGBE_STAT("rx_pkts_nic", stats.gprc), + TXGBE_STAT("tx_pkts_nic", stats.gptc), + TXGBE_STAT("rx_bytes_nic", stats.gorc), + TXGBE_STAT("tx_bytes_nic", stats.gotc), + TXGBE_STAT("lsc_int", lsc_int), + TXGBE_STAT("tx_busy", tx_busy), + 
TXGBE_STAT("non_eop_descs", non_eop_descs), + TXGBE_STAT("broadcast", stats.bprc), + TXGBE_STAT("rx_no_buffer_count", stats.rnbc[0]), + TXGBE_STAT("tx_timeout_count", tx_timeout_count), + TXGBE_STAT("tx_restart_queue", restart_queue), + TXGBE_STAT("rx_long_length_count", stats.roc), + TXGBE_STAT("rx_short_length_count", stats.ruc), + TXGBE_STAT("tx_flow_control_xon", stats.lxontxc), + TXGBE_STAT("rx_flow_control_xon", stats.lxonrxc), + TXGBE_STAT("tx_flow_control_xoff", stats.lxofftxc), + TXGBE_STAT("rx_flow_control_xoff", stats.lxoffrxc), + TXGBE_STAT("rx_csum_offload_good_count", hw_csum_rx_good), + TXGBE_STAT("rx_csum_offload_errors", hw_csum_rx_error), + TXGBE_STAT("alloc_rx_page_failed", alloc_rx_page_failed), + TXGBE_STAT("alloc_rx_buff_failed", alloc_rx_buff_failed), + TXGBE_STAT("rx_no_dma_resources", hw_rx_no_dma_resources), + TXGBE_STAT("hw_rsc_aggregated", rsc_total_count), + TXGBE_STAT("hw_rsc_flushed", rsc_total_flush), + TXGBE_STAT("fdir_match", stats.fdirmatch), + TXGBE_STAT("fdir_miss", stats.fdirmiss), + TXGBE_STAT("fdir_overflow", fdir_overflow), + TXGBE_STAT("os2bmc_rx_by_bmc", stats.o2bgptc), + TXGBE_STAT("os2bmc_tx_by_bmc", stats.b2ospc), + TXGBE_STAT("os2bmc_tx_by_host", stats.o2bspc), + TXGBE_STAT("os2bmc_rx_by_host", stats.b2ogprc), + TXGBE_STAT("tx_hwtstamp_timeouts", tx_hwtstamp_timeouts), + TXGBE_STAT("rx_hwtstamp_cleared", rx_hwtstamp_cleared), +}; + +/* txgbe allocates num_tx_queues and num_rx_queues symmetrically so + * we set the num_rx_queues to evaluate to num_tx_queues. This is + * used because we do not have a good way to get the max number of + * rx queues with CONFIG_RPS disabled. 
+ */ +#define TXGBE_NUM_RX_QUEUES netdev->num_tx_queues +#define TXGBE_NUM_TX_QUEUES netdev->num_tx_queues + +#define TXGBE_QUEUE_STATS_LEN ( \ + (TXGBE_NUM_TX_QUEUES + TXGBE_NUM_RX_QUEUES) * \ + (sizeof(struct txgbe_queue_stats) / sizeof(u64))) +#define TXGBE_GLOBAL_STATS_LEN ARRAY_SIZE(txgbe_gstrings_stats) +#define TXGBE_NETDEV_STATS_LEN ARRAY_SIZE(txgbe_gstrings_net_stats) +#define TXGBE_PB_STATS_LEN ( \ + (sizeof(((struct txgbe_adapter *)0)->stats.pxonrxc) + \ + sizeof(((struct txgbe_adapter *)0)->stats.pxontxc) + \ + sizeof(((struct txgbe_adapter *)0)->stats.pxoffrxc) + \ + sizeof(((struct txgbe_adapter *)0)->stats.pxofftxc)) \ + / sizeof(u64)) +#define TXGBE_VF_STATS_LEN \ + ((((struct txgbe_adapter *)netdev_priv(netdev))->num_vfs) * \ + (sizeof(struct vf_stats) / sizeof(u64))) +#define TXGBE_STATS_LEN (TXGBE_GLOBAL_STATS_LEN + \ + TXGBE_NETDEV_STATS_LEN + \ + TXGBE_PB_STATS_LEN + \ + TXGBE_QUEUE_STATS_LEN + \ + TXGBE_VF_STATS_LEN) + +static const char txgbe_gstrings_test[][ETH_GSTRING_LEN] = { + "Register test (offline)", "Eeprom test (offline)", + "Interrupt test (offline)", "Loopback test (offline)", + "Link test (on/offline)" +}; +#define TXGBE_TEST_LEN (sizeof(txgbe_gstrings_test) / ETH_GSTRING_LEN) + +/* currently supported speeds for 10G */ +#define ADVERTISED_MASK_10G (SUPPORTED_10000baseT_Full | \ + SUPPORTED_10000baseKX4_Full | \ + SUPPORTED_10000baseKR_Full) + +#define txgbe_isbackplane(type) \ + ((type == txgbe_media_type_backplane) ? 
true : false) + +static __u32 txgbe_backplane_type(struct txgbe_hw *hw) +{ + __u32 mode = 0x00; + switch (hw->phy.link_mode) { + case TXGBE_PHYSICAL_LAYER_10GBASE_KX4: + mode = SUPPORTED_10000baseKX4_Full; + break; + case TXGBE_PHYSICAL_LAYER_10GBASE_KR: + mode = SUPPORTED_10000baseKR_Full; + break; + case TXGBE_PHYSICAL_LAYER_1000BASE_KX: + mode = SUPPORTED_1000baseKX_Full; + break; + default: + mode = (SUPPORTED_10000baseKX4_Full | + SUPPORTED_10000baseKR_Full | + SUPPORTED_1000baseKX_Full); + break; + } + return mode; +} + +int txgbe_get_link_ksettings(struct net_device *netdev, + struct ethtool_link_ksettings *cmd) +{ + struct txgbe_adapter *adapter = netdev_priv(netdev); + struct txgbe_hw *hw = &adapter->hw; + u32 supported_link; + u32 link_speed = 0; + bool autoneg = false; + u32 supported, advertising; + bool link_up; + + ethtool_convert_link_mode_to_legacy_u32(&supported, + cmd->link_modes.supported); + + TCALL(hw, mac.ops.get_link_capabilities, &supported_link, &autoneg); + + if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_KR_KX_KX4) + autoneg = adapter->backplane_an ? 1:0; + else if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_MAC_SGMII) + autoneg = adapter->an37?1:0; + + /* set the supported link speeds */ + if (supported_link & TXGBE_LINK_SPEED_10GB_FULL) + supported |= (txgbe_isbackplane(hw->phy.media_type)) ? + txgbe_backplane_type(hw) : SUPPORTED_10000baseT_Full; + if (supported_link & TXGBE_LINK_SPEED_1GB_FULL) + supported |= (txgbe_isbackplane(hw->phy.media_type)) ? 
+ SUPPORTED_1000baseKX_Full : SUPPORTED_1000baseT_Full; + if (supported_link & TXGBE_LINK_SPEED_100_FULL) + supported |= SUPPORTED_100baseT_Full; + if (supported_link & TXGBE_LINK_SPEED_10_FULL) + supported |= SUPPORTED_10baseT_Full; + + /* default advertised speed if phy.autoneg_advertised isn't set */ + advertising = supported; + + /* set the advertised speeds */ + if (hw->phy.autoneg_advertised) { + if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_100_FULL) + advertising |= ADVERTISED_100baseT_Full; + if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_10GB_FULL) + advertising |= (supported & ADVERTISED_MASK_10G); + if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_1GB_FULL) { + if (supported & SUPPORTED_1000baseKX_Full) + advertising |= ADVERTISED_1000baseKX_Full; + else + advertising |= ADVERTISED_1000baseT_Full; + } + if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_10_FULL) + advertising |= ADVERTISED_10baseT_Full; + } else { + /* default modes in case phy.autoneg_advertised isn't set */ + if (supported_link & TXGBE_LINK_SPEED_10GB_FULL) + advertising |= ADVERTISED_10000baseT_Full; + if (supported_link & TXGBE_LINK_SPEED_1GB_FULL) + advertising |= ADVERTISED_1000baseT_Full; + if (supported_link & TXGBE_LINK_SPEED_100_FULL) + advertising |= ADVERTISED_100baseT_Full; + if (hw->phy.multispeed_fiber && !autoneg) { + if (supported_link & TXGBE_LINK_SPEED_10GB_FULL) + advertising = ADVERTISED_10000baseT_Full; + } + if (supported_link & TXGBE_LINK_SPEED_10_FULL) + advertising |= ADVERTISED_10baseT_Full; + } + + if (autoneg) { + supported |= SUPPORTED_Autoneg; + advertising |= ADVERTISED_Autoneg; + cmd->base.autoneg = AUTONEG_ENABLE; + } else + cmd->base.autoneg = AUTONEG_DISABLE; + + /* Determine the remaining settings based on the PHY type. 
*/ + switch (adapter->hw.phy.type) { + case txgbe_phy_tn: + case txgbe_phy_aq: + case txgbe_phy_cu_unknown: + supported |= SUPPORTED_TP; + advertising |= ADVERTISED_TP; + cmd->base.port = PORT_TP; + break; + case txgbe_phy_qt: + supported |= SUPPORTED_FIBRE; + advertising |= ADVERTISED_FIBRE; + cmd->base.port = PORT_FIBRE; + break; + case txgbe_phy_nl: + case txgbe_phy_sfp_passive_tyco: + case txgbe_phy_sfp_passive_unknown: + case txgbe_phy_sfp_ftl: + case txgbe_phy_sfp_avago: + case txgbe_phy_sfp_intel: + case txgbe_phy_sfp_unknown: + switch (adapter->hw.phy.sfp_type) { + /* SFP+ devices, further checking needed */ + case txgbe_sfp_type_da_cu: + case txgbe_sfp_type_da_cu_core0: + case txgbe_sfp_type_da_cu_core1: + supported |= SUPPORTED_FIBRE; + advertising |= ADVERTISED_FIBRE; + cmd->base.port = PORT_DA; + break; + case txgbe_sfp_type_sr: + case txgbe_sfp_type_lr: + case txgbe_sfp_type_srlr_core0: + case txgbe_sfp_type_srlr_core1: + case txgbe_sfp_type_1g_sx_core0: + case txgbe_sfp_type_1g_sx_core1: + case txgbe_sfp_type_1g_lx_core0: + case txgbe_sfp_type_1g_lx_core1: + supported |= SUPPORTED_FIBRE; + advertising |= ADVERTISED_FIBRE; + cmd->base.port = PORT_FIBRE; + break; + case txgbe_sfp_type_not_present: + supported |= SUPPORTED_FIBRE; + advertising |= ADVERTISED_FIBRE; + cmd->base.port = PORT_NONE; + break; + case txgbe_sfp_type_1g_cu_core0: + case txgbe_sfp_type_1g_cu_core1: + supported |= SUPPORTED_TP; + advertising |= ADVERTISED_TP; + cmd->base.port = PORT_TP; + break; + case txgbe_sfp_type_unknown: + default: + supported |= SUPPORTED_FIBRE; + advertising |= ADVERTISED_FIBRE; + cmd->base.port = PORT_OTHER; + break; + } + break; + case txgbe_phy_xaui: + supported |= SUPPORTED_TP; + advertising |= ADVERTISED_TP; + cmd->base.port = PORT_TP; + break; + case txgbe_phy_unknown: + case txgbe_phy_generic: + case txgbe_phy_sfp_unsupported: + default: + supported |= SUPPORTED_FIBRE; + advertising |= ADVERTISED_FIBRE; + cmd->base.port = PORT_OTHER; + break; + } + + 
if (!in_interrupt()) { + TCALL(hw, mac.ops.check_link, &link_speed, &link_up, false); + } else { + /* + * this case is a special workaround for RHEL5 bonding + * that calls this routine from interrupt context + */ + link_speed = adapter->link_speed; + link_up = adapter->link_up; + } + + supported |= SUPPORTED_Pause; + + switch (hw->fc.requested_mode) { + case txgbe_fc_full: + advertising |= ADVERTISED_Pause; + break; + case txgbe_fc_rx_pause: + advertising |= ADVERTISED_Pause | + ADVERTISED_Asym_Pause; + break; + case txgbe_fc_tx_pause: + advertising |= ADVERTISED_Asym_Pause; + break; + default: + advertising &= ~(ADVERTISED_Pause | + ADVERTISED_Asym_Pause); + } + + if (link_up) { + switch (link_speed) { + case TXGBE_LINK_SPEED_10GB_FULL: + cmd->base.speed = SPEED_10000; + break; + case TXGBE_LINK_SPEED_1GB_FULL: + cmd->base.speed = SPEED_1000; + break; + case TXGBE_LINK_SPEED_100_FULL: + cmd->base.speed = SPEED_100; + break; + case TXGBE_LINK_SPEED_10_FULL: + cmd->base.speed = SPEED_10; + break; + default: + break; + } + cmd->base.duplex = DUPLEX_FULL; + } else { + cmd->base.speed = -1; + cmd->base.duplex = -1; + } + + ethtool_convert_legacy_u32_to_link_mode(cmd->link_modes.supported, + supported); + ethtool_convert_legacy_u32_to_link_mode(cmd->link_modes.advertising, + advertising); + return 0; +} + +static int txgbe_set_link_ksettings(struct net_device *netdev, + const struct ethtool_link_ksettings *cmd) +{ + struct txgbe_adapter *adapter = netdev_priv(netdev); + struct txgbe_hw *hw = &adapter->hw; + u32 advertised, old; + s32 err = 0; + u32 supported, advertising; + ethtool_convert_link_mode_to_legacy_u32(&supported, + cmd->link_modes.supported); + ethtool_convert_link_mode_to_legacy_u32(&advertising, + cmd->link_modes.advertising); + + if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_KR_KX_KX4) { + adapter->backplane_an = cmd->base.autoneg ? 1 : 0; + } else if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_MAC_SGMII) { + adapter->an37 = cmd->base.autoneg ? 
1 : 0; + } + + if ((hw->phy.media_type == txgbe_media_type_copper) || + (hw->phy.multispeed_fiber)) { + /* + * this function does not support duplex forcing, but can + * limit the advertising of the adapter to the specified speed + */ + if (advertising & ~supported) + return -EINVAL; + + /* only allow one speed at a time if no autoneg */ + if (!cmd->base.autoneg && hw->phy.multispeed_fiber) { + if (advertising == + (ADVERTISED_10000baseT_Full | + ADVERTISED_1000baseT_Full)) + return -EINVAL; + } + old = hw->phy.autoneg_advertised; + advertised = 0; + if (advertising & ADVERTISED_10000baseT_Full) + advertised |= TXGBE_LINK_SPEED_10GB_FULL; + + if (advertising & ADVERTISED_1000baseT_Full) + advertised |= TXGBE_LINK_SPEED_1GB_FULL; + + if (advertising & ADVERTISED_100baseT_Full) + advertised |= TXGBE_LINK_SPEED_100_FULL; + + if (advertising & ADVERTISED_10baseT_Full) + advertised |= TXGBE_LINK_SPEED_10_FULL; + + if (old == advertised) + return err; + /* this sets the link speed and restarts auto-neg */ + while (test_and_set_bit(__TXGBE_IN_SFP_INIT, &adapter->state)) + usleep_range(1000, 2000); + + hw->mac.autotry_restart = true; + err = TCALL(hw, mac.ops.setup_link, advertised, true); + if (err) { + e_info(probe, "setup link failed with code %d\n", err); + TCALL(hw, mac.ops.setup_link, old, true); + } + if ((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP) + TCALL(hw, mac.ops.flap_tx_laser); + clear_bit(__TXGBE_IN_SFP_INIT, &adapter->state); + } else if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_KR_KX_KX4 || + (hw->subsystem_device_id & 0xF0) == TXGBE_ID_MAC_SGMII) { + if (!cmd->base.autoneg) { + if (advertising == + (ADVERTISED_10000baseKR_Full | + ADVERTISED_1000baseKX_Full | + ADVERTISED_10000baseKX4_Full)) + return -EINVAL; + } else { + err = txgbe_set_link_to_kr(hw, 1); + return err; + } + advertised = 0; + if (advertising & ADVERTISED_10000baseKR_Full) { + err = txgbe_set_link_to_kr(hw, 1); + advertised |= TXGBE_LINK_SPEED_10GB_FULL; + return 
err;
+		} else if (advertising & ADVERTISED_10000baseKX4_Full) {
+			err = txgbe_set_link_to_kx4(hw, 1);
+			advertised |= TXGBE_LINK_SPEED_10GB_FULL;
+			return err;
+		} else if (advertising & ADVERTISED_1000baseKX_Full) {
+			advertised |= TXGBE_LINK_SPEED_1GB_FULL;
+			err = txgbe_set_link_to_kx(hw, TXGBE_LINK_SPEED_1GB_FULL, 0);
+			return err;
+		}
+		return err;
+	} else {
+		/* in this case we currently only support 10Gb/FULL */
+		u32 speed = cmd->base.speed;
+		if ((cmd->base.autoneg == AUTONEG_ENABLE) ||
+		    (advertising != ADVERTISED_10000baseT_Full) ||
+		    (speed + cmd->base.duplex != SPEED_10000 + DUPLEX_FULL))
+			return -EINVAL;
+	}
+
+	return err;
+}
+
+static void txgbe_get_pauseparam(struct net_device *netdev,
+				 struct ethtool_pauseparam *pause)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	struct txgbe_hw *hw = &adapter->hw;
+
+	if (txgbe_device_supports_autoneg_fc(hw) &&
+	    !hw->fc.disable_fc_autoneg)
+		pause->autoneg = 1;
+	else
+		pause->autoneg = 0;
+
+	if (hw->fc.current_mode == txgbe_fc_rx_pause) {
+		pause->rx_pause = 1;
+	} else if (hw->fc.current_mode == txgbe_fc_tx_pause) {
+		pause->tx_pause = 1;
+	} else if (hw->fc.current_mode == txgbe_fc_full) {
+		pause->rx_pause = 1;
+		pause->tx_pause = 1;
+	}
+}
+
+static int txgbe_set_pauseparam(struct net_device *netdev,
+				struct ethtool_pauseparam *pause)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	struct txgbe_hw *hw = &adapter->hw;
+	struct txgbe_fc_info fc = hw->fc;
+
+	/* some devices do not support autoneg of flow control */
+	if ((pause->autoneg == AUTONEG_ENABLE) &&
+	    !txgbe_device_supports_autoneg_fc(hw))
+		return -EINVAL;
+
+	fc.disable_fc_autoneg = (pause->autoneg != AUTONEG_ENABLE);
+
+	if ((pause->rx_pause && pause->tx_pause) || pause->autoneg)
+		fc.requested_mode = txgbe_fc_full;
+	else if (pause->rx_pause)
+		fc.requested_mode = txgbe_fc_rx_pause;
+	else if (pause->tx_pause)
+		fc.requested_mode = txgbe_fc_tx_pause;
+	else
+		fc.requested_mode = txgbe_fc_none;
+
+	/* if the thing changed then we'll update and use new autoneg */
+	if (memcmp(&fc, &hw->fc, sizeof(struct txgbe_fc_info))) {
+		hw->fc = fc;
+		if (netif_running(netdev))
+			txgbe_reinit_locked(adapter);
+		else
+			txgbe_reset(adapter);
+	}
+
+	return 0;
+}
+
+static u32 txgbe_get_msglevel(struct net_device *netdev)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	return adapter->msg_enable;
+}
+
+static void txgbe_set_msglevel(struct net_device *netdev, u32 data)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	adapter->msg_enable = data;
+}
+
+#define TXGBE_REGS_LEN  4096
+static int txgbe_get_regs_len(struct net_device __always_unused *netdev)
+{
+	return TXGBE_REGS_LEN * sizeof(u32);
+}
+
+#define TXGBE_GET_STAT(_A_, _R_)	(_A_->stats._R_)
+
+static void txgbe_get_regs(struct net_device *netdev, struct ethtool_regs *regs,
+			   void *p)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 *regs_buff = p;
+	u32 i;
+	u32 id = 0;
+
+	memset(p, 0, TXGBE_REGS_LEN * sizeof(u32));
+	regs_buff[TXGBE_REGS_LEN - 1] = 0x55555555;
+
+	regs->version = hw->revision_id << 16 |
+			hw->device_id;
+
+	/* Global Registers */
+	/* chip control */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_MIS_PWR);//0
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_MIS_CTL);//1
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_MIS_PF_SM);//2
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_MIS_RST);//3
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_MIS_ST);//4
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_MIS_SWSM);//5
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_MIS_RST_ST);//6
+	/* pvt sensor */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TS_CTL);//7
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TS_EN);//8
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TS_ST);//9
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TS_ALARM_THRE);//10
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TS_DALARM_THRE);//11
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TS_INT_EN);//12
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TS_ALARM_ST);//13
+	/* Fmgr Register */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_SPI_CMD);//14
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_SPI_DATA);//15
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_SPI_STATUS);//16
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_SPI_USR_CMD);//17
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_SPI_CMDCFG0);//18
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_SPI_CMDCFG1);//19
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_SPI_ILDR_STATUS);//20
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_SPI_ILDR_SWPTR);//21
+
+	/* Port Registers */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_PORT_CTL);//22
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_PORT_ST);//23
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_EX_VTYPE);//24
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_VXLAN);//25
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_VXLAN_GPE);//26
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_GENEVE);//27
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_TEREDO);//28
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_TCP_TIME);//29
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_LED_CTL);//30
+	/* GPIO */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_GPIO_DR);//31
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_GPIO_DDR);//32
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_GPIO_CTL);//33
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_GPIO_INTEN);//34
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_GPIO_INTMASK);//35
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_GPIO_INTSTATUS);//36
+	/* I2C */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CON);//37
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_TAR);//38
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_DATA_CMD);//39
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_SS_SCL_HCNT);//40
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_SS_SCL_LCNT);//41
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_FS_SCL_HCNT);//42
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_FS_SCL_LCNT);//43
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_HS_SCL_HCNT);//44
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_INTR_STAT);//45
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_INTR_MASK);//46
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_RAW_INTR_STAT);//47
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_RX_TL);//48
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_TX_TL);//49
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CLR_INTR);//50
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CLR_RX_UNDER);//51
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CLR_RX_OVER);//52
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CLR_TX_OVER);//53
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CLR_RD_REQ);//54
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CLR_TX_ABRT);//55
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CLR_RX_DONE);//56
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CLR_ACTIVITY);//57
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CLR_STOP_DET);//58
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CLR_START_DET);//59
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CLR_GEN_CALL);//60
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_ENABLE);//61
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_STATUS);//62
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_TXFLR);//63
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_RXFLR);//64
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_SDA_HOLD);//65
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_TX_ABRT_SOURCE);//66
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_SDA_SETUP);//67
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_ENABLE_STATUS);//68
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_FS_SPKLEN);//69
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_HS_SPKLEN);//70
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_SCL_STUCK_TIMEOUT);//71
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_SDA_STUCK_TIMEOUT);//72
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CLR_SCL_STUCK_DET);//73
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_DEVICE_ID);//74
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_COMP_PARAM_1);//75
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_COMP_VERSION);//76
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_COMP_TYPE);//77
+	/* TX TPH */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_TPH_TDESC);//78
+	/* RX TPH */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_TPH_RDESC);//79
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_TPH_RHDR);//80
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_TPH_RPL);//81
+
+	/* TDMA */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_CTL);//82
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_VF_TE(0));//83
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_VF_TE(1));//84
+	for (i = 0; i < 8; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_PB_THRE(i));//85-92
+	}
+	for (i = 0; i < 4; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_LLQ(i));//93-96
+	}
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_ETYPE_LB_L);//97
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_ETYPE_LB_H);//98
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_ETYPE_AS_L);//99
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_ETYPE_AS_H);//100
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_MAC_AS_L);//101
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_MAC_AS_H);//102
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_VLAN_AS_L);//103
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_VLAN_AS_H);//104
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_TCP_FLG_L);//105
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_TCP_FLG_H);//106
+	for (i = 0; i < 64; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_VLAN_INS(i));//107-234
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_ETAG_INS(i));
+	}
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_PBWARB_CTL);//235
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_MMW);//236
+	for (i = 0; i < 8; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_PBWARB_CFG(i));//237-244
+	}
+	for (i = 0; i < 128; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_VM_CREDIT(i));//245-372
+	}
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_FC_EOF);//373
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_FC_SOF);//374
+
+	/* RDMA */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDM_ARB_CTL);//375
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDM_VF_RE(0));//376
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDM_VF_RE(1));//377
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDM_RSC_CTL);//378
+	for (i = 0; i < 8; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDM_ARB_CFG(i));//379-386
+	}
+	for (i = 0; i < 4; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDM_PF_QDE(i));//387-394
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDM_PF_HIDE(i));
+	}
+
+	/* RDB */
+	/*flow control */
+	for (i = 0; i < 4; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_RFCV(i));//395-398
+	}
+	for (i = 0; i < 8; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_RFCL(i));//399-414
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_RFCH(i));
+	}
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_RFCRT);//415
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_RFCC);//416
+	/* receive packet buffer */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_PB_CTL);//417
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_PB_WRAP);//418
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_UP2TC);//419
+	for (i = 0; i < 8; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_PB_SZ(i));//420-435
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_MPCNT(i));
+	}
+	/* lli interrupt */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_LLI_THRE);//436
+	/* ring assignment */
+	for (i = 0; i < 64; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_PL_CFG(i));//437-500
+	}
+	for (i = 0; i < 32; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_RSSTBL(i));//501-532
+	}
+	for (i = 0; i < 10; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_RSSRK(i));//533-542
+	}
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_RSS_TC);//543
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_RA_CTL);//544
+	for (i = 0; i < 128; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_5T_SA(i));//545-1184
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_5T_DA(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_5T_SDP(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_5T_CTL0(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_5T_CTL1(i));
+	}
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_SYN_CLS);//1185
+	for (i = 0; i < 8; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_ETYPE_CLS(i));//1186-1193
+	}
+	/* fcoe redirection table */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FCRE_CTL);//1194
+	for (i = 0; i < 8; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FCRE_TBL(i));//1195-1202
+	}
+	/*flow director */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_CTL);//1203
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_HKEY);//1204
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_SKEY);//1205
+	for (i = 0; i < 16; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_FLEX_CFG(i));//1206-1221
+	}
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_FREE);//1222
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_LEN);//1223
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_USE_ST);//1224
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_FAIL_ST);//1225
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_MATCH);//1226
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_MISS);//1227
+	for (i = 0; i < 3; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_IP6(i));//1228-1230
+	}
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_SA);//1231
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_DA);//1232
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_PORT);//1233
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_FLEX);//1234
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_HASH);//1235
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_CMD);//1236
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_DA4_MSK);//1237
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_SA4_MSK);//1238
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_TCP_MSK);//1239
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_UDP_MSK);//1240
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_SCTP_MSK);//1241
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_IP6_MSK);//1242
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_OTHER_MSK);//1243
+
+	/* PSR */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_CTL);//1244
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_VLAN_CTL);//1245
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_VM_CTL);//1246
+	for (i = 0; i < 64; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_VM_L2CTL(i));//1247-1310
+	}
+	for (i = 0; i < 8; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_ETYPE_SWC(i));//1311-1318
+	}
+	for (i = 0; i < 128; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_MC_TBL(i));//1319-1702
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_UC_TBL(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_VLAN_TBL(i));
+	}
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_MAC_SWC_AD_L);//1703
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_MAC_SWC_AD_H);//1704
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_MAC_SWC_VM_L);//1705
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_MAC_SWC_VM_H);//1706
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_MAC_SWC_IDX);//1707
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_VLAN_SWC);//1708
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_VLAN_SWC_VM_L);//1709
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_VLAN_SWC_VM_H);//1710
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_VLAN_SWC_IDX);//1711
+	for (i = 0; i < 4; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_MR_CTL(i));//1712-1731
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_MR_VLAN_L(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_MR_VLAN_H(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_MR_VM_L(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_MR_VM_H(i));
+	}
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_1588_CTL);//1732
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_1588_STMPL);//1733
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_1588_STMPH);//1734
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_1588_ATTRL);//1735
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_1588_ATTRH);//1736
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_1588_MSGTYPE);//1737
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_WKUP_CTL);//1738
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_WKUP_IPV);//1739
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_LAN_FLEX_CTL);//1740
+	for (i = 0; i < 4; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_WKUP_IP4TBL(i));//1741-1748
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_WKUP_IP6TBL(i));
+	}
+	for (i = 0; i < 16; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_LAN_FLEX_DW_L(i));//1749-1796
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_LAN_FLEX_DW_H(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_LAN_FLEX_MSK(i));
+	}
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_LAN_FLEX_CTL);//1797
+
+	/* TDB */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDB_RFCS);//1798
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDB_PB_SZ(0));//1799
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDB_UP2TC);//1800
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDB_PBRARB_CTL);//1801
+	for (i = 0; i < 8; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDB_PBRARB_CFG(i));//1802-1809
+	}
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDB_MNG_TC);//1810
+
+	/* tsec */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_CTL);//1811
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_ST);//1812
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_BUF_AF);//1813
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_BUF_AE);//1814
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_MIN_IFG);//1815
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_1588_CTL);//1816
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_1588_STMPL);//1817
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_1588_STMPH);//1818
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_1588_SYSTIML);//1819
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_1588_SYSTIMH);//1820
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_1588_INC);//1821
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_1588_ADJL);//1822
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_1588_ADJH);//1823
+
+	/* RSEC */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RSC_CTL);//1824
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RSC_ST);//1825
+
+	/* BAR register */
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_MISC_IC);//1826
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_MISC_ICS);//1827
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_MISC_IEN);//1828
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_GPIE);//1829
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_IC(0));//1830
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_IC(1));//1831
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_ICS(0));//1832
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_ICS(1));//1833
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_IMS(0));//1834
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_IMS(1));//1835
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_IMC(0));//1836
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_IMC(1));//1837
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_ISB_ADDR_L);//1838
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_ISB_ADDR_H);//1839
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_ITRSEL);//1840
+	for (i = 0; i < 64; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_ITR(i));//1841-1968
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_IVAR(i));
+	}
+	regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_MISC_IVAR);//1969
+	for (i = 0; i < 128; i++) {
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_RR_BAL(i));//1970-3249
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_RR_BAH(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_RR_WP(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_RR_RP(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_RR_CFG(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_TR_BAL(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_TR_BAH(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_TR_WP(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_TR_RP(i));
+		regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_TR_CFG(i));
+	}
+}
+
+static int txgbe_get_eeprom_len(struct net_device *netdev)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	return adapter->hw.eeprom.word_size * 2;
+}
+
+static int txgbe_get_eeprom(struct net_device *netdev,
+			    struct ethtool_eeprom *eeprom, u8 *bytes)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	struct txgbe_hw *hw = &adapter->hw;
+	u16 *eeprom_buff;
+	int first_word, last_word, eeprom_len;
+	int ret_val = 0;
+	u16 i;
+
+	if (eeprom->len == 0)
+		return -EINVAL;
+
+	eeprom->magic = hw->vendor_id | (hw->device_id << 16);
+
+	first_word = eeprom->offset >> 1;
+	last_word = (eeprom->offset + eeprom->len - 1) >> 1;
+	eeprom_len = last_word - first_word + 1;
+
+	eeprom_buff = kmalloc(sizeof(u16) * eeprom_len, GFP_KERNEL);
+	if (!eeprom_buff)
+		return -ENOMEM;
+
+	ret_val = TCALL(hw, eeprom.ops.read_buffer, first_word, eeprom_len,
+			eeprom_buff);
+
+	/* Device's eeprom is always little-endian, word addressable */
+	for (i = 0; i < eeprom_len; i++)
+		le16_to_cpus(&eeprom_buff[i]);
+
+	memcpy(bytes, (u8 *)eeprom_buff + (eeprom->offset & 1), eeprom->len);
+	kfree(eeprom_buff);
+
+	return ret_val;
+}
+
+static int txgbe_set_eeprom(struct net_device *netdev,
+			    struct ethtool_eeprom *eeprom, u8 *bytes)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	struct txgbe_hw *hw = &adapter->hw;
+	u16 *eeprom_buff;
+	void *ptr;
+	int max_len, first_word, last_word, ret_val = 0;
+	u16 i;
+
+	if (eeprom->len == 0)
+		return -EINVAL;
+
+	if (eeprom->magic != (hw->vendor_id | (hw->device_id << 16)))
+		return -EINVAL;
+
+	max_len = hw->eeprom.word_size * 2;
+
+	first_word = eeprom->offset >> 1;
+	last_word = (eeprom->offset + eeprom->len - 1) >> 1;
+	eeprom_buff = kmalloc(max_len, GFP_KERNEL);
+	if (!eeprom_buff)
+		return -ENOMEM;
+
+	ptr = eeprom_buff;
+
+	if (eeprom->offset & 1) {
+		/*
+		 * need read/modify/write of first changed EEPROM word
+		 * only the second byte of the word is being modified
+		 */
+		ret_val = TCALL(hw, eeprom.ops.read, first_word,
+				&eeprom_buff[0]);
+		if (ret_val)
+			goto err;
+
+		ptr++;
+	}
+	if (((eeprom->offset + eeprom->len) & 1) && (ret_val == 0)) {
+		/*
+		 * need read/modify/write of last changed EEPROM word
+		 * only the first byte of the word is being modified
+		 */
+		ret_val = TCALL(hw, eeprom.ops.read, last_word,
+				&eeprom_buff[last_word - first_word]);
+		if (ret_val)
+			goto err;
+	}
+
+	/* Device's eeprom is always little-endian, word addressable */
+	for (i = 0; i < last_word - first_word + 1; i++)
+		le16_to_cpus(&eeprom_buff[i]);
+
+	memcpy(ptr, bytes, eeprom->len);
+
+	for (i = 0; i < last_word - first_word + 1; i++)
+		cpu_to_le16s(&eeprom_buff[i]);
+
+	ret_val = TCALL(hw, eeprom.ops.write_buffer, first_word,
+			last_word - first_word + 1,
+			eeprom_buff);
+
+	/* Update the checksum */
+	if (ret_val == 0)
+		TCALL(hw, eeprom.ops.update_checksum);
+
+err:
+	kfree(eeprom_buff);
+	return ret_val;
+}
+
+static void txgbe_get_drvinfo(struct net_device *netdev,
+			      struct ethtool_drvinfo *drvinfo)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+
+	strncpy(drvinfo->driver, txgbe_driver_name,
+		sizeof(drvinfo->driver) - 1);
+	strncpy(drvinfo->version, txgbe_driver_version,
+		sizeof(drvinfo->version) - 1);
+	strncpy(drvinfo->fw_version, adapter->eeprom_id,
+		sizeof(drvinfo->fw_version) - 1);
+	strncpy(drvinfo->bus_info, pci_name(adapter->pdev),
+		sizeof(drvinfo->bus_info) - 1);
+	if (adapter->num_tx_queues <= TXGBE_NUM_RX_QUEUES) {
+		drvinfo->n_stats = TXGBE_STATS_LEN -
+			(TXGBE_NUM_RX_QUEUES - adapter->num_tx_queues) *
+			(sizeof(struct txgbe_queue_stats) / sizeof(u64)) * 2;
+	} else {
+		drvinfo->n_stats = TXGBE_STATS_LEN;
+	}
+	drvinfo->testinfo_len = TXGBE_TEST_LEN;
+	drvinfo->regdump_len = txgbe_get_regs_len(netdev);
+}
+
+static void txgbe_get_ringparam(struct net_device *netdev,
+				struct ethtool_ringparam *ring)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+
+	ring->rx_max_pending = TXGBE_MAX_RXD;
+	ring->tx_max_pending = TXGBE_MAX_TXD;
+	ring->rx_mini_max_pending = 0;
+	ring->rx_jumbo_max_pending = 0;
+	ring->rx_pending = adapter->rx_ring_count;
+	ring->tx_pending = adapter->tx_ring_count;
+	ring->rx_mini_pending = 0;
+	ring->rx_jumbo_pending = 0;
+}
+
+static int txgbe_set_ringparam(struct net_device *netdev,
+			       struct ethtool_ringparam *ring)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	struct txgbe_ring *temp_ring;
+	int i, err = 0;
+	u32 new_rx_count, new_tx_count;
+
+	if ((ring->rx_mini_pending) || (ring->rx_jumbo_pending))
+		return -EINVAL;
+
+	new_tx_count = clamp_t(u32, ring->tx_pending,
+			       TXGBE_MIN_TXD, TXGBE_MAX_TXD);
+	new_tx_count = ALIGN(new_tx_count, TXGBE_REQ_TX_DESCRIPTOR_MULTIPLE);
+
+	new_rx_count = clamp_t(u32, ring->rx_pending,
+			       TXGBE_MIN_RXD, TXGBE_MAX_RXD);
+	new_rx_count = ALIGN(new_rx_count, TXGBE_REQ_RX_DESCRIPTOR_MULTIPLE);
+
+	if ((new_tx_count == adapter->tx_ring_count) &&
+	    (new_rx_count == adapter->rx_ring_count)) {
+		/* nothing to do */
+		return 0;
+	}
+
+	while (test_and_set_bit(__TXGBE_RESETTING, &adapter->state))
+		usleep_range(1000, 2000);
+
+	if (!netif_running(adapter->netdev)) {
+		for (i = 0; i < adapter->num_tx_queues; i++)
+			adapter->tx_ring[i]->count = new_tx_count;
+		for (i = 0; i < adapter->num_rx_queues; i++)
+			adapter->rx_ring[i]->count = new_rx_count;
+		adapter->tx_ring_count = new_tx_count;
+		adapter->rx_ring_count = new_rx_count;
+		goto clear_reset;
+	}
+
+	/* allocate temporary buffer to store rings in */
+	i = max_t(int, adapter->num_tx_queues, adapter->num_rx_queues);
+	temp_ring = vmalloc(i * sizeof(struct txgbe_ring));
+
+	if (!temp_ring) {
+		err = -ENOMEM;
+		goto clear_reset;
+	}
+
+	txgbe_down(adapter);
+
+	/*
+	 * Setup new Tx resources and free the old Tx resources in that order.
+	 * We can then assign the new resources to the rings via a memcpy.
+	 * The advantage to this approach is that we are guaranteed to still
+	 * have resources even in the case of an allocation failure.
+	 */
+	if (new_tx_count != adapter->tx_ring_count) {
+		for (i = 0; i < adapter->num_tx_queues; i++) {
+			memcpy(&temp_ring[i], adapter->tx_ring[i],
+			       sizeof(struct txgbe_ring));
+
+			temp_ring[i].count = new_tx_count;
+			err = txgbe_setup_tx_resources(&temp_ring[i]);
+			if (err) {
+				while (i) {
+					i--;
+					txgbe_free_tx_resources(&temp_ring[i]);
+				}
+				goto err_setup;
+			}
+		}
+
+		for (i = 0; i < adapter->num_tx_queues; i++) {
+			txgbe_free_tx_resources(adapter->tx_ring[i]);
+
+			memcpy(adapter->tx_ring[i], &temp_ring[i],
+			       sizeof(struct txgbe_ring));
+		}
+
+		adapter->tx_ring_count = new_tx_count;
+	}
+
+	/* Repeat the process for the Rx rings if needed */
+	if (new_rx_count != adapter->rx_ring_count) {
+		for (i = 0; i < adapter->num_rx_queues; i++) {
+			memcpy(&temp_ring[i], adapter->rx_ring[i],
+			       sizeof(struct txgbe_ring));
+
+			temp_ring[i].count = new_rx_count;
+			err = txgbe_setup_rx_resources(&temp_ring[i]);
+			if (err) {
+				while (i) {
+					i--;
+					txgbe_free_rx_resources(&temp_ring[i]);
+				}
+				goto err_setup;
+			}
+		}
+
+		for (i = 0; i < adapter->num_rx_queues; i++) {
+			txgbe_free_rx_resources(adapter->rx_ring[i]);
+			memcpy(adapter->rx_ring[i], &temp_ring[i],
+			       sizeof(struct txgbe_ring));
+		}
+
+		adapter->rx_ring_count = new_rx_count;
+	}
+
+err_setup:
+	txgbe_up(adapter);
+	vfree(temp_ring);
+clear_reset:
+	clear_bit(__TXGBE_RESETTING, &adapter->state);
+	return err;
+}
+
+static int txgbe_get_sset_count(struct net_device *netdev, int sset)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+
+	switch (sset) {
+	case ETH_SS_TEST:
+		return TXGBE_TEST_LEN;
+	case ETH_SS_STATS:
+		if (adapter->num_tx_queues <= TXGBE_NUM_RX_QUEUES) {
+			return TXGBE_STATS_LEN -
+			       (TXGBE_NUM_RX_QUEUES - adapter->num_tx_queues) *
+			       (sizeof(struct txgbe_queue_stats) / sizeof(u64)) * 2;
+		} else {
+			return TXGBE_STATS_LEN;
+		}
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
+static void txgbe_get_ethtool_stats(struct net_device *netdev,
+				    struct ethtool_stats *stats, u64 *data)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	struct rtnl_link_stats64 temp;
+	const struct rtnl_link_stats64 *net_stats;
+
+	u64 *queue_stat;
+	int stat_count, k;
+	unsigned int start;
+	struct txgbe_ring *ring;
+	int i, j;
+	char *p;
+
+	txgbe_update_stats(adapter);
+	net_stats = dev_get_stats(netdev, &temp);
+
+	for (i = 0; i < TXGBE_NETDEV_STATS_LEN; i++) {
+		p = (char *)net_stats + txgbe_gstrings_net_stats[i].stat_offset;
+		data[i] = (txgbe_gstrings_net_stats[i].sizeof_stat ==
+			   sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
+	}
+
+	for (j = 0; j < TXGBE_GLOBAL_STATS_LEN; j++, i++) {
+		p = (char *)adapter + txgbe_gstrings_stats[j].stat_offset;
+		data[i] = (txgbe_gstrings_stats[j].sizeof_stat ==
+			   sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
+	}
+
+	for (j = 0; j < adapter->num_tx_queues; j++) {
+		ring = adapter->tx_ring[j];
+		if (!ring) {
+			data[i++] = 0;
+			data[i++] = 0;
+#ifdef BP_EXTENDED_STATS
+			data[i++] = 0;
+			data[i++] = 0;
+			data[i++] = 0;
+#endif
+			continue;
+		}
+
+		do {
+			start = u64_stats_fetch_begin_irq(&ring->syncp);
+			data[i] = ring->stats.packets;
+			data[i+1] = ring->stats.bytes;
+		} while (u64_stats_fetch_retry_irq(&ring->syncp, start));
+		i += 2;
+	}
+	for (j = 0; j < adapter->num_rx_queues; j++) {
+		ring = adapter->rx_ring[j];
+		if (!ring) {
+			data[i++] = 0;
+			data[i++] = 0;
+			continue;
+		}
+
+		do {
+			start = u64_stats_fetch_begin_irq(&ring->syncp);
+			data[i] = ring->stats.packets;
+			data[i+1] = ring->stats.bytes;
+		} while (u64_stats_fetch_retry_irq(&ring->syncp, start));
+		i += 2;
+	}
+	for (j = 0; j < TXGBE_MAX_PACKET_BUFFERS; j++) {
+		data[i++] = adapter->stats.pxontxc[j];
+		data[i++] = adapter->stats.pxofftxc[j];
+	}
+	for (j = 0; j < TXGBE_MAX_PACKET_BUFFERS; j++) {
+		data[i++] = adapter->stats.pxonrxc[j];
+		data[i++] = adapter->stats.pxoffrxc[j];
+	}
+
+	stat_count = sizeof(struct vf_stats) / sizeof(u64);
+	for (j = 0; j < adapter->num_vfs; j++) {
+		queue_stat = (u64 *)&adapter->vfinfo[j].vfstats;
+		for (k = 0; k < stat_count; k++)
+			data[i + k] = queue_stat[k];
+		queue_stat = (u64 *)&adapter->vfinfo[j].saved_rst_vfstats;
+		for (k = 0; k < stat_count; k++)
+			data[i + k] += queue_stat[k];
+		i += k;
+	}
+}
+
+static void txgbe_get_strings(struct net_device *netdev, u32 stringset,
+			      u8 *data)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	char *p = (char *)data;
+	int i;
+
+	switch (stringset) {
+	case ETH_SS_TEST:
+		memcpy(data, *txgbe_gstrings_test,
+		       TXGBE_TEST_LEN * ETH_GSTRING_LEN);
+		break;
+	case ETH_SS_STATS:
+		for (i = 0; i < TXGBE_NETDEV_STATS_LEN; i++) {
+			memcpy(p, txgbe_gstrings_net_stats[i].stat_string,
+			       ETH_GSTRING_LEN);
+			p += ETH_GSTRING_LEN;
+		}
+		for (i = 0; i < TXGBE_GLOBAL_STATS_LEN; i++) {
+			memcpy(p, txgbe_gstrings_stats[i].stat_string,
+			       ETH_GSTRING_LEN);
+			p += ETH_GSTRING_LEN;
+		}
+		for (i = 0; i < adapter->num_tx_queues; i++) {
+			sprintf(p, "tx_queue_%u_packets", i);
+			p += ETH_GSTRING_LEN;
+			sprintf(p, "tx_queue_%u_bytes", i);
+			p += ETH_GSTRING_LEN;
+		}
+		for (i = 0; i < adapter->num_rx_queues; i++) {
+			sprintf(p, "rx_queue_%u_packets", i);
+			p += ETH_GSTRING_LEN;
+			sprintf(p, "rx_queue_%u_bytes", i);
+			p += ETH_GSTRING_LEN;
+		}
+		for (i = 0; i < TXGBE_MAX_PACKET_BUFFERS; i++) {
+			sprintf(p, "tx_pb_%u_pxon", i);
+			p += ETH_GSTRING_LEN;
+			sprintf(p, "tx_pb_%u_pxoff", i);
+			p += ETH_GSTRING_LEN;
+		}
+		for (i = 0; i < TXGBE_MAX_PACKET_BUFFERS; i++) {
+			sprintf(p, "rx_pb_%u_pxon", i);
+			p += ETH_GSTRING_LEN;
+			sprintf(p, "rx_pb_%u_pxoff", i);
+			p += ETH_GSTRING_LEN;
+		}
+		for (i = 0; i < adapter->num_vfs; i++) {
+			sprintf(p, "VF %d Rx Packets", i);
+			p += ETH_GSTRING_LEN;
+			sprintf(p, "VF %d Rx Bytes", i);
+			p += ETH_GSTRING_LEN;
+			sprintf(p, "VF %d Tx Packets", i);
+			p += ETH_GSTRING_LEN;
+			sprintf(p, "VF %d Tx Bytes", i);
+			p += ETH_GSTRING_LEN;
+			sprintf(p, "VF %d MC Packets", i);
+			p += ETH_GSTRING_LEN;
+		}
+		/* BUG_ON(p - data != TXGBE_STATS_LEN * ETH_GSTRING_LEN); */
+		break;
+	}
+}
+
+static int txgbe_link_test(struct txgbe_adapter *adapter, u64 *data)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+	bool link_up;
+	u32 link_speed = 0;
+
+	if (TXGBE_REMOVED(hw->hw_addr)) {
+		*data = 1;
+		return 1;
+	}
+	*data = 0;
+	TCALL(hw, mac.ops.check_link, &link_speed, &link_up, true);
+	if (link_up)
+		return *data;
+	else
+		*data = 1;
+	return *data;
+}
+
+/* ethtool register test data */
+struct txgbe_reg_test {
+	u32 reg;
+	u8  array_len;
+	u8  test_type;
+	u32 mask;
+	u32 write;
+};
+
+/* In the hardware, registers are laid out either singly, in arrays
+ * spaced 0x40 bytes apart, or in contiguous tables.  We assume
+ * most tests take place on arrays or single registers (handled
+ * as a single-element array) and special-case the tables.
+ * Table tests are always pattern tests.
+ *
+ * We also make provision for some required setup steps by specifying
+ * registers to be written without any read-back testing.
+ */
+
+#define PATTERN_TEST	1
+#define SET_READ_TEST	2
+#define WRITE_NO_TEST	3
+#define TABLE32_TEST	4
+#define TABLE64_TEST_LO	5
+#define TABLE64_TEST_HI	6
+
+/* default sapphire register test */
+static struct txgbe_reg_test reg_test_sapphire[] = {
+	{ TXGBE_RDB_RFCL(0), 1, PATTERN_TEST, 0x8007FFF0, 0x8007FFF0 },
+	{ TXGBE_RDB_RFCH(0), 1, PATTERN_TEST, 0x8007FFF0, 0x8007FFF0 },
+	{ TXGBE_PSR_VLAN_CTL, 1, PATTERN_TEST, 0x00000000, 0x00000000 },
+	{ TXGBE_PX_RR_BAL(0), 4, PATTERN_TEST, 0xFFFFFF80, 0xFFFFFF80 },
+	{ TXGBE_PX_RR_BAH(0), 4, PATTERN_TEST, 0xFFFFFFFF, 0xFFFFFFFF },
+	{ TXGBE_PX_RR_CFG(0), 4, WRITE_NO_TEST, 0, TXGBE_PX_RR_CFG_RR_EN },
+	{ TXGBE_RDB_RFCH(0), 1, PATTERN_TEST, 0x8007FFF0, 0x8007FFF0 },
+	{ TXGBE_RDB_RFCV(0), 1, PATTERN_TEST, 0xFFFFFFFF, 0xFFFFFFFF },
+	{ TXGBE_PX_TR_BAL(0), 4, PATTERN_TEST, 0xFFFFFFFF, 0xFFFFFFFF },
+	{ TXGBE_PX_TR_BAH(0), 4, PATTERN_TEST, 0xFFFFFFFF, 0xFFFFFFFF },
+	{ TXGBE_RDB_PB_CTL, 1, SET_READ_TEST, 0x00000001, 0x00000001 },
+	{ TXGBE_PSR_MC_TBL(0), 128, TABLE32_TEST, 0xFFFFFFFF, 0xFFFFFFFF },
+	{ .reg = 0 }
+};
+
+static bool reg_pattern_test(struct txgbe_adapter *adapter, u64 *data, int reg,
+			     u32 mask, u32 write)
+{
+	u32 pat, val, before;
+	static const u32 test_pattern[] = {
+		0x5A5A5A5A, 0xA5A5A5A5, 0x00000000, 0xFFFFFFFF
+	};
+
+	if (TXGBE_REMOVED(adapter->hw.hw_addr)) {
+		*data = 1;
+		return true;
+	}
+	for (pat = 0; pat < ARRAY_SIZE(test_pattern); pat++) {
+		before = rd32(&adapter->hw, reg);
+		wr32(&adapter->hw, reg, test_pattern[pat] & write);
+		val = rd32(&adapter->hw, reg);
+		if (val != (test_pattern[pat] & write & mask)) {
+			e_err(drv,
+			      "pattern test reg %04X failed: got 0x%08X "
+			      "expected 0x%08X\n",
+			      reg, val, test_pattern[pat] & write & mask);
+			*data = reg;
+			wr32(&adapter->hw, reg, before);
+			return true;
+		}
+		wr32(&adapter->hw, reg, before);
+	}
+	return false;
+}
+
+static bool reg_set_and_check(struct txgbe_adapter *adapter, u64 *data, int reg,
+			      u32 mask, u32 write)
+{
+	u32 val, before;
+
+	if (TXGBE_REMOVED(adapter->hw.hw_addr)) {
+		*data = 1;
+		return true;
+	}
+	before = rd32(&adapter->hw, reg);
+	wr32(&adapter->hw, reg, write & mask);
+	val = rd32(&adapter->hw, reg);
+	if ((write & mask) != (val & mask)) {
+		e_err(drv,
+		      "set/check reg %04X test failed: got 0x%08X expected "
+		      "0x%08X\n",
+		      reg, (val & mask), (write & mask));
+		*data = reg;
+		wr32(&adapter->hw, reg, before);
+		return true;
+	}
+	wr32(&adapter->hw, reg, before);
+	return false;
+}
+
+static bool txgbe_reg_test(struct txgbe_adapter *adapter, u64 *data)
+{
+	struct txgbe_reg_test *test;
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 i;
+
+	if (TXGBE_REMOVED(hw->hw_addr)) {
+		e_err(drv, "Adapter removed - register test blocked\n");
+		*data = 1;
+		return true;
+	}
+
+	test = reg_test_sapphire;
+
+	/*
+	 * Perform the remainder of the register test, looping through
+	 * the test table until we either fail or reach the null entry.
+	 */
+	while (test->reg) {
+		for (i = 0; i < test->array_len; i++) {
+			bool b = false;
+
+			switch (test->test_type) {
+			case PATTERN_TEST:
+				b = reg_pattern_test(adapter, data,
+						     test->reg + (i * 0x40),
+						     test->mask,
+						     test->write);
+				break;
+			case SET_READ_TEST:
+				b = reg_set_and_check(adapter, data,
+						      test->reg + (i * 0x40),
+						      test->mask,
+						      test->write);
+				break;
+			case WRITE_NO_TEST:
+				wr32(hw, test->reg + (i * 0x40),
+				     test->write);
+				break;
+			case TABLE32_TEST:
+				b = reg_pattern_test(adapter, data,
+						     test->reg + (i * 4),
+						     test->mask,
+						     test->write);
+				break;
+			case TABLE64_TEST_LO:
+				b = reg_pattern_test(adapter, data,
+						     test->reg + (i * 8),
+						     test->mask,
+						     test->write);
+				break;
+			case TABLE64_TEST_HI:
+				b = reg_pattern_test(adapter, data,
+						     (test->reg + 4) + (i * 8),
+						     test->mask,
+						     test->write);
+				break;
+			}
+			if (b)
+				return true;
+		}
+		test++;
+	}
+
+	*data = 0;
+	return false;
+}
+
+static bool txgbe_eeprom_test(struct txgbe_adapter *adapter, u64 *data)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+
+	if (TCALL(hw, eeprom.ops.validate_checksum, NULL)) {
+		*data = 1;
+		return true;
+	} else {
+		*data = 0;
+		return false;
+	}
+}
+
+static irqreturn_t txgbe_test_intr(int __always_unused irq, void *data)
+{
+	struct net_device *netdev = (struct net_device *)data;
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	u64 icr;
+
+	/* get misc interrupt, as cannot get ring interrupt status */
+	icr = txgbe_misc_isb(adapter, TXGBE_ISB_VEC1);
+	icr <<= 32;
+	icr |= txgbe_misc_isb(adapter, TXGBE_ISB_VEC0);
+
+	adapter->test_icr = icr;
+
+	return IRQ_HANDLED;
+}
+
+static int txgbe_intr_test(struct txgbe_adapter *adapter, u64 *data)
+{
+	struct net_device *netdev = adapter->netdev;
+	u64 mask;
+	u32 i = 0, shared_int = true;
+	u32 irq = adapter->pdev->irq;
+
+	if (TXGBE_REMOVED(adapter->hw.hw_addr)) {
+		*data = 1;
+		return -1;
+	}
+	*data = 0;
+
+	/* Hook up test interrupt handler just for this test */
+	if (adapter->msix_entries) {
+		/* NOTE: we don't test MSI-X interrupts here, yet */
+		return 0;
+	} else if (adapter->flags & TXGBE_FLAG_MSI_ENABLED) {
+		shared_int = false;
+		if (request_irq(irq, &txgbe_test_intr, 0, netdev->name,
+				netdev)) {
+			*data = 1;
+			return -1;
+		}
+	} else if (!request_irq(irq, &txgbe_test_intr, IRQF_PROBE_SHARED,
+				netdev->name, netdev)) {
+		shared_int = false;
+	} else if (request_irq(irq, &txgbe_test_intr, IRQF_SHARED,
+			       netdev->name, netdev)) {
+		*data = 1;
+		return -1;
+	}
+	e_info(hw, "testing %s interrupt\n",
+	       (shared_int ? "shared" : "unshared"));
+
+	/* Disable all the interrupts */
+	txgbe_irq_disable(adapter);
+	TXGBE_WRITE_FLUSH(&adapter->hw);
+	usleep_range(10000, 20000);
+
+	/* Test each interrupt */
+	for (; i < 1; i++) {
+		/* Interrupt to test */
+		mask = 1ULL << i;
+
+		if (!shared_int) {
+			/*
+			 * Disable the interrupts to be reported in
+			 * the cause register and then force the same
+			 * interrupt and see if one gets posted.  If
+			 * an interrupt was posted to the bus, the
+			 * test failed.
+			 */
+			adapter->test_icr = 0;
+			txgbe_intr_disable(&adapter->hw, ~mask);
+			txgbe_intr_trigger(&adapter->hw, mask);
+			TXGBE_WRITE_FLUSH(&adapter->hw);
+			usleep_range(10000, 20000);
+
+			if (adapter->test_icr & mask) {
+				*data = 3;
+				break;
+			}
+		}
+
+		/*
+		 * Enable the interrupt to be reported in the cause
+		 * register and then force the same interrupt and see
+		 * if one gets posted.  If an interrupt was not posted
+		 * to the bus, the test failed.
+		 */
+		adapter->test_icr = 0;
+		txgbe_intr_disable(&adapter->hw, TXGBE_INTR_ALL);
+		txgbe_intr_trigger(&adapter->hw, mask);
+		TXGBE_WRITE_FLUSH(&adapter->hw);
+		usleep_range(10000, 20000);
+
+		if (!(adapter->test_icr & mask)) {
+			*data = 4;
+			break;
+		}
+	}
+
+	/* Disable all the interrupts */
+	txgbe_intr_disable(&adapter->hw, TXGBE_INTR_ALL);
+	TXGBE_WRITE_FLUSH(&adapter->hw);
+	usleep_range(10000, 20000);
+
+	/* Unhook test interrupt handler */
+	free_irq(irq, netdev);
+
+	return *data;
+}
+
+static void txgbe_free_desc_rings(struct txgbe_adapter *adapter)
+{
+	struct txgbe_ring *tx_ring = &adapter->test_tx_ring;
+	struct txgbe_ring *rx_ring = &adapter->test_rx_ring;
+	struct txgbe_hw *hw = &adapter->hw;
+
+	/* shut down the DMA engines now so they can be reinitialized later */
+
+	/* first Rx */
+	TCALL(hw, mac.ops.disable_rx);
+	txgbe_disable_rx_queue(adapter, rx_ring);
+
+	/* now Tx */
+	wr32(hw, TXGBE_PX_TR_CFG(tx_ring->reg_idx), 0);
+
+	wr32m(hw, TXGBE_TDM_CTL, TXGBE_TDM_CTL_TE, 0);
+
+	txgbe_reset(adapter);
+
+	txgbe_free_tx_resources(&adapter->test_tx_ring);
+	txgbe_free_rx_resources(&adapter->test_rx_ring);
+}
+
+static int txgbe_setup_desc_rings(struct txgbe_adapter *adapter)
+{
+	struct txgbe_ring *tx_ring = &adapter->test_tx_ring;
+	struct txgbe_ring *rx_ring = &adapter->test_rx_ring;
+	struct txgbe_hw *hw = &adapter->hw;
+	int ret_val;
+	int err;
+
+	TCALL(hw, mac.ops.setup_rxpba, 0, 0, PBA_STRATEGY_EQUAL);
+
+	/* Setup Tx descriptor ring and Tx buffers */
+	tx_ring->count = TXGBE_DEFAULT_TXD;
+	tx_ring->queue_index = 0;
+	tx_ring->dev = pci_dev_to_dev(adapter->pdev);
+	tx_ring->netdev = adapter->netdev;
+	tx_ring->reg_idx = adapter->tx_ring[0]->reg_idx;
+
+	err = txgbe_setup_tx_resources(tx_ring);
+	if (err)
+		return 1;
+
+	wr32m(&adapter->hw, TXGBE_TDM_CTL,
+	      TXGBE_TDM_CTL_TE, TXGBE_TDM_CTL_TE);
+
+	txgbe_configure_tx_ring(adapter, tx_ring);
+
+	/* enable mac transmitter */
+	wr32m(hw, TXGBE_MAC_TX_CFG,
+	      TXGBE_MAC_TX_CFG_TE |
TXGBE_MAC_TX_CFG_SPEED_MASK, + TXGBE_MAC_TX_CFG_TE | TXGBE_MAC_TX_CFG_SPEED_10G); + + /* Setup Rx Descriptor ring and Rx buffers */ + rx_ring->count = TXGBE_DEFAULT_RXD; + rx_ring->queue_index = 0; + rx_ring->dev = pci_dev_to_dev(adapter->pdev); + rx_ring->netdev = adapter->netdev; + rx_ring->reg_idx = adapter->rx_ring[0]->reg_idx; + + err = txgbe_setup_rx_resources(rx_ring); + if (err) { + ret_val = 4; + goto err_nomem; + } + + TCALL(hw, mac.ops.disable_rx); + + txgbe_configure_rx_ring(adapter, rx_ring); + + TCALL(hw, mac.ops.enable_rx); + + return 0; + +err_nomem: + txgbe_free_desc_rings(adapter); + return ret_val; +} + +static int txgbe_setup_config(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + u32 reg_data; + + /* Setup traffic loopback */ + reg_data = rd32(hw, TXGBE_PSR_CTL); + reg_data |= TXGBE_PSR_CTL_BAM | TXGBE_PSR_CTL_UPE | + TXGBE_PSR_CTL_MPE | TXGBE_PSR_CTL_TPE; + wr32(hw, TXGBE_PSR_CTL, reg_data); + + wr32(hw, TXGBE_RSC_CTL, + (rd32(hw, TXGBE_RSC_CTL) | + TXGBE_RSC_CTL_SAVE_MAC_ERR) & ~TXGBE_RSC_CTL_SECRX_DIS); + + wr32(hw, TXGBE_RSC_LSEC_CTL, 0x4); + + wr32(hw, TXGBE_PSR_VLAN_CTL, + rd32(hw, TXGBE_PSR_VLAN_CTL) & + ~TXGBE_PSR_VLAN_CTL_VFE); + + wr32m(&adapter->hw, TXGBE_MAC_RX_CFG, + TXGBE_MAC_RX_CFG_LM, ~TXGBE_MAC_RX_CFG_LM); + wr32m(&adapter->hw, TXGBE_CFG_PORT_CTL, + TXGBE_CFG_PORT_CTL_FORCE_LKUP, ~TXGBE_CFG_PORT_CTL_FORCE_LKUP); + + + TXGBE_WRITE_FLUSH(hw); + usleep_range(10000, 20000); + + return 0; +} + +static int txgbe_setup_phy_loopback_test(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + u32 value; + /* setup phy loopback */ + value = txgbe_rd32_epcs(hw, TXGBE_PHY_MISC_CTL0); + value |= TXGBE_PHY_MISC_CTL0_TX2RX_LB_EN_0 | + TXGBE_PHY_MISC_CTL0_TX2RX_LB_EN_3_1; + + txgbe_wr32_epcs(hw, TXGBE_PHY_MISC_CTL0, value); + + value = txgbe_rd32_epcs(hw, TXGBE_SR_PMA_MMD_CTL1); + txgbe_wr32_epcs(hw, TXGBE_SR_PMA_MMD_CTL1, + value | TXGBE_SR_PMA_MMD_CTL1_LB_EN); + return 0; +} + +static void 
txgbe_phy_loopback_cleanup(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + u32 value; + + value = txgbe_rd32_epcs(hw, TXGBE_PHY_MISC_CTL0); + value &= ~(TXGBE_PHY_MISC_CTL0_TX2RX_LB_EN_0 | + TXGBE_PHY_MISC_CTL0_TX2RX_LB_EN_3_1); + + txgbe_wr32_epcs(hw, TXGBE_PHY_MISC_CTL0, value); + value = txgbe_rd32_epcs(hw, TXGBE_SR_PMA_MMD_CTL1); + txgbe_wr32_epcs(hw, TXGBE_SR_PMA_MMD_CTL1, + value & ~TXGBE_SR_PMA_MMD_CTL1_LB_EN); +} + + +static void txgbe_create_lbtest_frame(struct sk_buff *skb, + unsigned int frame_size) +{ + memset(skb->data, 0xFF, frame_size); + frame_size >>= 1; + memset(&skb->data[frame_size], 0xAA, frame_size / 2 - 1); + memset(&skb->data[frame_size + 10], 0xBE, 1); + memset(&skb->data[frame_size + 12], 0xAF, 1); +} + +static bool txgbe_check_lbtest_frame(struct txgbe_rx_buffer *rx_buffer, + unsigned int frame_size) +{ + unsigned char *data; + bool match = true; + + frame_size >>= 1; + data = kmap(rx_buffer->page) + rx_buffer->page_offset; + + if (data[3] != 0xFF || + data[frame_size + 10] != 0xBE || + data[frame_size + 12] != 0xAF) + match = false; + + kunmap(rx_buffer->page); + return match; +} + +static u16 txgbe_clean_test_rings(struct txgbe_ring *rx_ring, + struct txgbe_ring *tx_ring, + unsigned int size) +{ + union txgbe_rx_desc *rx_desc; + struct txgbe_rx_buffer *rx_buffer; + struct txgbe_tx_buffer *tx_buffer; + const int bufsz = txgbe_rx_bufsz(rx_ring); + u16 rx_ntc, tx_ntc, count = 0; + + /* initialize next to clean and descriptor values */ + rx_ntc = rx_ring->next_to_clean; + tx_ntc = tx_ring->next_to_clean; + rx_desc = TXGBE_RX_DESC(rx_ring, rx_ntc); + + while (txgbe_test_staterr(rx_desc, TXGBE_RXD_STAT_DD)) { + /* unmap buffer on Tx side */ + tx_buffer = &tx_ring->tx_buffer_info[tx_ntc]; + txgbe_unmap_and_free_tx_resource(tx_ring, tx_buffer); + + /* check Rx buffer */ + rx_buffer = &rx_ring->rx_buffer_info[rx_ntc]; + + /* sync Rx buffer for CPU read */ + dma_sync_single_for_cpu(rx_ring->dev, + rx_buffer->page_dma, + 
bufsz, + DMA_FROM_DEVICE); + + /* verify contents of skb */ + if (txgbe_check_lbtest_frame(rx_buffer, size)) + count++; + + /* sync Rx buffer for device write */ + dma_sync_single_for_device(rx_ring->dev, + rx_buffer->page_dma, + bufsz, + DMA_FROM_DEVICE); + + /* increment Rx/Tx next to clean counters */ + rx_ntc++; + if (rx_ntc == rx_ring->count) + rx_ntc = 0; + tx_ntc++; + if (tx_ntc == tx_ring->count) + tx_ntc = 0; + + /* fetch next descriptor */ + rx_desc = TXGBE_RX_DESC(rx_ring, rx_ntc); + } + + /* re-map buffers to ring, store next to clean values */ + txgbe_alloc_rx_buffers(rx_ring, count); + rx_ring->next_to_clean = rx_ntc; + tx_ring->next_to_clean = tx_ntc; + + return count; +} + +static int txgbe_run_loopback_test(struct txgbe_adapter *adapter) +{ + struct txgbe_ring *tx_ring = &adapter->test_tx_ring; + struct txgbe_ring *rx_ring = &adapter->test_rx_ring; + int i, j, lc, good_cnt, ret_val = 0; + unsigned int size = 1024; + netdev_tx_t tx_ret_val; + struct sk_buff *skb; + u32 flags_orig = adapter->flags; + + + /* DCB can modify the frames on Tx */ + adapter->flags &= ~TXGBE_FLAG_DCB_ENABLED; + + /* allocate test skb */ + skb = alloc_skb(size, GFP_KERNEL); + if (!skb) + return 11; + + /* place data into test skb */ + txgbe_create_lbtest_frame(skb, size); + skb_put(skb, size); + + /* + * Calculate the loop count based on the largest descriptor ring + * The idea is to wrap the largest ring a number of times using 64 + * send/receive pairs during each loop + */ + + if (rx_ring->count <= tx_ring->count) + lc = ((tx_ring->count / 64) * 2) + 1; + else + lc = ((rx_ring->count / 64) * 2) + 1; + + for (j = 0; j <= lc; j++) { + /* reset count of good packets */ + good_cnt = 0; + + /* place 64 packets on the transmit queue*/ + for (i = 0; i < 64; i++) { + skb_get(skb); + tx_ret_val = txgbe_xmit_frame_ring(skb, + adapter, + tx_ring); + if (tx_ret_val == NETDEV_TX_OK) + good_cnt++; + } + + if (good_cnt != 64) { + ret_val = 12; + break; + } + + /* allow 200 milliseconds 
for packets to go from Tx to Rx */
+		msleep(200);
+
+		good_cnt = txgbe_clean_test_rings(rx_ring, tx_ring, size);
+		if (j == 0)
+			continue;
+		else if (good_cnt != 64) {
+			ret_val = 13;
+			break;
+		}
+	}
+
+	/* free the original skb */
+	kfree_skb(skb);
+	adapter->flags = flags_orig;
+
+	return ret_val;
+}
+
+static int txgbe_loopback_test(struct txgbe_adapter *adapter, u64 *data)
+{
+	*data = txgbe_setup_desc_rings(adapter);
+	if (*data)
+		goto out;
+
+	*data = txgbe_setup_config(adapter);
+	if (*data)
+		goto err_loopback;
+
+	*data = txgbe_setup_phy_loopback_test(adapter);
+	if (*data)
+		goto err_loopback;
+	*data = txgbe_run_loopback_test(adapter);
+	if (*data)
+		e_info(hw, "phy loopback testing failed\n");
+	txgbe_phy_loopback_cleanup(adapter);
+
+err_loopback:
+	txgbe_free_desc_rings(adapter);
+out:
+	return *data;
+}
+
+static void txgbe_diag_test(struct net_device *netdev,
+			    struct ethtool_test *eth_test, u64 *data)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+	bool if_running = netif_running(netdev);
+	struct txgbe_hw *hw = &adapter->hw;
+
+	if (TXGBE_REMOVED(hw->hw_addr)) {
+		e_err(hw, "Adapter removed - test blocked\n");
+		data[0] = 1;
+		data[1] = 1;
+		data[2] = 1;
+		data[3] = 1;
+		data[4] = 1;
+		eth_test->flags |= ETH_TEST_FL_FAILED;
+		return;
+	}
+
+	set_bit(__TXGBE_TESTING, &adapter->state);
+	if (eth_test->flags == ETH_TEST_FL_OFFLINE) {
+		if (adapter->flags & TXGBE_FLAG_SRIOV_ENABLED) {
+			int i;
+			for (i = 0; i < adapter->num_vfs; i++) {
+				if (adapter->vfinfo[i].clear_to_send) {
+					e_warn(drv, "Please take active VFs offline and restart the adapter before running NIC diagnostics\n");
+					data[0] = 1;
+					data[1] = 1;
+					data[2] = 1;
+					data[3] = 1;
+					data[4] = 1;
+					eth_test->flags |= ETH_TEST_FL_FAILED;
+					clear_bit(__TXGBE_TESTING,
+						  &adapter->state);
+					goto skip_ol_tests;
+				}
+			}
+		}
+
+		/* Offline tests */
+		e_info(hw, "offline testing starting\n");
+
+		/* Link test performed before hardware reset so autoneg doesn't
+		 * interfere
with test result */ + if (txgbe_link_test(adapter, &data[4])) + eth_test->flags |= ETH_TEST_FL_FAILED; + + if (if_running) + /* indicate we're in test mode */ + txgbe_close(netdev); + else + txgbe_reset(adapter); + + e_info(hw, "register testing starting\n"); + if (txgbe_reg_test(adapter, &data[0])) + eth_test->flags |= ETH_TEST_FL_FAILED; + + txgbe_reset(adapter); + e_info(hw, "eeprom testing starting\n"); + if (txgbe_eeprom_test(adapter, &data[1])) + eth_test->flags |= ETH_TEST_FL_FAILED; + + txgbe_reset(adapter); + e_info(hw, "interrupt testing starting\n"); + if (txgbe_intr_test(adapter, &data[2])) + eth_test->flags |= ETH_TEST_FL_FAILED; + + if (!(((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP) || + ((hw->subsystem_device_id & TXGBE_WOL_MASK) == TXGBE_WOL_SUP))) { + /* If SRIOV or VMDq is enabled then skip MAC + * loopback diagnostic. */ + if (adapter->flags & (TXGBE_FLAG_SRIOV_ENABLED | + TXGBE_FLAG_VMDQ_ENABLED)) { + e_info(hw, "skip MAC loopback diagnostic in VT mode\n"); + data[3] = 0; + goto skip_loopback; + } + + txgbe_reset(adapter); + e_info(hw, "loopback testing starting\n"); + if (txgbe_loopback_test(adapter, &data[3])) + eth_test->flags |= ETH_TEST_FL_FAILED; + } + + data[3] = 0; +skip_loopback: + txgbe_reset(adapter); + + /* clear testing bit and return adapter to previous state */ + clear_bit(__TXGBE_TESTING, &adapter->state); + if (if_running) + txgbe_open(netdev); + else + TCALL(hw, mac.ops.disable_tx_laser); + } else { + e_info(hw, "online testing starting\n"); + + /* Online tests */ + if (txgbe_link_test(adapter, &data[4])) + eth_test->flags |= ETH_TEST_FL_FAILED; + + /* Offline tests aren't run; pass by default */ + data[0] = 0; + data[1] = 0; + data[2] = 0; + data[3] = 0; + + clear_bit(__TXGBE_TESTING, &adapter->state); + } + +skip_ol_tests: + msleep_interruptible(4 * 1000); +} + + +static int txgbe_wol_exclusion(struct txgbe_adapter *adapter, + struct ethtool_wolinfo *wol) +{ + int retval = 0; + + /* WOL not supported for 
all devices */ + if (!txgbe_wol_supported(adapter)) { + retval = 1; + wol->supported = 0; + } + + return retval; +} + +static void txgbe_get_wol(struct net_device *netdev, + struct ethtool_wolinfo *wol) +{ + struct txgbe_adapter *adapter = netdev_priv(netdev); + struct txgbe_hw *hw = &adapter->hw; + + wol->supported = WAKE_UCAST | WAKE_MCAST | + WAKE_BCAST | WAKE_MAGIC; + wol->wolopts = 0; + + if (txgbe_wol_exclusion(adapter, wol) || + !device_can_wakeup(pci_dev_to_dev(adapter->pdev))) + return; + if ((hw->subsystem_device_id & TXGBE_WOL_MASK) != TXGBE_WOL_SUP) + return; + + if (adapter->wol & TXGBE_PSR_WKUP_CTL_EX) + wol->wolopts |= WAKE_UCAST; + if (adapter->wol & TXGBE_PSR_WKUP_CTL_MC) + wol->wolopts |= WAKE_MCAST; + if (adapter->wol & TXGBE_PSR_WKUP_CTL_BC) + wol->wolopts |= WAKE_BCAST; + if (adapter->wol & TXGBE_PSR_WKUP_CTL_MAG) + wol->wolopts |= WAKE_MAGIC; +} + +static int txgbe_set_wol(struct net_device *netdev, struct ethtool_wolinfo *wol) +{ + struct txgbe_adapter *adapter = netdev_priv(netdev); + struct txgbe_hw *hw = &adapter->hw; + + if (wol->wolopts & (WAKE_PHY | WAKE_ARP | WAKE_MAGICSECURE)) + return -EOPNOTSUPP; + + if (txgbe_wol_exclusion(adapter, wol)) + return wol->wolopts ? 
-EOPNOTSUPP : 0; + if ((hw->subsystem_device_id & TXGBE_WOL_MASK) != TXGBE_WOL_SUP) + return -EOPNOTSUPP; + + adapter->wol = 0; + + if (wol->wolopts & WAKE_UCAST) + adapter->wol |= TXGBE_PSR_WKUP_CTL_EX; + if (wol->wolopts & WAKE_MCAST) + adapter->wol |= TXGBE_PSR_WKUP_CTL_MC; + if (wol->wolopts & WAKE_BCAST) + adapter->wol |= TXGBE_PSR_WKUP_CTL_BC; + if (wol->wolopts & WAKE_MAGIC) + adapter->wol |= TXGBE_PSR_WKUP_CTL_MAG; + + hw->wol_enabled = !!(adapter->wol); + wr32(hw, TXGBE_PSR_WKUP_CTL, adapter->wol); + + device_set_wakeup_enable(pci_dev_to_dev(adapter->pdev), adapter->wol); + + return 0; +} + +static int txgbe_nway_reset(struct net_device *netdev) +{ + struct txgbe_adapter *adapter = netdev_priv(netdev); + if (netif_running(netdev)) + txgbe_reinit_locked(adapter); + + return 0; +} + +static int txgbe_set_phys_id(struct net_device *netdev, + enum ethtool_phys_id_state state) +{ + struct txgbe_adapter *adapter = netdev_priv(netdev); + struct txgbe_hw *hw = &adapter->hw; + + switch (state) { + case ETHTOOL_ID_ACTIVE: + adapter->led_reg = rd32(hw, TXGBE_CFG_LED_CTL); + return 2; + + case ETHTOOL_ID_ON: + TCALL(hw, mac.ops.led_on, TXGBE_LED_LINK_UP); + break; + + case ETHTOOL_ID_OFF: + TCALL(hw, mac.ops.led_off, TXGBE_LED_LINK_UP); + break; + + case ETHTOOL_ID_INACTIVE: + /* Restore LED settings */ + wr32(&adapter->hw, TXGBE_CFG_LED_CTL, + adapter->led_reg); + break; + } + + return 0; +} + +static int txgbe_get_coalesce(struct net_device *netdev, + struct ethtool_coalesce *ec) +{ + struct txgbe_adapter *adapter = netdev_priv(netdev); + + ec->tx_max_coalesced_frames_irq = adapter->tx_work_limit; + /* only valid if in constant ITR mode */ + if (adapter->rx_itr_setting <= 1) + ec->rx_coalesce_usecs = adapter->rx_itr_setting; + else + ec->rx_coalesce_usecs = adapter->rx_itr_setting >> 2; + + /* if in mixed tx/rx queues per vector mode, report only rx settings */ + if (adapter->q_vector[0]->tx.count && adapter->q_vector[0]->rx.count) + return 0; + + /* only valid if 
in constant ITR mode */ + if (adapter->tx_itr_setting <= 1) + ec->tx_coalesce_usecs = adapter->tx_itr_setting; + else + ec->tx_coalesce_usecs = adapter->tx_itr_setting >> 2; + + return 0; +} + +/* + * this function must be called before setting the new value of + * rx_itr_setting + */ +static bool txgbe_update_rsc(struct txgbe_adapter *adapter) +{ + struct net_device *netdev = adapter->netdev; + + /* nothing to do if LRO or RSC are not enabled */ + if (!(adapter->flags2 & TXGBE_FLAG2_RSC_CAPABLE) || + !(netdev->features & NETIF_F_LRO)) + return false; + + /* check the feature flag value and enable RSC if necessary */ + if (adapter->rx_itr_setting == 1 || + adapter->rx_itr_setting > TXGBE_MIN_RSC_ITR) { + if (!(adapter->flags2 & TXGBE_FLAG2_RSC_ENABLED)) { + adapter->flags2 |= TXGBE_FLAG2_RSC_ENABLED; + e_info(probe, "rx-usecs value high enough " + "to re-enable RSC\n"); + return true; + } + /* if interrupt rate is too high then disable RSC */ + } else if (adapter->flags2 & TXGBE_FLAG2_RSC_ENABLED) { + adapter->flags2 &= ~TXGBE_FLAG2_RSC_ENABLED; + e_info(probe, "rx-usecs set too low, disabling RSC\n"); + return true; + } + return false; +} + +static int txgbe_set_coalesce(struct net_device *netdev, + struct ethtool_coalesce *ec) +{ + struct txgbe_adapter *adapter = netdev_priv(netdev); + struct txgbe_hw *hw = &adapter->hw; + struct txgbe_q_vector *q_vector; + int i; + u16 tx_itr_param, rx_itr_param; + u16 tx_itr_prev; + bool need_reset = false; + + if (adapter->q_vector[0]->tx.count && adapter->q_vector[0]->rx.count) { + /* reject Tx specific changes in case of mixed RxTx vectors */ + if (ec->tx_coalesce_usecs) + return -EINVAL; + tx_itr_prev = adapter->rx_itr_setting; + } else { + tx_itr_prev = adapter->tx_itr_setting; + } + + if (ec->tx_max_coalesced_frames_irq) + adapter->tx_work_limit = ec->tx_max_coalesced_frames_irq; + + if ((ec->rx_coalesce_usecs > (TXGBE_MAX_EITR >> 2)) || + (ec->tx_coalesce_usecs > (TXGBE_MAX_EITR >> 2))) + return -EINVAL; + + if 
(ec->rx_coalesce_usecs > 1) + adapter->rx_itr_setting = ec->rx_coalesce_usecs << 2; + else + adapter->rx_itr_setting = ec->rx_coalesce_usecs; + + if (adapter->rx_itr_setting == 1) + rx_itr_param = TXGBE_20K_ITR; + else + rx_itr_param = adapter->rx_itr_setting; + + if (ec->tx_coalesce_usecs > 1) + adapter->tx_itr_setting = ec->tx_coalesce_usecs << 2; + else + adapter->tx_itr_setting = ec->tx_coalesce_usecs; + + if (adapter->tx_itr_setting == 1) + tx_itr_param = TXGBE_12K_ITR; + else + tx_itr_param = adapter->tx_itr_setting; + + /* mixed Rx/Tx */ + if (adapter->q_vector[0]->tx.count && adapter->q_vector[0]->rx.count) + adapter->tx_itr_setting = adapter->rx_itr_setting; + + /* detect ITR changes that require update of TXDCTL.WTHRESH */ + if ((adapter->tx_itr_setting != 1) && + (adapter->tx_itr_setting < TXGBE_100K_ITR)) { + if ((tx_itr_prev == 1) || + (tx_itr_prev >= TXGBE_100K_ITR)) + need_reset = true; + } else { + if ((tx_itr_prev != 1) && + (tx_itr_prev < TXGBE_100K_ITR)) + need_reset = true; + } + + /* check the old value and enable RSC if necessary */ + need_reset |= txgbe_update_rsc(adapter); + + if (adapter->hw.mac.dmac_config.watchdog_timer && + (!adapter->rx_itr_setting && !adapter->tx_itr_setting)) { + e_info(probe, + "Disabling DMA coalescing because interrupt throttling " + "is disabled\n"); + adapter->hw.mac.dmac_config.watchdog_timer = 0; + TCALL(hw, mac.ops.dmac_config); + } + + for (i = 0; i < adapter->num_q_vectors; i++) { + q_vector = adapter->q_vector[i]; + q_vector->tx.work_limit = adapter->tx_work_limit; + q_vector->rx.work_limit = adapter->rx_work_limit; + if (q_vector->tx.count && !q_vector->rx.count) + /* tx only */ + q_vector->itr = tx_itr_param; + else + /* rx only or mixed */ + q_vector->itr = rx_itr_param; + txgbe_write_eitr(q_vector); + } + + /* + * do reset here at the end to make sure EITR==0 case is handled + * correctly w.r.t stopping tx, and changing TXDCTL.WTHRESH settings + * also locks in RSC enable/disable which requires reset + 
*/ + if (need_reset) + txgbe_do_reset(netdev); + + return 0; +} + +static int txgbe_get_ethtool_fdir_entry(struct txgbe_adapter *adapter, + struct ethtool_rxnfc *cmd) +{ + union txgbe_atr_input *mask = &adapter->fdir_mask; + struct ethtool_rx_flow_spec *fsp = + (struct ethtool_rx_flow_spec *)&cmd->fs; + struct hlist_node *node; + struct txgbe_fdir_filter *rule = NULL; + + /* report total rule count */ + cmd->data = (1024 << adapter->fdir_pballoc) - 2; + + hlist_for_each_entry_safe(rule, node, + &adapter->fdir_filter_list, fdir_node) { + if (fsp->location <= rule->sw_idx) + break; + } + + if (!rule || fsp->location != rule->sw_idx) + return -EINVAL; + + /* fill out the flow spec entry */ + + /* set flow type field */ + switch (rule->filter.formatted.flow_type) { + case TXGBE_ATR_FLOW_TYPE_TCPV4: + fsp->flow_type = TCP_V4_FLOW; + break; + case TXGBE_ATR_FLOW_TYPE_UDPV4: + fsp->flow_type = UDP_V4_FLOW; + break; + case TXGBE_ATR_FLOW_TYPE_SCTPV4: + fsp->flow_type = SCTP_V4_FLOW; + break; + case TXGBE_ATR_FLOW_TYPE_IPV4: + fsp->flow_type = IP_USER_FLOW; + fsp->h_u.usr_ip4_spec.ip_ver = ETH_RX_NFC_IP4; + fsp->h_u.usr_ip4_spec.proto = 0; + fsp->m_u.usr_ip4_spec.proto = 0; + break; + default: + return -EINVAL; + } + + fsp->h_u.tcp_ip4_spec.psrc = rule->filter.formatted.src_port; + fsp->m_u.tcp_ip4_spec.psrc = mask->formatted.src_port; + fsp->h_u.tcp_ip4_spec.pdst = rule->filter.formatted.dst_port; + fsp->m_u.tcp_ip4_spec.pdst = mask->formatted.dst_port; + fsp->h_u.tcp_ip4_spec.ip4src = rule->filter.formatted.src_ip[0]; + fsp->m_u.tcp_ip4_spec.ip4src = mask->formatted.src_ip[0]; + fsp->h_u.tcp_ip4_spec.ip4dst = rule->filter.formatted.dst_ip[0]; + fsp->m_u.tcp_ip4_spec.ip4dst = mask->formatted.dst_ip[0]; + fsp->h_ext.vlan_etype = rule->filter.formatted.flex_bytes; + fsp->m_ext.vlan_etype = mask->formatted.flex_bytes; + fsp->h_ext.data[1] = htonl(rule->filter.formatted.vm_pool); + fsp->m_ext.data[1] = htonl(mask->formatted.vm_pool); + fsp->flow_type |= FLOW_EXT; + + /* record 
action */ + if (rule->action == TXGBE_RDB_FDIR_DROP_QUEUE) + fsp->ring_cookie = RX_CLS_FLOW_DISC; + else + fsp->ring_cookie = rule->action; + + return 0; +} + +static int txgbe_get_ethtool_fdir_all(struct txgbe_adapter *adapter, + struct ethtool_rxnfc *cmd, + u32 *rule_locs) +{ + struct hlist_node *node; + struct txgbe_fdir_filter *rule; + int cnt = 0; + + /* report total rule count */ + cmd->data = (1024 << adapter->fdir_pballoc) - 2; + + hlist_for_each_entry_safe(rule, node, + &adapter->fdir_filter_list, fdir_node) { + if (cnt == cmd->rule_cnt) + return -EMSGSIZE; + rule_locs[cnt] = rule->sw_idx; + cnt++; + } + + cmd->rule_cnt = cnt; + + return 0; +} + +static int txgbe_get_rss_hash_opts(struct txgbe_adapter *adapter, + struct ethtool_rxnfc *cmd) +{ + cmd->data = 0; + + /* Report default options for RSS on txgbe */ + switch (cmd->flow_type) { + case TCP_V4_FLOW: + cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3; + /* fall through */ + case UDP_V4_FLOW: + if (adapter->flags2 & TXGBE_FLAG2_RSS_FIELD_IPV4_UDP) + cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3; + /* fall through */ + case SCTP_V4_FLOW: + case AH_ESP_V4_FLOW: + case AH_V4_FLOW: + case ESP_V4_FLOW: + case IPV4_FLOW: + cmd->data |= RXH_IP_SRC | RXH_IP_DST; + break; + case TCP_V6_FLOW: + cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3; + /* fall through */ + case UDP_V6_FLOW: + if (adapter->flags2 & TXGBE_FLAG2_RSS_FIELD_IPV6_UDP) + cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3; + /* fall through */ + case SCTP_V6_FLOW: + case AH_ESP_V6_FLOW: + case AH_V6_FLOW: + case ESP_V6_FLOW: + case IPV6_FLOW: + cmd->data |= RXH_IP_SRC | RXH_IP_DST; + break; + default: + return -EINVAL; + } + + return 0; +} + +static int txgbe_get_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd, + u32 *rule_locs) +{ + struct txgbe_adapter *adapter = netdev_priv(dev); + int ret = -EOPNOTSUPP; + + switch (cmd->cmd) { + case ETHTOOL_GRXRINGS: + cmd->data = adapter->num_rx_queues; + ret = 0; + break; + case ETHTOOL_GRXCLSRLCNT: + cmd->rule_cnt = 
adapter->fdir_filter_count; + ret = 0; + break; + case ETHTOOL_GRXCLSRULE: + ret = txgbe_get_ethtool_fdir_entry(adapter, cmd); + break; + case ETHTOOL_GRXCLSRLALL: + ret = txgbe_get_ethtool_fdir_all(adapter, cmd, + (u32 *)rule_locs); + break; + case ETHTOOL_GRXFH: + ret = txgbe_get_rss_hash_opts(adapter, cmd); + break; + default: + break; + } + + return ret; +} + +static int txgbe_update_ethtool_fdir_entry(struct txgbe_adapter *adapter, + struct txgbe_fdir_filter *input, + u16 sw_idx) +{ + struct txgbe_hw *hw = &adapter->hw; + struct hlist_node *node, *parent; + struct txgbe_fdir_filter *rule; + bool deleted = false; + s32 err; + + parent = NULL; + rule = NULL; + + hlist_for_each_entry_safe(rule, node, + &adapter->fdir_filter_list, fdir_node) { + /* hash found, or no matching entry */ + if (rule->sw_idx >= sw_idx) + break; + parent = node; + } + + /* if there is an old rule occupying our place remove it */ + if (rule && (rule->sw_idx == sw_idx)) { + /* hardware filters are only configured when interface is up, + * and we should not issue filter commands while the interface + * is down + */ + if (netif_running(adapter->netdev) && + (!input || (rule->filter.formatted.bkt_hash != + input->filter.formatted.bkt_hash))) { + err = txgbe_fdir_erase_perfect_filter(hw, + &rule->filter, + sw_idx); + if (err) + return -EINVAL; + } + + hlist_del(&rule->fdir_node); + kfree(rule); + adapter->fdir_filter_count--; + deleted = true; + } + + /* If we weren't given an input, then this was a request to delete a + * filter. We should return -EINVAL if the filter wasn't found, but + * return 0 if the rule was successfully deleted. + */ + if (!input) + return deleted ? 
0 : -EINVAL; + + /* initialize node and set software index */ + INIT_HLIST_NODE(&input->fdir_node); + + /* add filter to the list */ + if (parent) + hlist_add_behind(&input->fdir_node, parent); + else + hlist_add_head(&input->fdir_node, + &adapter->fdir_filter_list); + + /* update counts */ + adapter->fdir_filter_count++; + + return 0; +} + +static int txgbe_flowspec_to_flow_type(struct ethtool_rx_flow_spec *fsp, + u8 *flow_type) +{ + switch (fsp->flow_type & ~FLOW_EXT) { + case TCP_V4_FLOW: + *flow_type = TXGBE_ATR_FLOW_TYPE_TCPV4; + break; + case UDP_V4_FLOW: + *flow_type = TXGBE_ATR_FLOW_TYPE_UDPV4; + break; + case SCTP_V4_FLOW: + *flow_type = TXGBE_ATR_FLOW_TYPE_SCTPV4; + break; + case IP_USER_FLOW: + switch (fsp->h_u.usr_ip4_spec.proto) { + case IPPROTO_TCP: + *flow_type = TXGBE_ATR_FLOW_TYPE_TCPV4; + break; + case IPPROTO_UDP: + *flow_type = TXGBE_ATR_FLOW_TYPE_UDPV4; + break; + case IPPROTO_SCTP: + *flow_type = TXGBE_ATR_FLOW_TYPE_SCTPV4; + break; + case 0: + if (!fsp->m_u.usr_ip4_spec.proto) { + *flow_type = TXGBE_ATR_FLOW_TYPE_IPV4; + break; + } + /* fall through */ + default: + return 0; + } + break; + default: + return 0; + } + + return 1; +} + +static int txgbe_add_ethtool_fdir_entry(struct txgbe_adapter *adapter, + struct ethtool_rxnfc *cmd) +{ + struct ethtool_rx_flow_spec *fsp = + (struct ethtool_rx_flow_spec *)&cmd->fs; + struct txgbe_hw *hw = &adapter->hw; + struct txgbe_fdir_filter *input; + union txgbe_atr_input mask; + int err; + u16 ptype = 0; + + if (!(adapter->flags & TXGBE_FLAG_FDIR_PERFECT_CAPABLE)) + return -EOPNOTSUPP; + + /* + * Don't allow programming if the action is a queue greater than + * the number of online Rx queues. 
+ */ + if ((fsp->ring_cookie != RX_CLS_FLOW_DISC) && + (fsp->ring_cookie >= adapter->num_rx_queues)) + return -EINVAL; + + /* Don't allow indexes to exist outside of available space */ + if (fsp->location >= ((1024 << adapter->fdir_pballoc) - 2)) { + e_err(drv, "Location out of range\n"); + return -EINVAL; + } + + input = kzalloc(sizeof(*input), GFP_ATOMIC); + if (!input) + return -ENOMEM; + + memset(&mask, 0, sizeof(union txgbe_atr_input)); + + /* set SW index */ + input->sw_idx = fsp->location; + + /* record flow type */ + if (!txgbe_flowspec_to_flow_type(fsp, + &input->filter.formatted.flow_type)) { + e_err(drv, "Unrecognized flow type\n"); + goto err_out; + } + + mask.formatted.flow_type = TXGBE_ATR_L4TYPE_IPV6_MASK | + TXGBE_ATR_L4TYPE_MASK; + + if (input->filter.formatted.flow_type == TXGBE_ATR_FLOW_TYPE_IPV4) + mask.formatted.flow_type &= TXGBE_ATR_L4TYPE_IPV6_MASK; + + /* Copy input into formatted structures */ + input->filter.formatted.src_ip[0] = fsp->h_u.tcp_ip4_spec.ip4src; + mask.formatted.src_ip[0] = fsp->m_u.tcp_ip4_spec.ip4src; + input->filter.formatted.dst_ip[0] = fsp->h_u.tcp_ip4_spec.ip4dst; + mask.formatted.dst_ip[0] = fsp->m_u.tcp_ip4_spec.ip4dst; + input->filter.formatted.src_port = fsp->h_u.tcp_ip4_spec.psrc; + mask.formatted.src_port = fsp->m_u.tcp_ip4_spec.psrc; + input->filter.formatted.dst_port = fsp->h_u.tcp_ip4_spec.pdst; + mask.formatted.dst_port = fsp->m_u.tcp_ip4_spec.pdst; + + if (fsp->flow_type & FLOW_EXT) { + input->filter.formatted.vm_pool = + (unsigned char)ntohl(fsp->h_ext.data[1]); + mask.formatted.vm_pool = + (unsigned char)ntohl(fsp->m_ext.data[1]); + input->filter.formatted.flex_bytes = + fsp->h_ext.vlan_etype; + mask.formatted.flex_bytes = fsp->m_ext.vlan_etype; +#if 0 + /* need fix */ + input->filter.formatted.tunnel_type = + (unsigned char)ntohl(fsp->h_ext.data[0]); + mask.formatted.tunnel_type = + (unsigned char)ntohl(fsp->m_ext.data[0]); +#endif + } + + switch (input->filter.formatted.flow_type) { + case 
TXGBE_ATR_FLOW_TYPE_TCPV4: + ptype = TXGBE_PTYPE_L2_IPV4_TCP; + break; + case TXGBE_ATR_FLOW_TYPE_UDPV4: + ptype = TXGBE_PTYPE_L2_IPV4_UDP; + break; + case TXGBE_ATR_FLOW_TYPE_SCTPV4: + ptype = TXGBE_PTYPE_L2_IPV4_SCTP; + break; + case TXGBE_ATR_FLOW_TYPE_IPV4: + ptype = TXGBE_PTYPE_L2_IPV4; + break; + case TXGBE_ATR_FLOW_TYPE_TCPV6: + ptype = TXGBE_PTYPE_L2_IPV6_TCP; + break; + case TXGBE_ATR_FLOW_TYPE_UDPV6: + ptype = TXGBE_PTYPE_L2_IPV6_UDP; + break; + case TXGBE_ATR_FLOW_TYPE_SCTPV6: + ptype = TXGBE_PTYPE_L2_IPV6_SCTP; + break; + case TXGBE_ATR_FLOW_TYPE_IPV6: + ptype = TXGBE_PTYPE_L2_IPV6; + break; + default: + break; + } + + input->filter.formatted.vlan_id = htons(ptype); + if (mask.formatted.flow_type & TXGBE_ATR_L4TYPE_MASK) + mask.formatted.vlan_id = 0xFFFF; + else + mask.formatted.vlan_id = htons(0xFFF8); + + /* determine if we need to drop or route the packet */ + if (fsp->ring_cookie == RX_CLS_FLOW_DISC) + input->action = TXGBE_RDB_FDIR_DROP_QUEUE; + else + input->action = fsp->ring_cookie; + + spin_lock(&adapter->fdir_perfect_lock); + + if (hlist_empty(&adapter->fdir_filter_list)) { + /* save mask and program input mask into HW */ + memcpy(&adapter->fdir_mask, &mask, sizeof(mask)); + err = txgbe_fdir_set_input_mask(hw, &mask, + adapter->cloud_mode); + if (err) { + e_err(drv, "Error writing mask\n"); + goto err_out_w_lock; + } + } else if (memcmp(&adapter->fdir_mask, &mask, sizeof(mask))) { + e_err(drv, "Hardware only supports one mask per port. 
To change "
+		      "the mask you must first delete all the rules.\n");
+		goto err_out_w_lock;
+	}
+
+	/* apply mask and compute/store hash */
+	txgbe_atr_compute_perfect_hash(&input->filter, &mask);
+
+	/* only program filters to hardware if the net device is running, as
+	 * we store the filters in the Rx buffer which is not allocated when
+	 * the device is down
+	 */
+	if (netif_running(adapter->netdev)) {
+		err = txgbe_fdir_write_perfect_filter(hw,
+				&input->filter, input->sw_idx,
+				(input->action == TXGBE_RDB_FDIR_DROP_QUEUE) ?
+				TXGBE_RDB_FDIR_DROP_QUEUE :
+				adapter->rx_ring[input->action]->reg_idx,
+				adapter->cloud_mode);
+		if (err)
+			goto err_out_w_lock;
+	}
+
+	err = txgbe_update_ethtool_fdir_entry(adapter, input, input->sw_idx);
+
+	spin_unlock(&adapter->fdir_perfect_lock);
+
+	return err;
+err_out_w_lock:
+	spin_unlock(&adapter->fdir_perfect_lock);
+err_out:
+	kfree(input);
+	return -EINVAL;
+}
+
+static int txgbe_del_ethtool_fdir_entry(struct txgbe_adapter *adapter,
+					struct ethtool_rxnfc *cmd)
+{
+	struct ethtool_rx_flow_spec *fsp =
+		(struct ethtool_rx_flow_spec *)&cmd->fs;
+	int err;
+
+	spin_lock(&adapter->fdir_perfect_lock);
+	err = txgbe_update_ethtool_fdir_entry(adapter, NULL, fsp->location);
+	spin_unlock(&adapter->fdir_perfect_lock);
+
+	return err;
+}
+
+#define UDP_RSS_FLAGS (TXGBE_FLAG2_RSS_FIELD_IPV4_UDP | \
+		       TXGBE_FLAG2_RSS_FIELD_IPV6_UDP)
+static int txgbe_set_rss_hash_opt(struct txgbe_adapter *adapter,
+				  struct ethtool_rxnfc *nfc)
+{
+	u32 flags2 = adapter->flags2;
+
+	/*
+	 * RSS does not support anything other than hashing
+	 * to queues on src and dst IPs and ports
+	 */
+	if (nfc->data & ~(RXH_IP_SRC | RXH_IP_DST |
+			  RXH_L4_B_0_1 | RXH_L4_B_2_3))
+		return -EINVAL;
+
+	switch (nfc->flow_type) {
+	case TCP_V4_FLOW:
+	case TCP_V6_FLOW:
+		if (!(nfc->data & RXH_IP_SRC) ||
+		    !(nfc->data & RXH_IP_DST) ||
+		    !(nfc->data & RXH_L4_B_0_1) ||
+		    !(nfc->data & RXH_L4_B_2_3))
+			return -EINVAL;
+		break;
+	case UDP_V4_FLOW:
+		if (!(nfc->data & RXH_IP_SRC) ||
!(nfc->data & RXH_IP_DST)) + return -EINVAL; + switch (nfc->data & (RXH_L4_B_0_1 | RXH_L4_B_2_3)) { + case 0: + flags2 &= ~TXGBE_FLAG2_RSS_FIELD_IPV4_UDP; + break; + case (RXH_L4_B_0_1 | RXH_L4_B_2_3): + flags2 |= TXGBE_FLAG2_RSS_FIELD_IPV4_UDP; + break; + default: + return -EINVAL; + } + break; + case UDP_V6_FLOW: + if (!(nfc->data & RXH_IP_SRC) || + !(nfc->data & RXH_IP_DST)) + return -EINVAL; + switch (nfc->data & (RXH_L4_B_0_1 | RXH_L4_B_2_3)) { + case 0: + flags2 &= ~TXGBE_FLAG2_RSS_FIELD_IPV6_UDP; + break; + case (RXH_L4_B_0_1 | RXH_L4_B_2_3): + flags2 |= TXGBE_FLAG2_RSS_FIELD_IPV6_UDP; + break; + default: + return -EINVAL; + } + break; + case AH_ESP_V4_FLOW: + case AH_V4_FLOW: + case ESP_V4_FLOW: + case SCTP_V4_FLOW: + case AH_ESP_V6_FLOW: + case AH_V6_FLOW: + case ESP_V6_FLOW: + case SCTP_V6_FLOW: + if (!(nfc->data & RXH_IP_SRC) || + !(nfc->data & RXH_IP_DST) || + (nfc->data & RXH_L4_B_0_1) || + (nfc->data & RXH_L4_B_2_3)) + return -EINVAL; + break; + default: + return -EINVAL; + } + + /* if we changed something we need to update flags */ + if (flags2 != adapter->flags2) { + struct txgbe_hw *hw = &adapter->hw; + u32 mrqc; + + mrqc = rd32(hw, TXGBE_RDB_RA_CTL); + + if ((flags2 & UDP_RSS_FLAGS) && + !(adapter->flags2 & UDP_RSS_FLAGS)) + e_warn(drv, "enabling UDP RSS: fragmented packets" + " may arrive out of order to the stack above\n"); + + adapter->flags2 = flags2; + + /* Perform hash on these packet types */ + mrqc |= TXGBE_RDB_RA_CTL_RSS_IPV4 + | TXGBE_RDB_RA_CTL_RSS_IPV4_TCP + | TXGBE_RDB_RA_CTL_RSS_IPV6 + | TXGBE_RDB_RA_CTL_RSS_IPV6_TCP; + + mrqc &= ~(TXGBE_RDB_RA_CTL_RSS_IPV4_UDP | + TXGBE_RDB_RA_CTL_RSS_IPV6_UDP); + + if (flags2 & TXGBE_FLAG2_RSS_FIELD_IPV4_UDP) + mrqc |= TXGBE_RDB_RA_CTL_RSS_IPV4_UDP; + + if (flags2 & TXGBE_FLAG2_RSS_FIELD_IPV6_UDP) + mrqc |= TXGBE_RDB_RA_CTL_RSS_IPV6_UDP; + + wr32(hw, TXGBE_RDB_RA_CTL, mrqc); + } + + return 0; +} + +static int txgbe_set_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd) +{ + struct 
txgbe_adapter *adapter = netdev_priv(dev); + int ret = -EOPNOTSUPP; + + switch (cmd->cmd) { + case ETHTOOL_SRXCLSRLINS: + ret = txgbe_add_ethtool_fdir_entry(adapter, cmd); + break; + case ETHTOOL_SRXCLSRLDEL: + ret = txgbe_del_ethtool_fdir_entry(adapter, cmd); + break; + case ETHTOOL_SRXFH: + ret = txgbe_set_rss_hash_opt(adapter, cmd); + break; + default: + break; + } + + return ret; +} + +static int txgbe_rss_indir_tbl_max(struct txgbe_adapter *adapter) +{ + return 64; +} + + +static u32 txgbe_get_rxfh_key_size(struct net_device *netdev) +{ + struct txgbe_adapter *adapter = netdev_priv(netdev); + + return sizeof(adapter->rss_key); +} + +static u32 txgbe_rss_indir_size(struct net_device *netdev) +{ + struct txgbe_adapter *adapter = netdev_priv(netdev); + + return txgbe_rss_indir_tbl_entries(adapter); +} + +static void txgbe_get_reta(struct txgbe_adapter *adapter, u32 *indir) +{ + int i, reta_size = txgbe_rss_indir_tbl_entries(adapter); + + for (i = 0; i < reta_size; i++) + indir[i] = adapter->rss_indir_tbl[i]; +} + +static int txgbe_get_rxfh(struct net_device *netdev, u32 *indir, u8 *key, + u8 *hfunc) +{ + struct txgbe_adapter *adapter = netdev_priv(netdev); + + if (hfunc) + *hfunc = ETH_RSS_HASH_TOP; + + if (indir) + txgbe_get_reta(adapter, indir); + + if (key) + memcpy(key, adapter->rss_key, txgbe_get_rxfh_key_size(netdev)); + + return 0; +} + +static int txgbe_set_rxfh(struct net_device *netdev, const u32 *indir, + const u8 *key, const u8 hfunc) +{ + struct txgbe_adapter *adapter = netdev_priv(netdev); + int i; + u32 reta_entries = txgbe_rss_indir_tbl_entries(adapter); + + if (hfunc) + return -EINVAL; + + /* Fill out the redirection table */ + if (indir) { + int max_queues = min_t(int, adapter->num_rx_queues, + txgbe_rss_indir_tbl_max(adapter)); + + /*Allow at least 2 queues w/ SR-IOV.*/ + if ((adapter->flags & TXGBE_FLAG_SRIOV_ENABLED) && + (max_queues < 2)) + max_queues = 2; + + /* Verify user input. 
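The user-input verification that txgbe_set_rxfh() performs on the redirection table can be exercised in isolation. A minimal standalone sketch of the same check, outside the driver, with hypothetical names (`reta_valid` is not a driver symbol):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Standalone sketch of the RETA validation in txgbe_set_rxfh(): every
 * redirection-table entry must index an existing RX queue, and with
 * SR-IOV enabled the driver raises the queue ceiling to at least 2
 * before checking. reta_valid() is a hypothetical helper.
 */
static int reta_valid(const uint32_t *indir, int reta_entries,
                      int max_queues, int sriov_enabled)
{
    if (sriov_enabled && max_queues < 2)
        max_queues = 2; /* allow at least 2 queues w/ SR-IOV */

    for (int i = 0; i < reta_entries; i++)
        if (indir[i] >= (uint32_t)max_queues)
            return 0; /* out-of-range entry; driver returns -EINVAL */
    return 1;
}
```

The driver only copies the table into `rss_indir_tbl` after the whole loop passes, so a single bad entry rejects the entire request.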
*/ + for (i = 0; i < reta_entries; i++) + if (indir[i] >= max_queues) + return -EINVAL; + + for (i = 0; i < reta_entries; i++) + adapter->rss_indir_tbl[i] = indir[i]; + } + + /* Fill out the rss hash key */ + if (key) + memcpy(adapter->rss_key, key, txgbe_get_rxfh_key_size(netdev)); + + txgbe_store_reta(adapter); + + return 0; +} + +static int txgbe_get_ts_info(struct net_device *dev, + struct ethtool_ts_info *info) +{ + struct txgbe_adapter *adapter = netdev_priv(dev); + + /* we always support timestamping disabled */ + info->rx_filters = 1 << HWTSTAMP_FILTER_NONE; + + info->so_timestamping = + SOF_TIMESTAMPING_TX_SOFTWARE | + SOF_TIMESTAMPING_RX_SOFTWARE | + SOF_TIMESTAMPING_SOFTWARE | + SOF_TIMESTAMPING_TX_HARDWARE | + SOF_TIMESTAMPING_RX_HARDWARE | + SOF_TIMESTAMPING_RAW_HARDWARE; + + if (adapter->ptp_clock) + info->phc_index = ptp_clock_index(adapter->ptp_clock); + else + info->phc_index = -1; + + info->tx_types = + (1 << HWTSTAMP_TX_OFF) | + (1 << HWTSTAMP_TX_ON); + + info->rx_filters |= + (1 << HWTSTAMP_FILTER_PTP_V1_L4_SYNC) | + (1 << HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ) | + (1 << HWTSTAMP_FILTER_PTP_V2_L2_EVENT) | + (1 << HWTSTAMP_FILTER_PTP_V2_L4_EVENT) | + (1 << HWTSTAMP_FILTER_PTP_V2_SYNC) | + (1 << HWTSTAMP_FILTER_PTP_V2_L2_SYNC) | + (1 << HWTSTAMP_FILTER_PTP_V2_L4_SYNC) | + (1 << HWTSTAMP_FILTER_PTP_V2_DELAY_REQ) | + (1 << HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ) | + (1 << HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ) | + (1 << HWTSTAMP_FILTER_PTP_V2_EVENT); + + return 0; +} + +static unsigned int txgbe_max_channels(struct txgbe_adapter *adapter) +{ + unsigned int max_combined; + u8 tcs = netdev_get_num_tc(adapter->netdev); + + if (!(adapter->flags & TXGBE_FLAG_MSIX_ENABLED)) { + /* We only support one q_vector without MSI-X */ + max_combined = 1; + } else if (adapter->flags & TXGBE_FLAG_SRIOV_ENABLED) { + /* SR-IOV currently only allows one queue on the PF */ + max_combined = 1; + } else if (tcs > 1) { + /* For DCB report channels per traffic class */ + if (tcs > 
4) { + /* 8 TC w/ 8 queues per TC */ + max_combined = 8; + } else { + /* 4 TC w/ 16 queues per TC */ + max_combined = 16; + } + } else if (adapter->atr_sample_rate) { + /* support up to 64 queues with ATR */ + max_combined = TXGBE_MAX_FDIR_INDICES; + } else { + /* support up to max allowed queues with RSS */ + max_combined = txgbe_max_rss_indices(adapter); + } + + return max_combined; +} + +static void txgbe_get_channels(struct net_device *dev, + struct ethtool_channels *ch) +{ + struct txgbe_adapter *adapter = netdev_priv(dev); + + /* report maximum channels */ + ch->max_combined = txgbe_max_channels(adapter); + + /* report info for other vector */ + if (adapter->flags & TXGBE_FLAG_MSIX_ENABLED) { + ch->max_other = NON_Q_VECTORS; + ch->other_count = NON_Q_VECTORS; + } + + /* record RSS queues */ + ch->combined_count = adapter->ring_feature[RING_F_RSS].indices; + + /* nothing else to report if RSS is disabled */ + if (ch->combined_count == 1) + return; + + /* we do not support ATR queueing if SR-IOV is enabled */ + if (adapter->flags & TXGBE_FLAG_SRIOV_ENABLED) + return; + + /* same thing goes for being DCB enabled */ + if (netdev_get_num_tc(dev) > 1) + return; + + /* if ATR is disabled we can exit */ + if (!adapter->atr_sample_rate) + return; + + /* report flow director queues as maximum channels */ + ch->combined_count = adapter->ring_feature[RING_F_FDIR].indices; +} + +static int txgbe_set_channels(struct net_device *dev, + struct ethtool_channels *ch) +{ + struct txgbe_adapter *adapter = netdev_priv(dev); + unsigned int count = ch->combined_count; + u8 max_rss_indices = txgbe_max_rss_indices(adapter); + + /* verify they are not requesting separate vectors */ + if (!count || ch->rx_count || ch->tx_count) + return -EINVAL; + + /* verify other_count has not changed */ + if (ch->other_count != NON_Q_VECTORS) + return -EINVAL; + + /* verify the number of channels does not exceed hardware limits */ + if (count > txgbe_max_channels(adapter)) + return -EINVAL; + + /* 
update feature limits from largest to smallest supported values */ + adapter->ring_feature[RING_F_FDIR].limit = count; + + /* cap RSS limit */ + if (count > max_rss_indices) + count = max_rss_indices; + adapter->ring_feature[RING_F_RSS].limit = count; + + /* use setup TC to update any traffic class queue mapping */ + return txgbe_setup_tc(dev, netdev_get_num_tc(dev)); +} + +static int txgbe_get_module_info(struct net_device *dev, + struct ethtool_modinfo *modinfo) +{ + struct txgbe_adapter *adapter = netdev_priv(dev); + struct txgbe_hw *hw = &adapter->hw; + u32 status; + u8 sff8472_rev, addr_mode; + bool page_swap = false; + + /* Check whether we support SFF-8472 or not */ + status = TCALL(hw, phy.ops.read_i2c_eeprom, + TXGBE_SFF_SFF_8472_COMP, + &sff8472_rev); + if (status != 0) + return -EIO; + + /* addressing mode is not supported */ + status = TCALL(hw, phy.ops.read_i2c_eeprom, + TXGBE_SFF_SFF_8472_SWAP, + &addr_mode); + if (status != 0) + return -EIO; + + if (addr_mode & TXGBE_SFF_ADDRESSING_MODE) { + e_err(drv, "Address change required to access page 0xA2, " + "but not supported. Please report the module type to the " + "driver maintainers.\n"); + page_swap = true; + } + + if (sff8472_rev == TXGBE_SFF_SFF_8472_UNSUP || page_swap) { + /* We have a SFP, but it does not support SFF-8472 */ + modinfo->type = ETH_MODULE_SFF_8079; + modinfo->eeprom_len = ETH_MODULE_SFF_8079_LEN; + } else { + /* We have a SFP which supports a revision of SFF-8472. 
*/ + modinfo->type = ETH_MODULE_SFF_8472; + modinfo->eeprom_len = ETH_MODULE_SFF_8472_LEN; + } + + return 0; +} + +static int txgbe_get_module_eeprom(struct net_device *dev, + struct ethtool_eeprom *ee, + u8 *data) +{ + struct txgbe_adapter *adapter = netdev_priv(dev); + struct txgbe_hw *hw = &adapter->hw; + u32 status = TXGBE_ERR_PHY_ADDR_INVALID; + u8 databyte = 0xFF; + int i = 0; + + if (ee->len == 0) + return -EINVAL; + + for (i = ee->offset; i < ee->offset + ee->len; i++) { + /* I2C reads can take long time */ + if (test_bit(__TXGBE_IN_SFP_INIT, &adapter->state)) + return -EBUSY; + + if (i < ETH_MODULE_SFF_8079_LEN) + status = TCALL(hw, phy.ops.read_i2c_eeprom, i, + &databyte); + else + status = TCALL(hw, phy.ops.read_i2c_sff8472, i, + &databyte); + + if (status != 0) + return -EIO; + + data[i - ee->offset] = databyte; + } + + return 0; +} + +static int txgbe_get_eee(struct net_device *netdev, struct ethtool_eee *edata) +{ + return 0; +} + +static int txgbe_set_eee(struct net_device *netdev, struct ethtool_eee *edata) +{ + struct txgbe_adapter *adapter = netdev_priv(netdev); + struct txgbe_hw *hw = &adapter->hw; + struct ethtool_eee eee_data; + s32 ret_val; + + if (!(hw->mac.ops.setup_eee && + (adapter->flags2 & TXGBE_FLAG2_EEE_CAPABLE))) + return -EOPNOTSUPP; + + memset(&eee_data, 0, sizeof(struct ethtool_eee)); + + ret_val = txgbe_get_eee(netdev, &eee_data); + if (ret_val) + return ret_val; + + if (eee_data.eee_enabled && !edata->eee_enabled) { + if (eee_data.tx_lpi_enabled != edata->tx_lpi_enabled) { + e_dev_err("Setting EEE tx-lpi is not supported\n"); + return -EINVAL; + } + + if (eee_data.tx_lpi_timer != edata->tx_lpi_timer) { + e_dev_err("Setting EEE Tx LPI timer is not " + "supported\n"); + return -EINVAL; + } + + if (eee_data.advertised != edata->advertised) { + e_dev_err("Setting EEE advertised speeds is not " + "supported\n"); + return -EINVAL; + } + + } + + if (eee_data.eee_enabled != edata->eee_enabled) { + + if (edata->eee_enabled) + 
adapter->flags2 |= TXGBE_FLAG2_EEE_ENABLED; + else + adapter->flags2 &= ~TXGBE_FLAG2_EEE_ENABLED; + + /* reset link */ + if (netif_running(netdev)) + txgbe_reinit_locked(adapter); + else + txgbe_reset(adapter); + } + + return 0; +} + +static int txgbe_set_flash(struct net_device *netdev, struct ethtool_flash *ef) +{ + int ret; + const struct firmware *fw; + struct txgbe_adapter *adapter = netdev_priv(netdev); + + ret = request_firmware(&fw, ef->data, &netdev->dev); + if (ret < 0) + return ret; + + if (txgbe_mng_present(&adapter->hw)) { + ret = txgbe_upgrade_flash_hostif(&adapter->hw, ef->region, + fw->data, fw->size); + } else + ret = -EOPNOTSUPP; + + release_firmware(fw); + if (!ret) + dev_info(&netdev->dev, + "loaded firmware %s, reload txgbe driver\n", ef->data); + return ret; +} + +static struct ethtool_ops txgbe_ethtool_ops = { + .get_link_ksettings = txgbe_get_link_ksettings, + .set_link_ksettings = txgbe_set_link_ksettings, + .get_drvinfo = txgbe_get_drvinfo, + .get_regs_len = txgbe_get_regs_len, + .get_regs = txgbe_get_regs, + .get_wol = txgbe_get_wol, + .set_wol = txgbe_set_wol, + .nway_reset = txgbe_nway_reset, + .get_link = ethtool_op_get_link, + .get_eeprom_len = txgbe_get_eeprom_len, + .get_eeprom = txgbe_get_eeprom, + .set_eeprom = txgbe_set_eeprom, + .get_ringparam = txgbe_get_ringparam, + .set_ringparam = txgbe_set_ringparam, + .get_pauseparam = txgbe_get_pauseparam, + .set_pauseparam = txgbe_set_pauseparam, + .get_msglevel = txgbe_get_msglevel, + .set_msglevel = txgbe_set_msglevel, + .self_test = txgbe_diag_test, + .get_strings = txgbe_get_strings, + .set_phys_id = txgbe_set_phys_id, + .get_sset_count = txgbe_get_sset_count, + .get_ethtool_stats = txgbe_get_ethtool_stats, + .get_coalesce = txgbe_get_coalesce, + .set_coalesce = txgbe_set_coalesce, + .get_rxnfc = txgbe_get_rxnfc, + .set_rxnfc = txgbe_set_rxnfc, + .get_eee = txgbe_get_eee, + .set_eee = txgbe_set_eee, + .get_channels = txgbe_get_channels, + .set_channels = txgbe_set_channels, + 
.get_module_info = txgbe_get_module_info, + .get_module_eeprom = txgbe_get_module_eeprom, + .get_ts_info = txgbe_get_ts_info, + .get_rxfh_indir_size = txgbe_rss_indir_size, + .get_rxfh_key_size = txgbe_get_rxfh_key_size, + .get_rxfh = txgbe_get_rxfh, + .set_rxfh = txgbe_set_rxfh, + .flash_device = txgbe_set_flash, +}; + +void txgbe_set_ethtool_ops(struct net_device *netdev) +{ + netdev->ethtool_ops = &txgbe_ethtool_ops; +} diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe_hw.c b/drivers/net/ethernet/netswift/txgbe/txgbe_hw.c new file mode 100644 index 000000000000..17e366ebd6fe --- /dev/null +++ b/drivers/net/ethernet/netswift/txgbe/txgbe_hw.c @@ -0,0 +1,7072 @@ +/* + * WangXun 10 Gigabit PCI Express Linux driver + * Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * The full GNU General Public License is included in this distribution in + * the file called "COPYING". + * + * based on ixgbe_82599.c, Copyright(c) 1999 - 2017 Intel Corporation. + * Contact Information: + * Linux NICS linux.nics@intel.com + * e1000-devel Mailing List e1000-devel@lists.sourceforge.net + * Intel Corporation, 5200 N.E. 
Elam Young Parkway, Hillsboro, OR 97124-6497 + */ + + +#include "txgbe_type.h" +#include "txgbe_hw.h" +#include "txgbe_phy.h" +#include "txgbe.h" + + +#define TXGBE_SP_MAX_TX_QUEUES 128 +#define TXGBE_SP_MAX_RX_QUEUES 128 +#define TXGBE_SP_RAR_ENTRIES 128 +#define TXGBE_SP_MC_TBL_SIZE 128 +#define TXGBE_SP_VFT_TBL_SIZE 128 +#define TXGBE_SP_RX_PB_SIZE 512 + +STATIC s32 txgbe_get_eeprom_semaphore(struct txgbe_hw *hw); +STATIC void txgbe_release_eeprom_semaphore(struct txgbe_hw *hw); +STATIC s32 txgbe_mta_vector(struct txgbe_hw *hw, u8 *mc_addr); +STATIC s32 txgbe_get_san_mac_addr_offset(struct txgbe_hw *hw, + u16 *san_mac_offset); + +STATIC s32 txgbe_setup_copper_link(struct txgbe_hw *hw, + u32 speed, + bool autoneg_wait_to_complete); +s32 txgbe_check_mac_link(struct txgbe_hw *hw, u32 *speed, + bool *link_up, bool link_up_wait_to_complete); + + +u32 rd32_ephy(struct txgbe_hw *hw, u32 addr) +{ + unsigned int portRegOffset; + u32 data; + + /* Set the LAN port indicator to portRegOffset[1] */ + /* 1st, write the regOffset to IDA_ADDR register */ + portRegOffset = TXGBE_ETHPHY_IDA_ADDR; + wr32(hw, portRegOffset, addr); + + /* 2nd, read the data from IDA_DATA register */ + portRegOffset = TXGBE_ETHPHY_IDA_DATA; + data = rd32(hw, portRegOffset); + return data; +} + + +u32 txgbe_rd32_epcs(struct txgbe_hw *hw, u32 addr) +{ + unsigned int portRegOffset; + u32 data; + /* Set the LAN port indicator to portRegOffset[1] */ + /* 1st, write the regOffset to IDA_ADDR register */ + portRegOffset = TXGBE_XPCS_IDA_ADDR; + wr32(hw, portRegOffset, addr); + + /* 2nd, read the data from IDA_DATA register */ + portRegOffset = TXGBE_XPCS_IDA_DATA; + data = rd32(hw, portRegOffset); + + return data; +} + + +void txgbe_wr32_ephy(struct txgbe_hw *hw, u32 addr, u32 data) +{ + unsigned int portRegOffset; + + /* Set the LAN port indicator to portRegOffset[1] */ + /* 1st, write the regOffset to IDA_ADDR register */ + portRegOffset = TXGBE_ETHPHY_IDA_ADDR; + wr32(hw, portRegOffset, addr); + + /* 
2nd, read the data from IDA_DATA register */ + portRegOffset = TXGBE_ETHPHY_IDA_DATA; + wr32(hw, portRegOffset, data); +} + +void txgbe_wr32_epcs(struct txgbe_hw *hw, u32 addr, u32 data) +{ + unsigned int portRegOffset; + + /* Set the LAN port indicator to portRegOffset[1] */ + /* 1st, write the regOffset to IDA_ADDR register */ + portRegOffset = TXGBE_XPCS_IDA_ADDR; + wr32(hw, portRegOffset, addr); + + /* 2nd, read the data from IDA_DATA register */ + portRegOffset = TXGBE_XPCS_IDA_DATA; + wr32(hw, portRegOffset, data); +} + +/** + * txgbe_get_pcie_msix_count - Gets MSI-X vector count + * @hw: pointer to hardware structure + * + * Read PCIe configuration space, and get the MSI-X vector count from + * the capabilities table. + **/ +u16 txgbe_get_pcie_msix_count(struct txgbe_hw *hw) +{ + u16 msix_count = 1; + u16 max_msix_count; + u32 pos; + + DEBUGFUNC("\n"); + + max_msix_count = TXGBE_MAX_MSIX_VECTORS_SAPPHIRE; + pos = pci_find_capability(((struct txgbe_adapter *)hw->back)->pdev, PCI_CAP_ID_MSIX); + if (!pos) + return msix_count; + pci_read_config_word(((struct txgbe_adapter *)hw->back)->pdev, + pos + PCI_MSIX_FLAGS, &msix_count); + + if (TXGBE_REMOVED(hw->hw_addr)) + msix_count = 0; + msix_count &= TXGBE_PCIE_MSIX_TBL_SZ_MASK; + + /* MSI-X count is zero-based in HW */ + msix_count++; + + if (msix_count > max_msix_count) + msix_count = max_msix_count; + + return msix_count; +} + +/** + * txgbe_init_hw - Generic hardware initialization + * @hw: pointer to hardware structure + * + * Initialize the hardware by resetting the hardware, filling the bus info + * structure and media type, clears all on chip counters, initializes receive + * address registers, multicast table, VLAN filter table, calls routine to set + * up link and flow control settings, and leaves transmit and receive units + * disabled and uninitialized + **/ +s32 txgbe_init_hw(struct txgbe_hw *hw) +{ + s32 status; + + DEBUGFUNC("\n"); + + /* Reset the hardware */ + status = TCALL(hw, mac.ops.reset_hw); 
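The decode done by txgbe_get_pcie_msix_count() above can be shown on its own. A minimal standalone sketch, assuming the standard PCI MSI-X layout; `MSIX_TBL_SZ_MASK` (bits 10:0 of the Message Control word) and `decode_msix_count` are stand-ins for the driver's `TXGBE_PCIE_MSIX_TBL_SZ_MASK` and the surrounding code:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Standalone sketch of the MSI-X vector-count decode in
 * txgbe_get_pcie_msix_count(): the table-size field of the MSI-X
 * Message Control word is zero-based in hardware, so mask it out,
 * add one, then clamp to the device maximum.
 */
#define MSIX_TBL_SZ_MASK 0x7FF /* table-size field, bits 10:0 per PCI spec */

static uint16_t decode_msix_count(uint16_t msg_ctl, uint16_t max_vectors)
{
    uint16_t count = (msg_ctl & MSIX_TBL_SZ_MASK) + 1; /* zero-based in HW */

    return count > max_vectors ? max_vectors : count;
}
```

For example, a raw field value of 63 decodes to 64 vectors, which is exactly the sapphire maximum the driver clamps to.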
+ + if (status == 0) { + /* Start the HW */ + status = TCALL(hw, mac.ops.start_hw); + } + + return status; +} + + +/** + * txgbe_clear_hw_cntrs - Generic clear hardware counters + * @hw: pointer to hardware structure + * + * Clears all hardware statistics counters by reading them from the hardware + * Statistics counters are clear on read. + **/ +s32 txgbe_clear_hw_cntrs(struct txgbe_hw *hw) +{ + u16 i = 0; + + DEBUGFUNC("\n"); + + rd32(hw, TXGBE_RX_CRC_ERROR_FRAMES_LOW); + for (i = 0; i < 8; i++) + rd32(hw, TXGBE_RDB_MPCNT(i)); + + rd32(hw, TXGBE_RX_LEN_ERROR_FRAMES_LOW); + rd32(hw, TXGBE_RDB_LXONTXC); + rd32(hw, TXGBE_RDB_LXOFFTXC); + rd32(hw, TXGBE_MAC_LXONRXC); + rd32(hw, TXGBE_MAC_LXOFFRXC); + + for (i = 0; i < 8; i++) { + rd32(hw, TXGBE_RDB_PXONTXC(i)); + rd32(hw, TXGBE_RDB_PXOFFTXC(i)); + rd32(hw, TXGBE_MAC_PXONRXC(i)); + wr32m(hw, TXGBE_MMC_CONTROL, TXGBE_MMC_CONTROL_UP, i<<16); + rd32(hw, TXGBE_MAC_PXOFFRXC); + } + for (i = 0; i < 8; i++) + rd32(hw, TXGBE_RDB_PXON2OFFCNT(i)); + for (i = 0; i < 128; i++) { + wr32(hw, TXGBE_PX_MPRC(i), 0); + } + + rd32(hw, TXGBE_PX_GPRC); + rd32(hw, TXGBE_PX_GPTC); + rd32(hw, TXGBE_PX_GORC_MSB); + rd32(hw, TXGBE_PX_GOTC_MSB); + + rd32(hw, TXGBE_RX_BC_FRAMES_GOOD_LOW); + rd32(hw, TXGBE_RX_UNDERSIZE_FRAMES_GOOD); + rd32(hw, TXGBE_RX_OVERSIZE_FRAMES_GOOD); + rd32(hw, TXGBE_RX_FRAME_CNT_GOOD_BAD_LOW); + rd32(hw, TXGBE_TX_FRAME_CNT_GOOD_BAD_LOW); + rd32(hw, TXGBE_TX_MC_FRAMES_GOOD_LOW); + rd32(hw, TXGBE_TX_BC_FRAMES_GOOD_LOW); + rd32(hw, TXGBE_RDM_DRP_PKT); + return 0; +} + +/** + * txgbe_device_supports_autoneg_fc - Check if device supports autonegotiation + * of flow control + * @hw: pointer to hardware structure + * + * This function returns true if the device supports flow control + * autonegotiation, and false if it does not. 
+ * + **/ +bool txgbe_device_supports_autoneg_fc(struct txgbe_hw *hw) +{ + bool supported = false; + u32 speed; + bool link_up; + u8 device_type = hw->subsystem_id & 0xF0; + + DEBUGFUNC("\n"); + + switch (hw->phy.media_type) { + case txgbe_media_type_fiber: + TCALL(hw, mac.ops.check_link, &speed, &link_up, false); + /* if link is down, assume supported */ + if (link_up) + supported = speed == TXGBE_LINK_SPEED_1GB_FULL ? + true : false; + else + supported = true; + break; + case txgbe_media_type_backplane: + supported = (device_type != TXGBE_ID_MAC_XAUI && + device_type != TXGBE_ID_MAC_SGMII); + break; + case txgbe_media_type_copper: + /* only some copper devices support flow control autoneg */ + supported = true; + break; + default: + break; + } + + if (!supported) + ERROR_REPORT2(TXGBE_ERROR_UNSUPPORTED, + "Device %x does not support flow control autoneg", + hw->device_id); + return supported; +} + +/** + * txgbe_setup_fc - Set up flow control + * @hw: pointer to hardware structure + * + * Called at init time to set up flow control. + **/ +s32 txgbe_setup_fc(struct txgbe_hw *hw) +{ + s32 ret_val = 0; + u32 pcap = 0; + u32 value = 0; + u32 pcap_backplane = 0; + + DEBUGFUNC("\n"); + + /* Validate the requested mode */ + if (hw->fc.strict_ieee && hw->fc.requested_mode == txgbe_fc_rx_pause) { + ERROR_REPORT1(TXGBE_ERROR_UNSUPPORTED, + "txgbe_fc_rx_pause not valid in strict IEEE mode\n"); + ret_val = TXGBE_ERR_INVALID_LINK_SETTINGS; + goto out; + } + + /* + * 10gig parts do not have a word in the EEPROM to determine the + * default flow control setting, so we explicitly set it to full. + */ + if (hw->fc.requested_mode == txgbe_fc_default) + hw->fc.requested_mode = txgbe_fc_full; + + /* + * Set up the 1G and 10G flow control advertisement registers so the + * HW will be able to do fc autoneg once the cable is plugged in. If + * we link at 10G, the 1G advertisement is harmless and vice versa.
+ */ + + /* + * The possible values of fc.requested_mode are: + * 0: Flow control is completely disabled + * 1: Rx flow control is enabled (we can receive pause frames, + * but not send pause frames). + * 2: Tx flow control is enabled (we can send pause frames but + * we do not support receiving pause frames). + * 3: Both Rx and Tx flow control (symmetric) are enabled. + * other: Invalid. + */ + switch (hw->fc.requested_mode) { + case txgbe_fc_none: + /* Flow control completely disabled by software override. */ + break; + case txgbe_fc_tx_pause: + /* + * Tx Flow control is enabled, and Rx Flow control is + * disabled by software override. + */ + pcap |= TXGBE_SR_MII_MMD_AN_ADV_PAUSE_ASM; + pcap_backplane |= TXGBE_SR_AN_MMD_ADV_REG1_PAUSE_ASM; + break; + case txgbe_fc_rx_pause: + /* + * Rx Flow control is enabled and Tx Flow control is + * disabled by software override. Since there really + * isn't a way to advertise that we are capable of RX + * Pause ONLY, we will advertise that we support both + * symmetric and asymmetric Rx PAUSE, as such we fall + * through to the fc_full statement. Later, we will + * disable the adapter's ability to send PAUSE frames. + */ + case txgbe_fc_full: + /* Flow control (both Rx and Tx) is enabled by SW override. */ + pcap |= TXGBE_SR_MII_MMD_AN_ADV_PAUSE_SYM | + TXGBE_SR_MII_MMD_AN_ADV_PAUSE_ASM; + pcap_backplane |= TXGBE_SR_AN_MMD_ADV_REG1_PAUSE_SYM | + TXGBE_SR_AN_MMD_ADV_REG1_PAUSE_ASM; + break; + default: + ERROR_REPORT1(TXGBE_ERROR_ARGUMENT, + "Flow control param set incorrectly\n"); + ret_val = TXGBE_ERR_CONFIG; + goto out; + break; + } + + /* + * Enable auto-negotiation between the MAC & PHY; + * the MAC will advertise clause 37 flow control. 
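The switch on `hw->fc.requested_mode` above reduces to a small mode-to-bits table. A minimal standalone sketch of that mapping, with a hypothetical enum and bit values standing in for the driver's `txgbe_fc_*` modes and `TXGBE_SR_*_PAUSE_SYM/ASM` defines:

```c
#include <assert.h>

/*
 * Standalone sketch of the pause-bit advertisement chosen by
 * txgbe_setup_fc(). FC_* and the SYM/ASM values are placeholders
 * for the driver's own enums and register defines.
 */
enum fc_mode { FC_NONE, FC_RX_PAUSE, FC_TX_PAUSE, FC_FULL };

#define PAUSE_SYM 0x1
#define PAUSE_ASM 0x2

static unsigned int fc_pause_bits(enum fc_mode mode)
{
    switch (mode) {
    case FC_TX_PAUSE:
        return PAUSE_ASM;             /* Tx-only: asymmetric pause */
    case FC_RX_PAUSE:                 /* Rx-only cannot be advertised alone, */
    case FC_FULL:                     /* so it advertises the same as full */
        return PAUSE_SYM | PAUSE_ASM;
    default:
        return 0;                     /* flow control disabled */
    }
}
```

This is why the driver's comment says fc_rx_pause "falls through" to fc_full: the advertisement is identical, and only the adapter's own ability to send PAUSE frames is restricted later.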
+ */ + value = txgbe_rd32_epcs(hw, TXGBE_SR_MII_MMD_AN_ADV); + value = (value & ~(TXGBE_SR_MII_MMD_AN_ADV_PAUSE_ASM | + TXGBE_SR_MII_MMD_AN_ADV_PAUSE_SYM)) | pcap; + txgbe_wr32_epcs(hw, TXGBE_SR_MII_MMD_AN_ADV, value); + + /* + * AUTOC restart handles negotiation of 1G and 10G on backplane + * and copper. + */ + if (hw->phy.media_type == txgbe_media_type_backplane) { + value = txgbe_rd32_epcs(hw, TXGBE_SR_AN_MMD_ADV_REG1); + value = (value & ~(TXGBE_SR_AN_MMD_ADV_REG1_PAUSE_ASM | + TXGBE_SR_AN_MMD_ADV_REG1_PAUSE_SYM)) | + pcap_backplane; + txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_ADV_REG1, value); + + } else if ((hw->phy.media_type == txgbe_media_type_copper) && + (txgbe_device_supports_autoneg_fc(hw))) { + ret_val = txgbe_set_phy_pause_advertisement(hw, pcap_backplane); + } +out: + return ret_val; +} + +/** + * txgbe_read_pba_string - Reads part number string from EEPROM + * @hw: pointer to hardware structure + * @pba_num: stores the part number string from the EEPROM + * @pba_num_size: part number string buffer length + * + * Reads the part number string from the EEPROM. 
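The legacy-format branch of txgbe_read_pba_string() below stores raw 4-bit values and then offsets them into '0'..'9' or 'A'..'F'. A minimal standalone sketch of that conversion step (`nibble_to_ascii` is a hypothetical helper, not a driver symbol):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Standalone sketch of the nibble-to-ASCII step used by
 * txgbe_read_pba_string() for legacy-format PBA numbers:
 * 0x0-0x9 become '0'-'9', 0xA-0xF become 'A'-'F'.
 */
static char nibble_to_ascii(uint8_t nibble)
{
    return nibble < 0xA ? '0' + nibble : 'A' + (nibble - 0xA);
}
```

The driver applies the same offsets in place over the 10-character buffer, skipping the '-' separator it inserted between the two EEPROM words.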
+ **/ +s32 txgbe_read_pba_string(struct txgbe_hw *hw, u8 *pba_num, + u32 pba_num_size) +{ + s32 ret_val; + u16 data; + u16 pba_ptr; + u16 offset; + u16 length; + + DEBUGFUNC("\n"); + + if (pba_num == NULL) { + DEBUGOUT("PBA string buffer was null\n"); + return TXGBE_ERR_INVALID_ARGUMENT; + } + + ret_val = TCALL(hw, eeprom.ops.read, + hw->eeprom.sw_region_offset + TXGBE_PBANUM0_PTR, + &data); + if (ret_val) { + DEBUGOUT("NVM Read Error\n"); + return ret_val; + } + + ret_val = TCALL(hw, eeprom.ops.read, + hw->eeprom.sw_region_offset + TXGBE_PBANUM1_PTR, + &pba_ptr); + if (ret_val) { + DEBUGOUT("NVM Read Error\n"); + return ret_val; + } + + /* + * if data is not ptr guard the PBA must be in legacy format which + * means pba_ptr is actually our second data word for the PBA number + * and we can decode it into an ascii string + */ + if (data != TXGBE_PBANUM_PTR_GUARD) { + DEBUGOUT("NVM PBA number is not stored as string\n"); + + /* we will need 11 characters to store the PBA */ + if (pba_num_size < 11) { + DEBUGOUT("PBA string buffer too small\n"); + return TXGBE_ERR_NO_SPACE; + } + + /* extract hex string from data and pba_ptr */ + pba_num[0] = (data >> 12) & 0xF; + pba_num[1] = (data >> 8) & 0xF; + pba_num[2] = (data >> 4) & 0xF; + pba_num[3] = data & 0xF; + pba_num[4] = (pba_ptr >> 12) & 0xF; + pba_num[5] = (pba_ptr >> 8) & 0xF; + pba_num[6] = '-'; + pba_num[7] = 0; + pba_num[8] = (pba_ptr >> 4) & 0xF; + pba_num[9] = pba_ptr & 0xF; + + /* put a null character on the end of our string */ + pba_num[10] = '\0'; + + /* switch all the data but the '-' to hex char */ + for (offset = 0; offset < 10; offset++) { + if (pba_num[offset] < 0xA) + pba_num[offset] += '0'; + else if (pba_num[offset] < 0x10) + pba_num[offset] += 'A' - 0xA; + } + + return 0; + } + + ret_val = TCALL(hw, eeprom.ops.read, pba_ptr, &length); + if (ret_val) { + DEBUGOUT("NVM Read Error\n"); + return ret_val; + } + + if (length == 0xFFFF || length == 0) { + DEBUGOUT("NVM PBA number section invalid 
length\n"); + return TXGBE_ERR_PBA_SECTION; + } + + /* check if pba_num buffer is big enough */ + if (pba_num_size < (((u32)length * 2) - 1)) { + DEBUGOUT("PBA string buffer too small\n"); + return TXGBE_ERR_NO_SPACE; + } + + /* trim pba length from start of string */ + pba_ptr++; + length--; + + for (offset = 0; offset < length; offset++) { + ret_val = TCALL(hw, eeprom.ops.read, pba_ptr + offset, &data); + if (ret_val) { + DEBUGOUT("NVM Read Error\n"); + return ret_val; + } + pba_num[offset * 2] = (u8)(data >> 8); + pba_num[(offset * 2) + 1] = (u8)(data & 0xFF); + } + pba_num[offset * 2] = '\0'; + + return 0; +} + +/** + * txgbe_get_mac_addr - Generic get MAC address + * @hw: pointer to hardware structure + * @mac_addr: Adapter MAC address + * + * Reads the adapter's MAC address from first Receive Address Register (RAR0) + * A reset of the adapter must be performed prior to calling this function + * in order for the MAC address to have been loaded from the EEPROM into RAR0 + **/ +s32 txgbe_get_mac_addr(struct txgbe_hw *hw, u8 *mac_addr) +{ + u32 rar_high; + u32 rar_low; + u16 i; + + DEBUGFUNC("\n"); + + wr32(hw, TXGBE_PSR_MAC_SWC_IDX, 0); + rar_high = rd32(hw, TXGBE_PSR_MAC_SWC_AD_H); + rar_low = rd32(hw, TXGBE_PSR_MAC_SWC_AD_L); + + for (i = 0; i < 2; i++) + mac_addr[i] = (u8)(rar_high >> (1 - i) * 8); + + for (i = 0; i < 4; i++) + mac_addr[i + 2] = (u8)(rar_low >> (3 - i) * 8); + + return 0; +} + +/** + * txgbe_set_pci_config_data - Generic store PCI bus info + * @hw: pointer to hardware structure + * @link_status: the link status returned by the PCI config space + * + * Stores the PCI bus info (speed, width, type) within the txgbe_hw structure + **/ +void txgbe_set_pci_config_data(struct txgbe_hw *hw, u16 link_status) +{ + if (hw->bus.type == txgbe_bus_type_unknown) + hw->bus.type = txgbe_bus_type_pci_express; + + switch (link_status & TXGBE_PCI_LINK_WIDTH) { + case TXGBE_PCI_LINK_WIDTH_1: + hw->bus.width = txgbe_bus_width_pcie_x1; + break; + case 
TXGBE_PCI_LINK_WIDTH_2: + hw->bus.width = txgbe_bus_width_pcie_x2; + break; + case TXGBE_PCI_LINK_WIDTH_4: + hw->bus.width = txgbe_bus_width_pcie_x4; + break; + case TXGBE_PCI_LINK_WIDTH_8: + hw->bus.width = txgbe_bus_width_pcie_x8; + break; + default: + hw->bus.width = txgbe_bus_width_unknown; + break; + } + + switch (link_status & TXGBE_PCI_LINK_SPEED) { + case TXGBE_PCI_LINK_SPEED_2500: + hw->bus.speed = txgbe_bus_speed_2500; + break; + case TXGBE_PCI_LINK_SPEED_5000: + hw->bus.speed = txgbe_bus_speed_5000; + break; + case TXGBE_PCI_LINK_SPEED_8000: + hw->bus.speed = txgbe_bus_speed_8000; + break; + default: + hw->bus.speed = txgbe_bus_speed_unknown; + break; + } + +} + +/** + * txgbe_get_bus_info - Generic set PCI bus info + * @hw: pointer to hardware structure + * + * Gets the PCI bus info (speed, width, type) then calls helper function to + * store this data within the txgbe_hw structure. + **/ +s32 txgbe_get_bus_info(struct txgbe_hw *hw) +{ + u16 link_status; + + DEBUGFUNC("\n"); + + /* Get the negotiated link width and speed from PCI config space */ + link_status = txgbe_read_pci_cfg_word(hw, TXGBE_PCI_LINK_STATUS); + + txgbe_set_pci_config_data(hw, link_status); + + return 0; +} + +/** + * txgbe_set_lan_id_multi_port_pcie - Set LAN id for PCIe multiple port devices + * @hw: pointer to the HW structure + * + * Determines the LAN function id by reading memory-mapped registers + * and swaps the port value if requested. 
+ **/ +void txgbe_set_lan_id_multi_port_pcie(struct txgbe_hw *hw) +{ + struct txgbe_bus_info *bus = &hw->bus; + u32 reg; + + DEBUGFUNC("\n"); + + reg = rd32(hw, TXGBE_CFG_PORT_ST); + bus->lan_id = TXGBE_CFG_PORT_ST_LAN_ID(reg); + + /* check for a port swap */ + reg = rd32(hw, TXGBE_MIS_PWR); + if (TXGBE_MIS_PWR_LAN_ID_1 == TXGBE_MIS_PWR_LAN_ID(reg)) + bus->func = 0; + else + bus->func = bus->lan_id; +} + +/** + * txgbe_stop_adapter - Generic stop Tx/Rx units + * @hw: pointer to hardware structure + * + * Sets the adapter_stopped flag within txgbe_hw struct. Clears interrupts, + * disables transmit and receive units. The adapter_stopped flag is used by + * the shared code and drivers to determine if the adapter is in a stopped + * state and should not touch the hardware. + **/ +s32 txgbe_stop_adapter(struct txgbe_hw *hw) +{ + u16 i; + + DEBUGFUNC("\n"); + + /* + * Set the adapter_stopped flag so other driver functions stop touching + * the hardware + */ + hw->adapter_stopped = true; + + /* Disable the receive unit */ + TCALL(hw, mac.ops.disable_rx); + + /* Set interrupt mask to stop interrupts from being generated */ + txgbe_intr_disable(hw, TXGBE_INTR_ALL); + + /* Clear any pending interrupts, flush previous writes */ + wr32(hw, TXGBE_PX_MISC_IC, 0xffffffff); + wr32(hw, TXGBE_BME_CTL, 0x3); + + /* Disable the transmit unit. Each queue must be disabled. 
*/ + for (i = 0; i < hw->mac.max_tx_queues; i++) { + wr32m(hw, TXGBE_PX_TR_CFG(i), + TXGBE_PX_TR_CFG_SWFLSH | TXGBE_PX_TR_CFG_ENABLE, + TXGBE_PX_TR_CFG_SWFLSH); + } + + /* Disable the receive unit by stopping each queue */ + for (i = 0; i < hw->mac.max_rx_queues; i++) { + wr32m(hw, TXGBE_PX_RR_CFG(i), + TXGBE_PX_RR_CFG_RR_EN, 0); + } + + /* flush all queues disables */ + TXGBE_WRITE_FLUSH(hw); + + /* + * Prevent the PCI-E bus from hanging by disabling PCI-E master + * access and verify no pending requests + */ + return txgbe_disable_pcie_master(hw); +} + +/** + * txgbe_led_on - Turns on the software controllable LEDs. + * @hw: pointer to hardware structure + * @index: led number to turn on + **/ +s32 txgbe_led_on(struct txgbe_hw *hw, u32 index) +{ + u32 led_reg = rd32(hw, TXGBE_CFG_LED_CTL); + u16 value = 0; + DEBUGFUNC("\n"); + + if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_XAUI) { + txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF021, &value); + txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF021, value | 0x3); + txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF022, &value); + txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF022, value | 0x3); + txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF023, &value); + txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF023, value | 0x3); + } + /* To turn on the LED, set mode to ON. */ + led_reg |= index | (index << TXGBE_CFG_LED_CTL_LINK_OD_SHIFT); + wr32(hw, TXGBE_CFG_LED_CTL, led_reg); + TXGBE_WRITE_FLUSH(hw); + + return 0; +} + +/** + * txgbe_led_off - Turns off the software controllable LEDs. 
+ * @hw: pointer to hardware structure + * @index: led number to turn off + **/ +s32 txgbe_led_off(struct txgbe_hw *hw, u32 index) +{ + u32 led_reg = rd32(hw, TXGBE_CFG_LED_CTL); + u16 value = 0; + DEBUGFUNC("\n"); + + if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_XAUI) { + txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF021, &value); + txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF021, value & 0xFFFC); + txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF022, &value); + txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF022, value & 0xFFFC); + txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF023, &value); + txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF023, value & 0xFFFC); + } + + /* To turn off the LED, set mode to OFF. */ + led_reg &= ~(index << TXGBE_CFG_LED_CTL_LINK_OD_SHIFT); + led_reg |= index; + wr32(hw, TXGBE_CFG_LED_CTL, led_reg); + TXGBE_WRITE_FLUSH(hw); + return 0; +} + +/** + * txgbe_get_eeprom_semaphore - Get hardware semaphore + * @hw: pointer to hardware structure + * + * Sets the hardware semaphores so EEPROM access can occur for bit-bang method + **/ +STATIC s32 txgbe_get_eeprom_semaphore(struct txgbe_hw *hw) +{ + s32 status = TXGBE_ERR_EEPROM; + u32 timeout = 2000; + u32 i; + u32 swsm; + + /* Get SMBI software semaphore between device drivers first */ + for (i = 0; i < timeout; i++) { + /* + * If the SMBI bit is 0 when we read it, then the bit will be + * set and we have the semaphore + */ + swsm = rd32(hw, TXGBE_MIS_SWSM); + if (!(swsm & TXGBE_MIS_SWSM_SMBI)) { + status = 0; + break; + } + usec_delay(50); + } + + if (i == timeout) { + DEBUGOUT("Driver can't access the Eeprom - SMBI Semaphore " + "not granted.\n"); + /* + * this release is particularly important because our attempts + * above to get the semaphore may have succeeded, and if there + * was a timeout, we should unconditionally clear the semaphore + * bits to free the driver to make progress + */ + txgbe_release_eeprom_semaphore(hw); + + usec_delay(50); + /* + * 
one last try + * If the SMBI bit is 0 when we read it, then the bit will be + * set and we have the semaphore + */ + swsm = rd32(hw, TXGBE_MIS_SWSM); + if (!(swsm & TXGBE_MIS_SWSM_SMBI)) + status = 0; + } + + /* Now get the semaphore between SW/FW through the SWESMBI bit */ + if (status == 0) { + for (i = 0; i < timeout; i++) { + if (txgbe_check_mng_access(hw)) { + /* Set the SW EEPROM semaphore bit to request access */ + wr32m(hw, TXGBE_MNG_SW_SM, + TXGBE_MNG_SW_SM_SM, TXGBE_MNG_SW_SM_SM); + + /* + * If we set the bit successfully then we got + * semaphore. + */ + swsm = rd32(hw, TXGBE_MNG_SW_SM); + if (swsm & TXGBE_MNG_SW_SM_SM) + break; + } + usec_delay(50); + } + + /* + * Release semaphores and return error if SW EEPROM semaphore + * was not granted because we don't have access to the EEPROM + */ + if (i >= timeout) { + ERROR_REPORT1(TXGBE_ERROR_POLLING, + "SWESMBI Software EEPROM semaphore not granted.\n"); + txgbe_release_eeprom_semaphore(hw); + status = TXGBE_ERR_EEPROM; + } + } else { + ERROR_REPORT1(TXGBE_ERROR_POLLING, + "Software semaphore SMBI between device drivers " + "not granted.\n"); + } + + return status; +} + +/** + * txgbe_release_eeprom_semaphore - Release hardware semaphore + * @hw: pointer to hardware structure + * + * This function clears hardware semaphore bits. + **/ +STATIC void txgbe_release_eeprom_semaphore(struct txgbe_hw *hw) +{ + if (txgbe_check_mng_access(hw)) { + wr32m(hw, TXGBE_MNG_SW_SM, + TXGBE_MNG_SW_SM_SM, 0); + wr32m(hw, TXGBE_MIS_SWSM, + TXGBE_MIS_SWSM_SMBI, 0); + TXGBE_WRITE_FLUSH(hw); + } +} + +/** + * txgbe_validate_mac_addr - Validate MAC address + * @mac_addr: pointer to MAC address. 
+ *
+ * Tests a MAC address to ensure it is a valid Individual Address
+ **/
+s32 txgbe_validate_mac_addr(u8 *mac_addr)
+{
+	s32 status = 0;
+
+	DEBUGFUNC("\n");
+
+	/* Make sure it is not a multicast address */
+	if (TXGBE_IS_MULTICAST(mac_addr)) {
+		DEBUGOUT("MAC address is multicast\n");
+		status = TXGBE_ERR_INVALID_MAC_ADDR;
+	/* Not a broadcast address */
+	} else if (TXGBE_IS_BROADCAST(mac_addr)) {
+		DEBUGOUT("MAC address is broadcast\n");
+		status = TXGBE_ERR_INVALID_MAC_ADDR;
+	/* Reject the zero address */
+	} else if (mac_addr[0] == 0 && mac_addr[1] == 0 && mac_addr[2] == 0 &&
+		   mac_addr[3] == 0 && mac_addr[4] == 0 && mac_addr[5] == 0) {
+		DEBUGOUT("MAC address is all zeros\n");
+		status = TXGBE_ERR_INVALID_MAC_ADDR;
+	}
+	return status;
+}
+
+/**
+ * txgbe_set_rar - Set Rx address register
+ * @hw: pointer to hardware structure
+ * @index: Receive address register to write
+ * @addr: Address to put into receive address register
+ * @pools: VMDq "set" or "pool" index
+ * @enable_addr: set flag that address is active
+ *
+ * Puts an Ethernet address into a receive address register.
+ **/ +s32 txgbe_set_rar(struct txgbe_hw *hw, u32 index, u8 *addr, u64 pools, + u32 enable_addr) +{ + u32 rar_low, rar_high; + u32 rar_entries = hw->mac.num_rar_entries; + + DEBUGFUNC("\n"); + + /* Make sure we are using a valid rar index range */ + if (index >= rar_entries) { + ERROR_REPORT2(TXGBE_ERROR_ARGUMENT, + "RAR index %d is out of range.\n", index); + return TXGBE_ERR_INVALID_ARGUMENT; + } + + /* select the MAC address */ + wr32(hw, TXGBE_PSR_MAC_SWC_IDX, index); + + /* setup VMDq pool mapping */ + wr32(hw, TXGBE_PSR_MAC_SWC_VM_L, pools & 0xFFFFFFFF); + wr32(hw, TXGBE_PSR_MAC_SWC_VM_H, pools >> 32); + + /* + * HW expects these in little endian so we reverse the byte + * order from network order (big endian) to little endian + * + * Some parts put the VMDq setting in the extra RAH bits, + * so save everything except the lower 16 bits that hold part + * of the address and the address valid bit. + */ + rar_low = ((u32)addr[5] | + ((u32)addr[4] << 8) | + ((u32)addr[3] << 16) | + ((u32)addr[2] << 24)); + rar_high = ((u32)addr[1] | + ((u32)addr[0] << 8)); + if (enable_addr != 0) + rar_high |= TXGBE_PSR_MAC_SWC_AD_H_AV; + + wr32(hw, TXGBE_PSR_MAC_SWC_AD_L, rar_low); + wr32m(hw, TXGBE_PSR_MAC_SWC_AD_H, + (TXGBE_PSR_MAC_SWC_AD_H_AD(~0) | + TXGBE_PSR_MAC_SWC_AD_H_ADTYPE(~0) | + TXGBE_PSR_MAC_SWC_AD_H_AV), + rar_high); + + return 0; +} + +/** + * txgbe_clear_rar - Remove Rx address register + * @hw: pointer to hardware structure + * @index: Receive address register to write + * + * Clears an ethernet address from a receive address register. 
+ **/ +s32 txgbe_clear_rar(struct txgbe_hw *hw, u32 index) +{ + u32 rar_entries = hw->mac.num_rar_entries; + + DEBUGFUNC("\n"); + + /* Make sure we are using a valid rar index range */ + if (index >= rar_entries) { + ERROR_REPORT2(TXGBE_ERROR_ARGUMENT, + "RAR index %d is out of range.\n", index); + return TXGBE_ERR_INVALID_ARGUMENT; + } + + /* + * Some parts put the VMDq setting in the extra RAH bits, + * so save everything except the lower 16 bits that hold part + * of the address and the address valid bit. + */ + wr32(hw, TXGBE_PSR_MAC_SWC_IDX, index); + + wr32(hw, TXGBE_PSR_MAC_SWC_VM_L, 0); + wr32(hw, TXGBE_PSR_MAC_SWC_VM_H, 0); + + wr32(hw, TXGBE_PSR_MAC_SWC_AD_L, 0); + wr32m(hw, TXGBE_PSR_MAC_SWC_AD_H, + (TXGBE_PSR_MAC_SWC_AD_H_AD(~0) | + TXGBE_PSR_MAC_SWC_AD_H_ADTYPE(~0) | + TXGBE_PSR_MAC_SWC_AD_H_AV), + 0); + + return 0; +} + +/** + * txgbe_init_rx_addrs - Initializes receive address filters. + * @hw: pointer to hardware structure + * + * Places the MAC address in receive address register 0 and clears the rest + * of the receive address registers. Clears the multicast table. Assumes + * the receiver is in reset when the routine is called. + **/ +s32 txgbe_init_rx_addrs(struct txgbe_hw *hw) +{ + u32 i; + u32 rar_entries = hw->mac.num_rar_entries; + u32 psrctl; + + DEBUGFUNC("\n"); + + /* + * If the current mac address is valid, assume it is a software override + * to the permanent address. + * Otherwise, use the permanent address from the eeprom. + */ + if (txgbe_validate_mac_addr(hw->mac.addr) == + TXGBE_ERR_INVALID_MAC_ADDR) { + /* Get the MAC address from the RAR0 for later reference */ + TCALL(hw, mac.ops.get_mac_addr, hw->mac.addr); + + DEBUGOUT3(" Keeping Current RAR0 Addr =%.2X %.2X %.2X %.2X %.2X %.2X\n", + hw->mac.addr[0], hw->mac.addr[1], + hw->mac.addr[2], hw->mac.addr[3], + hw->mac.addr[4], hw->mac.addr[5]); + } else { + /* Setup the receive address. 
*/ + DEBUGOUT("Overriding MAC Address in RAR[0]\n"); + DEBUGOUT3(" New MAC Addr =%.2X %.2X %.2X %.2X %.2X %.2X\n", + hw->mac.addr[0], hw->mac.addr[1], + hw->mac.addr[2], hw->mac.addr[3], + hw->mac.addr[4], hw->mac.addr[5]); + + TCALL(hw, mac.ops.set_rar, 0, hw->mac.addr, 0, + TXGBE_PSR_MAC_SWC_AD_H_AV); + + /* clear VMDq pool/queue selection for RAR 0 */ + TCALL(hw, mac.ops.clear_vmdq, 0, TXGBE_CLEAR_VMDQ_ALL); + } + hw->addr_ctrl.overflow_promisc = 0; + + hw->addr_ctrl.rar_used_count = 1; + + /* Zero out the other receive addresses. */ + DEBUGOUT1("Clearing RAR[1-%d]\n", rar_entries - 1); + for (i = 1; i < rar_entries; i++) { + wr32(hw, TXGBE_PSR_MAC_SWC_IDX, i); + wr32(hw, TXGBE_PSR_MAC_SWC_AD_L, 0); + wr32(hw, TXGBE_PSR_MAC_SWC_AD_H, 0); + } + + /* Clear the MTA */ + hw->addr_ctrl.mta_in_use = 0; + psrctl = rd32(hw, TXGBE_PSR_CTL); + psrctl &= ~(TXGBE_PSR_CTL_MO | TXGBE_PSR_CTL_MFE); + psrctl |= hw->mac.mc_filter_type << TXGBE_PSR_CTL_MO_SHIFT; + wr32(hw, TXGBE_PSR_CTL, psrctl); + DEBUGOUT(" Clearing MTA\n"); + for (i = 0; i < hw->mac.mcft_size; i++) + wr32(hw, TXGBE_PSR_MC_TBL(i), 0); + + TCALL(hw, mac.ops.init_uta_tables); + + return 0; +} + +/** + * txgbe_add_uc_addr - Adds a secondary unicast address. + * @hw: pointer to hardware structure + * @addr: new address + * + * Adds it to unused receive address register or goes into promiscuous mode. 
+ **/ +void txgbe_add_uc_addr(struct txgbe_hw *hw, u8 *addr, u32 vmdq) +{ + u32 rar_entries = hw->mac.num_rar_entries; + u32 rar; + + DEBUGFUNC("\n"); + + DEBUGOUT6(" UC Addr = %.2X %.2X %.2X %.2X %.2X %.2X\n", + addr[0], addr[1], addr[2], addr[3], addr[4], addr[5]); + + /* + * Place this address in the RAR if there is room, + * else put the controller into promiscuous mode + */ + if (hw->addr_ctrl.rar_used_count < rar_entries) { + rar = hw->addr_ctrl.rar_used_count; + TCALL(hw, mac.ops.set_rar, rar, addr, vmdq, + TXGBE_PSR_MAC_SWC_AD_H_AV); + DEBUGOUT1("Added a secondary address to RAR[%d]\n", rar); + hw->addr_ctrl.rar_used_count++; + } else { + hw->addr_ctrl.overflow_promisc++; + } + + DEBUGOUT("txgbe_add_uc_addr Complete\n"); +} + +/** + * txgbe_update_uc_addr_list - Updates MAC list of secondary addresses + * @hw: pointer to hardware structure + * @addr_list: the list of new addresses + * @addr_count: number of addresses + * @next: iterator function to walk the address list + * + * The given list replaces any existing list. Clears the secondary addrs from + * receive address registers. Uses unused receive address registers for the + * first secondary addresses, and falls back to promiscuous mode as needed. + * + * Drivers using secondary unicast addresses must set user_set_promisc when + * manually putting the device into promiscuous mode. 
+ **/ +s32 txgbe_update_uc_addr_list(struct txgbe_hw *hw, u8 *addr_list, + u32 addr_count, txgbe_mc_addr_itr next) +{ + u8 *addr; + u32 i; + u32 old_promisc_setting = hw->addr_ctrl.overflow_promisc; + u32 uc_addr_in_use; + u32 vmdq; + + DEBUGFUNC("\n"); + + /* + * Clear accounting of old secondary address list, + * don't count RAR[0] + */ + uc_addr_in_use = hw->addr_ctrl.rar_used_count - 1; + hw->addr_ctrl.rar_used_count -= uc_addr_in_use; + hw->addr_ctrl.overflow_promisc = 0; + + /* Zero out the other receive addresses */ + DEBUGOUT1("Clearing RAR[1-%d]\n", uc_addr_in_use+1); + for (i = 0; i < uc_addr_in_use; i++) { + wr32(hw, TXGBE_PSR_MAC_SWC_IDX, 1+i); + wr32(hw, TXGBE_PSR_MAC_SWC_AD_L, 0); + wr32(hw, TXGBE_PSR_MAC_SWC_AD_H, 0); + } + + /* Add the new addresses */ + for (i = 0; i < addr_count; i++) { + DEBUGOUT(" Adding the secondary addresses:\n"); + addr = next(hw, &addr_list, &vmdq); + txgbe_add_uc_addr(hw, addr, vmdq); + } + + if (hw->addr_ctrl.overflow_promisc) { + /* enable promisc if not already in overflow or set by user */ + if (!old_promisc_setting && !hw->addr_ctrl.user_set_promisc) { + DEBUGOUT(" Entering address overflow promisc mode\n"); + wr32m(hw, TXGBE_PSR_CTL, + TXGBE_PSR_CTL_UPE, TXGBE_PSR_CTL_UPE); + } + } else { + /* only disable if set by overflow, not by user */ + if (old_promisc_setting && !hw->addr_ctrl.user_set_promisc) { + DEBUGOUT(" Leaving address overflow promisc mode\n"); + wr32m(hw, TXGBE_PSR_CTL, + TXGBE_PSR_CTL_UPE, 0); + } + } + + DEBUGOUT("txgbe_update_uc_addr_list Complete\n"); + return 0; +} + +/** + * txgbe_mta_vector - Determines bit-vector in multicast table to set + * @hw: pointer to hardware structure + * @mc_addr: the multicast address + * + * Extracts the 12 bits, from a multicast address, to determine which + * bit-vector to set in the multicast table. The hardware uses 12 bits, from + * incoming rx multicast addresses, to determine the bit-vector to check in + * the MTA. 
Which of the 4 combinations of 12 bits the hardware uses is set
+ * by the MO field of the MCSTCTRL. The MO field is set during initialization
+ * to mc_filter_type.
+ **/
+STATIC s32 txgbe_mta_vector(struct txgbe_hw *hw, u8 *mc_addr)
+{
+	u32 vector = 0;
+
+	DEBUGFUNC("\n");
+
+	switch (hw->mac.mc_filter_type) {
+	case 0:   /* use bits [47:36] of the address */
+		vector = ((mc_addr[4] >> 4) | (((u16)mc_addr[5]) << 4));
+		break;
+	case 1:   /* use bits [46:35] of the address */
+		vector = ((mc_addr[4] >> 3) | (((u16)mc_addr[5]) << 5));
+		break;
+	case 2:   /* use bits [45:34] of the address */
+		vector = ((mc_addr[4] >> 2) | (((u16)mc_addr[5]) << 6));
+		break;
+	case 3:   /* use bits [43:32] of the address */
+		vector = ((mc_addr[4]) | (((u16)mc_addr[5]) << 8));
+		break;
+	default:  /* Invalid mc_filter_type */
+		DEBUGOUT("MC filter type param set incorrectly\n");
+		ASSERT(0);
+		break;
+	}
+
+	/* vector can only be 12 bits or the boundary will be exceeded */
+	vector &= 0xFFF;
+	return vector;
+}
+
+/**
+ * txgbe_set_mta - Set bit-vector in multicast table
+ * @hw: pointer to hardware structure
+ * @mc_addr: Multicast address
+ *
+ * Sets the bit-vector in the multicast table.
+ **/
+void txgbe_set_mta(struct txgbe_hw *hw, u8 *mc_addr)
+{
+	u32 vector;
+	u32 vector_bit;
+	u32 vector_reg;
+
+	DEBUGFUNC("\n");
+
+	hw->addr_ctrl.mta_in_use++;
+
+	vector = txgbe_mta_vector(hw, mc_addr);
+	DEBUGOUT1(" bit-vector = 0x%03X\n", vector);
+
+	/*
+	 * The MTA is a register array of 128 32-bit registers. It is treated
+	 * like an array of 4096 bits. We want to set bit
+	 * BitArray[vector_value]. So we figure out what register the bit is
+	 * in, read it, OR in the new bit, then write back the new value. The
+	 * register is determined by the upper 7 bits of the vector value and
+	 * the bit within that register is determined by the lower 5 bits of
+	 * the value.
+ */ + vector_reg = (vector >> 5) & 0x7F; + vector_bit = vector & 0x1F; + hw->mac.mta_shadow[vector_reg] |= (1 << vector_bit); +} + +/** + * txgbe_update_mc_addr_list - Updates MAC list of multicast addresses + * @hw: pointer to hardware structure + * @mc_addr_list: the list of new multicast addresses + * @mc_addr_count: number of addresses + * @next: iterator function to walk the multicast address list + * @clear: flag, when set clears the table beforehand + * + * When the clear flag is set, the given list replaces any existing list. + * Hashes the given addresses into the multicast table. + **/ +s32 txgbe_update_mc_addr_list(struct txgbe_hw *hw, u8 *mc_addr_list, + u32 mc_addr_count, txgbe_mc_addr_itr next, + bool clear) +{ + u32 i; + u32 vmdq; + u32 psrctl; + + DEBUGFUNC("\n"); + + /* + * Set the new number of MC addresses that we are being requested to + * use. + */ + hw->addr_ctrl.num_mc_addrs = mc_addr_count; + hw->addr_ctrl.mta_in_use = 0; + + /* Clear mta_shadow */ + if (clear) { + DEBUGOUT(" Clearing MTA\n"); + memset(&hw->mac.mta_shadow, 0, sizeof(hw->mac.mta_shadow)); + } + + /* Update mta_shadow */ + for (i = 0; i < mc_addr_count; i++) { + DEBUGOUT(" Adding the multicast addresses:\n"); + txgbe_set_mta(hw, next(hw, &mc_addr_list, &vmdq)); + } + + /* Enable mta */ + for (i = 0; i < hw->mac.mcft_size; i++) + wr32a(hw, TXGBE_PSR_MC_TBL(0), i, + hw->mac.mta_shadow[i]); + + if (hw->addr_ctrl.mta_in_use > 0) { + psrctl = rd32(hw, TXGBE_PSR_CTL); + psrctl &= ~(TXGBE_PSR_CTL_MO | TXGBE_PSR_CTL_MFE); + psrctl |= TXGBE_PSR_CTL_MFE | + (hw->mac.mc_filter_type << TXGBE_PSR_CTL_MO_SHIFT); + wr32(hw, TXGBE_PSR_CTL, psrctl); + } + + DEBUGOUT("txgbe_update_mc_addr_list Complete\n"); + return 0; +} + +/** + * txgbe_enable_mc - Enable multicast address in RAR + * @hw: pointer to hardware structure + * + * Enables multicast address in RAR and the use of the multicast hash table. 
+ **/ +s32 txgbe_enable_mc(struct txgbe_hw *hw) +{ + struct txgbe_addr_filter_info *a = &hw->addr_ctrl; + u32 psrctl; + + DEBUGFUNC("\n"); + + if (a->mta_in_use > 0) { + psrctl = rd32(hw, TXGBE_PSR_CTL); + psrctl &= ~(TXGBE_PSR_CTL_MO | TXGBE_PSR_CTL_MFE); + psrctl |= TXGBE_PSR_CTL_MFE | + (hw->mac.mc_filter_type << TXGBE_PSR_CTL_MO_SHIFT); + wr32(hw, TXGBE_PSR_CTL, psrctl); + } + + return 0; +} + +/** + * txgbe_disable_mc - Disable multicast address in RAR + * @hw: pointer to hardware structure + * + * Disables multicast address in RAR and the use of the multicast hash table. + **/ +s32 txgbe_disable_mc(struct txgbe_hw *hw) +{ + struct txgbe_addr_filter_info *a = &hw->addr_ctrl; + u32 psrctl; + DEBUGFUNC("\n"); + + if (a->mta_in_use > 0) { + psrctl = rd32(hw, TXGBE_PSR_CTL); + psrctl &= ~(TXGBE_PSR_CTL_MO | TXGBE_PSR_CTL_MFE); + psrctl |= hw->mac.mc_filter_type << TXGBE_PSR_CTL_MO_SHIFT; + wr32(hw, TXGBE_PSR_CTL, psrctl); + } + + return 0; +} + +/** + * txgbe_fc_enable - Enable flow control + * @hw: pointer to hardware structure + * + * Enable flow control according to the current settings. 
+ **/ +s32 txgbe_fc_enable(struct txgbe_hw *hw) +{ + s32 ret_val = 0; + u32 mflcn_reg, fccfg_reg; + u32 reg; + u32 fcrtl, fcrth; + int i; + + DEBUGFUNC("\n"); + + /* Validate the water mark configuration */ + if (!hw->fc.pause_time) { + ret_val = TXGBE_ERR_INVALID_LINK_SETTINGS; + goto out; + } + + /* Low water mark of zero causes XOFF floods */ + for (i = 0; i < TXGBE_DCB_MAX_TRAFFIC_CLASS; i++) { + if ((hw->fc.current_mode & txgbe_fc_tx_pause) && + hw->fc.high_water[i]) { + if (!hw->fc.low_water[i] || + hw->fc.low_water[i] >= hw->fc.high_water[i]) { + DEBUGOUT("Invalid water mark configuration\n"); + ret_val = TXGBE_ERR_INVALID_LINK_SETTINGS; + goto out; + } + } + } + + /* Negotiate the fc mode to use */ + txgbe_fc_autoneg(hw); + + /* Disable any previous flow control settings */ + mflcn_reg = rd32(hw, TXGBE_MAC_RX_FLOW_CTRL); + mflcn_reg &= ~(TXGBE_MAC_RX_FLOW_CTRL_PFCE | + TXGBE_MAC_RX_FLOW_CTRL_RFE); + + fccfg_reg = rd32(hw, TXGBE_RDB_RFCC); + fccfg_reg &= ~(TXGBE_RDB_RFCC_RFCE_802_3X | + TXGBE_RDB_RFCC_RFCE_PRIORITY); + + /* + * The possible values of fc.current_mode are: + * 0: Flow control is completely disabled + * 1: Rx flow control is enabled (we can receive pause frames, + * but not send pause frames). + * 2: Tx flow control is enabled (we can send pause frames but + * we do not support receiving pause frames). + * 3: Both Rx and Tx flow control (symmetric) are enabled. + * other: Invalid. + */ + switch (hw->fc.current_mode) { + case txgbe_fc_none: + /* + * Flow control is disabled by software override or autoneg. + * The code below will actually disable it in the HW. + */ + break; + case txgbe_fc_rx_pause: + /* + * Rx Flow control is enabled and Tx Flow control is + * disabled by software override. Since there really + * isn't a way to advertise that we are capable of RX + * Pause ONLY, we will advertise that we support both + * symmetric and asymmetric Rx PAUSE. Later, we will + * disable the adapter's ability to send PAUSE frames. 
+		 */
+		mflcn_reg |= TXGBE_MAC_RX_FLOW_CTRL_RFE;
+		break;
+	case txgbe_fc_tx_pause:
+		/*
+		 * Tx Flow control is enabled, and Rx Flow control is
+		 * disabled by software override.
+		 */
+		fccfg_reg |= TXGBE_RDB_RFCC_RFCE_802_3X;
+		break;
+	case txgbe_fc_full:
+		/* Flow control (both Rx and Tx) is enabled by SW override. */
+		mflcn_reg |= TXGBE_MAC_RX_FLOW_CTRL_RFE;
+		fccfg_reg |= TXGBE_RDB_RFCC_RFCE_802_3X;
+		break;
+	default:
+		ERROR_REPORT1(TXGBE_ERROR_ARGUMENT,
+			      "Flow control param set incorrectly\n");
+		ret_val = TXGBE_ERR_CONFIG;
+		goto out;
+	}
+
+	/* Set 802.3x based flow control settings. */
+	wr32(hw, TXGBE_MAC_RX_FLOW_CTRL, mflcn_reg);
+	wr32(hw, TXGBE_RDB_RFCC, fccfg_reg);
+
+	/* Set up and enable Rx high/low water mark thresholds, enable XON. */
+	for (i = 0; i < TXGBE_DCB_MAX_TRAFFIC_CLASS; i++) {
+		if ((hw->fc.current_mode & txgbe_fc_tx_pause) &&
+		    hw->fc.high_water[i]) {
+			fcrtl = (hw->fc.low_water[i] << 10) |
+				TXGBE_RDB_RFCL_XONE;
+			wr32(hw, TXGBE_RDB_RFCL(i), fcrtl);
+			fcrth = (hw->fc.high_water[i] << 10) |
+				TXGBE_RDB_RFCH_XOFFE;
+		} else {
+			wr32(hw, TXGBE_RDB_RFCL(i), 0);
+			/*
+			 * In order to prevent Tx hangs when the internal Tx
+			 * switch is enabled we must set the high water mark
+			 * to the Rx packet buffer size - 24KB. This allows
+			 * the Tx switch to function even under heavy Rx
+			 * workloads.
+ */ + fcrth = rd32(hw, TXGBE_RDB_PB_SZ(i)) - 24576; + } + + wr32(hw, TXGBE_RDB_RFCH(i), fcrth); + } + + /* Configure pause time (2 TCs per register) */ + reg = hw->fc.pause_time * 0x00010001; + for (i = 0; i < (TXGBE_DCB_MAX_TRAFFIC_CLASS / 2); i++) + wr32(hw, TXGBE_RDB_RFCV(i), reg); + + /* Configure flow control refresh threshold value */ + wr32(hw, TXGBE_RDB_RFCRT, hw->fc.pause_time / 2); + +out: + return ret_val; +} + +/** + * txgbe_negotiate_fc - Negotiate flow control + * @hw: pointer to hardware structure + * @adv_reg: flow control advertised settings + * @lp_reg: link partner's flow control settings + * @adv_sym: symmetric pause bit in advertisement + * @adv_asm: asymmetric pause bit in advertisement + * @lp_sym: symmetric pause bit in link partner advertisement + * @lp_asm: asymmetric pause bit in link partner advertisement + * + * Find the intersection between advertised settings and link partner's + * advertised settings + **/ +STATIC s32 txgbe_negotiate_fc(struct txgbe_hw *hw, u32 adv_reg, u32 lp_reg, + u32 adv_sym, u32 adv_asm, u32 lp_sym, u32 lp_asm) +{ + if ((!(adv_reg)) || (!(lp_reg))) { + ERROR_REPORT3(TXGBE_ERROR_UNSUPPORTED, + "Local or link partner's advertised flow control " + "settings are NULL. Local: %x, link partner: %x\n", + adv_reg, lp_reg); + return TXGBE_ERR_FC_NOT_NEGOTIATED; + } + + if ((adv_reg & adv_sym) && (lp_reg & lp_sym)) { + /* + * Now we need to check if the user selected Rx ONLY + * of pause frames. In this case, we had to advertise + * FULL flow control because we could not advertise RX + * ONLY. Hence, we must now check to see if we need to + * turn OFF the TRANSMISSION of PAUSE frames. 
+ */ + if (hw->fc.requested_mode == txgbe_fc_full) { + hw->fc.current_mode = txgbe_fc_full; + DEBUGOUT("Flow Control = FULL.\n"); + } else { + hw->fc.current_mode = txgbe_fc_rx_pause; + DEBUGOUT("Flow Control=RX PAUSE frames only\n"); + } + } else if (!(adv_reg & adv_sym) && (adv_reg & adv_asm) && + (lp_reg & lp_sym) && (lp_reg & lp_asm)) { + hw->fc.current_mode = txgbe_fc_tx_pause; + DEBUGOUT("Flow Control = TX PAUSE frames only.\n"); + } else if ((adv_reg & adv_sym) && (adv_reg & adv_asm) && + !(lp_reg & lp_sym) && (lp_reg & lp_asm)) { + hw->fc.current_mode = txgbe_fc_rx_pause; + DEBUGOUT("Flow Control = RX PAUSE frames only.\n"); + } else { + hw->fc.current_mode = txgbe_fc_none; + DEBUGOUT("Flow Control = NONE.\n"); + } + return 0; +} + +/** + * txgbe_fc_autoneg_fiber - Enable flow control on 1 gig fiber + * @hw: pointer to hardware structure + * + * Enable flow control according on 1 gig fiber. + **/ +STATIC s32 txgbe_fc_autoneg_fiber(struct txgbe_hw *hw) +{ + u32 pcs_anadv_reg, pcs_lpab_reg; + s32 ret_val = TXGBE_ERR_FC_NOT_NEGOTIATED; + + pcs_anadv_reg = txgbe_rd32_epcs(hw, TXGBE_SR_MII_MMD_AN_ADV); + pcs_lpab_reg = txgbe_rd32_epcs(hw, TXGBE_SR_MII_MMD_LP_BABL); + + ret_val = txgbe_negotiate_fc(hw, pcs_anadv_reg, + pcs_lpab_reg, + TXGBE_SR_MII_MMD_AN_ADV_PAUSE_SYM, + TXGBE_SR_MII_MMD_AN_ADV_PAUSE_ASM, + TXGBE_SR_MII_MMD_AN_ADV_PAUSE_SYM, + TXGBE_SR_MII_MMD_AN_ADV_PAUSE_ASM); + + return ret_val; +} + +/** + * txgbe_fc_autoneg_backplane - Enable flow control IEEE clause 37 + * @hw: pointer to hardware structure + * + * Enable flow control according to IEEE clause 37. 
+ **/ +STATIC s32 txgbe_fc_autoneg_backplane(struct txgbe_hw *hw) +{ + u32 anlp1_reg, autoc_reg; + s32 ret_val = TXGBE_ERR_FC_NOT_NEGOTIATED; + + /* + * Read the 10g AN autoc and LP ability registers and resolve + * local flow control settings accordingly + */ + autoc_reg = txgbe_rd32_epcs(hw, TXGBE_SR_AN_MMD_ADV_REG1); + anlp1_reg = txgbe_rd32_epcs(hw, TXGBE_SR_AN_MMD_LP_ABL1); + + ret_val = txgbe_negotiate_fc(hw, autoc_reg, + anlp1_reg, TXGBE_SR_AN_MMD_ADV_REG1_PAUSE_SYM, + TXGBE_SR_AN_MMD_ADV_REG1_PAUSE_ASM, + TXGBE_SR_AN_MMD_ADV_REG1_PAUSE_SYM, + TXGBE_SR_AN_MMD_ADV_REG1_PAUSE_ASM); + + return ret_val; +} + +/** + * txgbe_fc_autoneg_copper - Enable flow control IEEE clause 37 + * @hw: pointer to hardware structure + * + * Enable flow control according to IEEE clause 37. + **/ +STATIC s32 txgbe_fc_autoneg_copper(struct txgbe_hw *hw) +{ + u8 technology_ability_reg = 0; + u8 lp_technology_ability_reg = 0; + + txgbe_get_phy_advertised_pause(hw, &technology_ability_reg); + txgbe_get_lp_advertised_pause(hw, &lp_technology_ability_reg); + + return txgbe_negotiate_fc(hw, (u32)technology_ability_reg, + (u32)lp_technology_ability_reg, + TXGBE_TAF_SYM_PAUSE, TXGBE_TAF_ASM_PAUSE, + TXGBE_TAF_SYM_PAUSE, TXGBE_TAF_ASM_PAUSE); +} + +/** + * txgbe_fc_autoneg - Configure flow control + * @hw: pointer to hardware structure + * + * Compares our advertised flow control capabilities to those advertised by + * our link partner, and determines the proper flow control mode to use. + **/ +void txgbe_fc_autoneg(struct txgbe_hw *hw) +{ + s32 ret_val = TXGBE_ERR_FC_NOT_NEGOTIATED; + u32 speed; + bool link_up; + + DEBUGFUNC("\n"); + + /* + * AN should have completed when the cable was plugged in. + * Look for reasons to bail out. Bail out if: + * - FC autoneg is disabled, or if + * - link is not up. 
+ */ + if (hw->fc.disable_fc_autoneg) { + ERROR_REPORT1(TXGBE_ERROR_UNSUPPORTED, + "Flow control autoneg is disabled"); + goto out; + } + + TCALL(hw, mac.ops.check_link, &speed, &link_up, false); + if (!link_up) { + ERROR_REPORT1(TXGBE_ERROR_SOFTWARE, "The link is down"); + goto out; + } + + switch (hw->phy.media_type) { + /* Autoneg flow control on fiber adapters */ + case txgbe_media_type_fiber: + if (speed == TXGBE_LINK_SPEED_1GB_FULL) + ret_val = txgbe_fc_autoneg_fiber(hw); + break; + + /* Autoneg flow control on backplane adapters */ + case txgbe_media_type_backplane: + ret_val = txgbe_fc_autoneg_backplane(hw); + break; + + /* Autoneg flow control on copper adapters */ + case txgbe_media_type_copper: + if (txgbe_device_supports_autoneg_fc(hw)) + ret_val = txgbe_fc_autoneg_copper(hw); + break; + + default: + break; + } + +out: + if (ret_val == 0) { + hw->fc.fc_was_autonegged = true; + } else { + hw->fc.fc_was_autonegged = false; + hw->fc.current_mode = hw->fc.requested_mode; + } +} + +/** + * txgbe_disable_pcie_master - Disable PCI-express master access + * @hw: pointer to hardware structure + * + * Disables PCI-Express master access and verifies there are no pending + * requests. TXGBE_ERR_MASTER_REQUESTS_PENDING is returned if master disable + * bit hasn't caused the master requests to be disabled, else 0 + * is returned signifying master requests disabled. 
+ **/
+s32 txgbe_disable_pcie_master(struct txgbe_hw *hw)
+{
+	s32 status = 0;
+	u32 i;
+	struct txgbe_adapter *adapter = hw->back;
+	unsigned int num_vfs = adapter->num_vfs;
+	u16 dev_ctl;
+	u32 vf_bme_clear = 0;
+
+	DEBUGFUNC("\n");
+
+	/* Always clear bus master enable to block any future transactions */
+	pci_clear_master(((struct txgbe_adapter *)hw->back)->pdev);
+
+	/* Exit if master requests are blocked */
+	if (!(rd32(hw, TXGBE_PX_TRANSACTION_PENDING)) ||
+	    TXGBE_REMOVED(hw->hw_addr))
+		goto out;
+
+	/* BME disable handshake will not be finished if any VF BME is 0 */
+	for (i = 0; i < num_vfs; i++) {
+		struct pci_dev *vfdev = adapter->vfinfo[i].vfdev;
+
+		if (!vfdev)
+			continue;
+		pci_read_config_word(vfdev, 0x4, &dev_ctl);
+		if ((dev_ctl & 0x4) == 0) {
+			vf_bme_clear = 1;
+			break;
+		}
+	}
+
+	/* Poll for master request bit to clear */
+	for (i = 0; i < TXGBE_PCI_MASTER_DISABLE_TIMEOUT; i++) {
+		usec_delay(100);
+		if (!(rd32(hw, TXGBE_PX_TRANSACTION_PENDING)))
+			goto out;
+	}
+
+	if (!vf_bme_clear) {
+		ERROR_REPORT1(TXGBE_ERROR_POLLING,
+			      "PCIe transaction pending bit did not clear.\n");
+		status = TXGBE_ERR_MASTER_REQUESTS_PENDING;
+	}
+
+out:
+	return status;
+}
+
+/**
+ * txgbe_acquire_swfw_sync - Acquire SWFW semaphore
+ * @hw: pointer to hardware structure
+ * @mask: Mask to specify which semaphore to acquire
+ *
+ * Acquires the SWFW semaphore through the GSSR register for the specified
+ * function (CSR, PHY0, PHY1, EEPROM, Flash)
+ **/
+s32 txgbe_acquire_swfw_sync(struct txgbe_hw *hw, u32 mask)
+{
+	u32 gssr = 0;
+	u32 swmask = mask;
+	u32 fwmask = mask << 16;
+	u32 timeout = 200;
+	u32 i;
+
+	for (i = 0; i < timeout; i++) {
+		/*
+		 * SW NVM semaphore bit is used for access to all
+		 * SW_FW_SYNC bits (not just NVM)
+		 */
+		if (txgbe_get_eeprom_semaphore(hw))
+			return TXGBE_ERR_SWFW_SYNC;
+
+		if (txgbe_check_mng_access(hw)) {
+			gssr = rd32(hw, TXGBE_MNG_SWFW_SYNC);
+			if (!(gssr & (fwmask | swmask))) {
+				gssr |= swmask;
+				wr32(hw, TXGBE_MNG_SWFW_SYNC,
gssr); + txgbe_release_eeprom_semaphore(hw); + return 0; + } else { + /* Resource is currently in use by FW or SW */ + txgbe_release_eeprom_semaphore(hw); + msec_delay(5); + } + } + } + + /* If time expired clear the bits holding the lock and retry */ + if (gssr & (fwmask | swmask)) + txgbe_release_swfw_sync(hw, gssr & (fwmask | swmask)); + + msec_delay(5); + return TXGBE_ERR_SWFW_SYNC; +} + +/** + * txgbe_release_swfw_sync - Release SWFW semaphore + * @hw: pointer to hardware structure + * @mask: Mask to specify which semaphore to release + * + * Releases the SWFW semaphore through the GSSR register for the specified + * function (CSR, PHY0, PHY1, EEPROM, Flash) + **/ +void txgbe_release_swfw_sync(struct txgbe_hw *hw, u32 mask) +{ + txgbe_get_eeprom_semaphore(hw); + if (txgbe_check_mng_access(hw)) + wr32m(hw, TXGBE_MNG_SWFW_SYNC, mask, 0); + + txgbe_release_eeprom_semaphore(hw); +} + +/** + * txgbe_disable_sec_rx_path - Stops the receive data path + * @hw: pointer to hardware structure + * + * Stops the receive data path and waits for the HW to internally empty + * the Rx security block + **/ +s32 txgbe_disable_sec_rx_path(struct txgbe_hw *hw) +{ +#define TXGBE_MAX_SECRX_POLL 40 + + int i; + int secrxreg; + + DEBUGFUNC("\n"); + + wr32m(hw, TXGBE_RSC_CTL, + TXGBE_RSC_CTL_RX_DIS, TXGBE_RSC_CTL_RX_DIS); + for (i = 0; i < TXGBE_MAX_SECRX_POLL; i++) { + secrxreg = rd32(hw, TXGBE_RSC_ST); + if (secrxreg & TXGBE_RSC_ST_RSEC_RDY) + break; + else + /* Use interrupt-safe sleep just in case */ + usec_delay(1000); + } + + /* For informational purposes only */ + if (i >= TXGBE_MAX_SECRX_POLL) + DEBUGOUT("Rx unit being enabled before security " + "path fully disabled. Continuing with init.\n"); + + return 0; +} + +/** + * txgbe_enable_sec_rx_path - Enables the receive data path + * @hw: pointer to hardware structure + * + * Enables the receive data path. 
+ **/ +s32 txgbe_enable_sec_rx_path(struct txgbe_hw *hw) +{ + DEBUGFUNC("\n"); + + wr32m(hw, TXGBE_RSC_CTL, + TXGBE_RSC_CTL_RX_DIS, 0); + TXGBE_WRITE_FLUSH(hw); + + return 0; +} + +/** + * txgbe_get_san_mac_addr_offset - Get SAN MAC address offset from the EEPROM + * @hw: pointer to hardware structure + * @san_mac_offset: SAN MAC address offset + * + * This function will read the EEPROM location for the SAN MAC address + * pointer, and returns the value at that location. This is used in both + * get and set mac_addr routines. + **/ +STATIC s32 txgbe_get_san_mac_addr_offset(struct txgbe_hw *hw, + u16 *san_mac_offset) +{ + s32 ret_val; + + DEBUGFUNC("\n"); + + /* + * First read the EEPROM pointer to see if the MAC addresses are + * available. + */ + ret_val = TCALL(hw, eeprom.ops.read, + hw->eeprom.sw_region_offset + TXGBE_SAN_MAC_ADDR_PTR, + san_mac_offset); + if (ret_val) { + ERROR_REPORT2(TXGBE_ERROR_INVALID_STATE, + "eeprom at offset %d failed", + TXGBE_SAN_MAC_ADDR_PTR); + } + + return ret_val; +} + +/** + * txgbe_get_san_mac_addr - SAN MAC address retrieval from the EEPROM + * @hw: pointer to hardware structure + * @san_mac_addr: SAN MAC address + * + * Reads the SAN MAC address from the EEPROM, if it's available. This is + * per-port, so set_lan_id() must be called before reading the addresses. + * set_lan_id() is called by identify_sfp(), but this cannot be relied + * upon for non-SFP connections, so we must call it here. + **/ +s32 txgbe_get_san_mac_addr(struct txgbe_hw *hw, u8 *san_mac_addr) +{ + u16 san_mac_data, san_mac_offset; + u8 i; + s32 ret_val; + + DEBUGFUNC("\n"); + + /* + * First read the EEPROM pointer to see if the MAC addresses are + * available. If they're not, no point in calling set_lan_id() here. + */ + ret_val = txgbe_get_san_mac_addr_offset(hw, &san_mac_offset); + if (ret_val || san_mac_offset == 0 || san_mac_offset == 0xFFFF) + goto san_mac_addr_out; + + /* apply the port offset to the address offset */ + (hw->bus.func) ? 
(san_mac_offset += TXGBE_SAN_MAC_ADDR_PORT1_OFFSET) : + (san_mac_offset += TXGBE_SAN_MAC_ADDR_PORT0_OFFSET); + for (i = 0; i < 3; i++) { + ret_val = TCALL(hw, eeprom.ops.read, san_mac_offset, + &san_mac_data); + if (ret_val) { + ERROR_REPORT2(TXGBE_ERROR_INVALID_STATE, + "eeprom read at offset %d failed", + san_mac_offset); + goto san_mac_addr_out; + } + san_mac_addr[i * 2] = (u8)(san_mac_data); + san_mac_addr[i * 2 + 1] = (u8)(san_mac_data >> 8); + san_mac_offset++; + } + return 0; + +san_mac_addr_out: + /* + * No addresses available in this EEPROM. It's not an + * error though, so just wipe the local address and return. + */ + for (i = 0; i < 6; i++) + san_mac_addr[i] = 0xFF; + return 0; +} + +/** + * txgbe_set_san_mac_addr - Write the SAN MAC address to the EEPROM + * @hw: pointer to hardware structure + * @san_mac_addr: SAN MAC address + * + * Write a SAN MAC address to the EEPROM. + **/ +s32 txgbe_set_san_mac_addr(struct txgbe_hw *hw, u8 *san_mac_addr) +{ + s32 ret_val; + u16 san_mac_data, san_mac_offset; + u8 i; + + DEBUGFUNC("\n"); + + /* Look for SAN mac address pointer. If not defined, return */ + ret_val = txgbe_get_san_mac_addr_offset(hw, &san_mac_offset); + if (ret_val || san_mac_offset == 0 || san_mac_offset == 0xFFFF) + return TXGBE_ERR_NO_SAN_ADDR_PTR; + + /* Apply the port offset to the address offset */ + (hw->bus.func) ? 
(san_mac_offset += TXGBE_SAN_MAC_ADDR_PORT1_OFFSET) :
+ (san_mac_offset += TXGBE_SAN_MAC_ADDR_PORT0_OFFSET);
+
+ for (i = 0; i < 3; i++) {
+ san_mac_data = (u16)((u16)(san_mac_addr[i * 2 + 1]) << 8);
+ san_mac_data |= (u16)(san_mac_addr[i * 2]);
+ TCALL(hw, eeprom.ops.write, san_mac_offset, san_mac_data);
+ san_mac_offset++;
+ }
+
+ return 0;
+}
+
+/**
+ * txgbe_insert_mac_addr - Find a RAR for this mac address
+ * @hw: pointer to hardware structure
+ * @addr: Address to put into receive address register
+ * @vmdq: VMDq pool to assign
+ *
+ * Puts an Ethernet address into a receive address register, or
+ * finds the rar that it is already in; adds to the pool list
+ **/
+s32 txgbe_insert_mac_addr(struct txgbe_hw *hw, u8 *addr, u32 vmdq)
+{
+ static const u32 NO_EMPTY_RAR_FOUND = 0xFFFFFFFF;
+ u32 first_empty_rar = NO_EMPTY_RAR_FOUND;
+ u32 rar;
+ u32 rar_low, rar_high;
+ u32 addr_low, addr_high;
+
+ DEBUGFUNC("\n");
+
+ /* swap bytes for HW little endian */
+ addr_low = addr[5] | (addr[4] << 8)
+ | (addr[3] << 16)
+ | (addr[2] << 24);
+ addr_high = addr[1] | (addr[0] << 8);
+
+ /*
+ * Either find the mac_id in rar or find the first empty space.
+ * rar_highwater points to just after the highest currently used
+ * rar in order to shorten the search. It grows when we add a new
+ * rar to the top.
+ */ + for (rar = 0; rar < hw->mac.rar_highwater; rar++) { + wr32(hw, TXGBE_PSR_MAC_SWC_IDX, rar); + rar_high = rd32(hw, TXGBE_PSR_MAC_SWC_AD_H); + + if (((TXGBE_PSR_MAC_SWC_AD_H_AV & rar_high) == 0) + && first_empty_rar == NO_EMPTY_RAR_FOUND) { + first_empty_rar = rar; + } else if ((rar_high & 0xFFFF) == addr_high) { + rar_low = rd32(hw, TXGBE_PSR_MAC_SWC_AD_L); + if (rar_low == addr_low) + break; /* found it already in the rars */ + } + } + + if (rar < hw->mac.rar_highwater) { + /* already there so just add to the pool bits */ + TCALL(hw, mac.ops.set_vmdq, rar, vmdq); + } else if (first_empty_rar != NO_EMPTY_RAR_FOUND) { + /* stick it into first empty RAR slot we found */ + rar = first_empty_rar; + TCALL(hw, mac.ops.set_rar, rar, addr, vmdq, + TXGBE_PSR_MAC_SWC_AD_H_AV); + } else if (rar == hw->mac.rar_highwater) { + /* add it to the top of the list and inc the highwater mark */ + TCALL(hw, mac.ops.set_rar, rar, addr, vmdq, + TXGBE_PSR_MAC_SWC_AD_H_AV); + hw->mac.rar_highwater++; + } else if (rar >= hw->mac.num_rar_entries) { + return TXGBE_ERR_INVALID_MAC_ADDR; + } + + /* + * If we found rar[0], make sure the default pool bit (we use pool 0) + * remains cleared to be sure default pool packets will get delivered + */ + if (rar == 0) + TCALL(hw, mac.ops.clear_vmdq, rar, 0); + + return rar; +} + +/** + * txgbe_clear_vmdq - Disassociate a VMDq pool index from a rx address + * @hw: pointer to hardware struct + * @rar: receive address register index to disassociate + * @vmdq: VMDq pool index to remove from the rar + **/ +s32 txgbe_clear_vmdq(struct txgbe_hw *hw, u32 rar, u32 vmdq) +{ + u32 mpsar_lo, mpsar_hi; + u32 rar_entries = hw->mac.num_rar_entries; + + DEBUGFUNC("\n"); + UNREFERENCED_PARAMETER(vmdq); + + /* Make sure we are using a valid rar index range */ + if (rar >= rar_entries) { + ERROR_REPORT2(TXGBE_ERROR_ARGUMENT, + "RAR index %d is out of range.\n", rar); + return TXGBE_ERR_INVALID_ARGUMENT; + } + + wr32(hw, TXGBE_PSR_MAC_SWC_IDX, rar); + mpsar_lo = 
rd32(hw, TXGBE_PSR_MAC_SWC_VM_L);
+ mpsar_hi = rd32(hw, TXGBE_PSR_MAC_SWC_VM_H);
+
+ if (TXGBE_REMOVED(hw->hw_addr))
+ goto done;
+
+ if (!mpsar_lo && !mpsar_hi)
+ goto done;
+
+ /* was that the last pool using this rar? */
+ if (mpsar_lo == 0 && mpsar_hi == 0 && rar != 0)
+ TCALL(hw, mac.ops.clear_rar, rar);
+done:
+ return 0;
+}
+
+/**
+ * txgbe_set_vmdq - Associate a VMDq pool index with a rx address
+ * @hw: pointer to hardware struct
+ * @rar: receive address register index to associate with a VMDq index
+ * @pool: VMDq pool index
+ **/
+s32 txgbe_set_vmdq(struct txgbe_hw *hw, u32 rar, u32 pool)
+{
+ u32 rar_entries = hw->mac.num_rar_entries;
+
+ DEBUGFUNC("\n");
+ UNREFERENCED_PARAMETER(pool);
+
+ /* Make sure we are using a valid rar index range */
+ if (rar >= rar_entries) {
+ ERROR_REPORT2(TXGBE_ERROR_ARGUMENT,
+ "RAR index %d is out of range.\n", rar);
+ return TXGBE_ERR_INVALID_ARGUMENT;
+ }
+
+ return 0;
+}
+
+/**
+ * txgbe_set_vmdq_san_mac - Associate default VMDq pool index with a rx address
+ * @hw: pointer to hardware struct
+ * @vmdq: VMDq pool index
+ *
+ * This function should only be invoked in IOV mode.
+ * In IOV mode, the default pool is the next pool after the number of
+ * VFs advertised, not pool 0.
+ * The MPSAR table needs to be updated for the SAN_MAC RAR
+ * [hw->mac.san_mac_rar_index].
+ **/
+s32 txgbe_set_vmdq_san_mac(struct txgbe_hw *hw, u32 vmdq)
+{
+ u32 rar = hw->mac.san_mac_rar_index;
+
+ DEBUGFUNC("\n");
+
+ wr32(hw, TXGBE_PSR_MAC_SWC_IDX, rar);
+ if (vmdq < 32) {
+ wr32(hw, TXGBE_PSR_MAC_SWC_VM_L, 1 << vmdq);
+ wr32(hw, TXGBE_PSR_MAC_SWC_VM_H, 0);
+ } else {
+ wr32(hw, TXGBE_PSR_MAC_SWC_VM_L, 0);
+ wr32(hw, TXGBE_PSR_MAC_SWC_VM_H, 1 << (vmdq - 32));
+ }
+
+ return 0;
+}
+
+/**
+ * txgbe_init_uta_tables - Initialize the Unicast Table Array
+ * @hw: pointer to hardware structure
+ **/
+s32 txgbe_init_uta_tables(struct txgbe_hw *hw)
+{
+ int i;
+
+ DEBUGFUNC("\n");
+ DEBUGOUT(" Clearing UTA\n");
+
+ for (i = 0; i < 128; i++)
+ wr32(hw, TXGBE_PSR_UC_TBL(i), 0);
+
+ return 0;
+}
+
+/**
+ * txgbe_find_vlvf_slot - find the vlanid or the first empty slot
+ * @hw: pointer to hardware structure
+ * @vlan: VLAN id to write to VLAN filter
+ *
+ * return the VLVF index where this VLAN id should be placed
+ *
+ **/
+s32 txgbe_find_vlvf_slot(struct txgbe_hw *hw, u32 vlan)
+{
+ u32 bits = 0;
+ u32 first_empty_slot = 0;
+ s32 regindex;
+
+ /* short cut the special case */
+ if (vlan == 0)
+ return 0;
+
+ /*
+ * Search for the vlan id in the VLVF entries. Save off the first empty
+ * slot found along the way
+ */
+ for (regindex = 1; regindex < TXGBE_PSR_VLAN_SWC_ENTRIES; regindex++) {
+ wr32(hw, TXGBE_PSR_VLAN_SWC_IDX, regindex);
+ bits = rd32(hw, TXGBE_PSR_VLAN_SWC);
+ if (!bits && !(first_empty_slot))
+ first_empty_slot = regindex;
+ else if ((bits & 0x0FFF) == vlan)
+ break;
+ }
+
+ /*
+ * If regindex is less than TXGBE_PSR_VLAN_SWC_ENTRIES, then we found
+ * the vlan in the VLVF. Else use the first empty VLVF register for
+ * this vlan id.
+ */
+ if (regindex >= TXGBE_PSR_VLAN_SWC_ENTRIES) {
+ if (first_empty_slot)
+ regindex = first_empty_slot;
+ else {
+ ERROR_REPORT1(TXGBE_ERROR_SOFTWARE,
+ "No space in VLVF.\n");
+ regindex = TXGBE_ERR_NO_SPACE;
+ }
+ }
+
+ return regindex;
+}
+
+/**
+ * txgbe_set_vfta - Set VLAN filter table
+ * @hw: pointer to hardware structure
+ * @vlan: VLAN id to write to VLAN filter
+ * @vind: VMDq output index that maps queue to VLAN id in VLVFB
+ * @vlan_on: boolean flag to turn on/off VLAN in VLVF
+ *
+ * Turn on/off specified VLAN in the VLAN filter table.
+ **/
+s32 txgbe_set_vfta(struct txgbe_hw *hw, u32 vlan, u32 vind,
+ bool vlan_on)
+{
+ s32 regindex;
+ u32 bitindex;
+ u32 vfta;
+ u32 targetbit;
+ s32 ret_val = 0;
+ bool vfta_changed = false;
+
+ DEBUGFUNC("\n");
+
+ if (vlan > 4095)
+ return TXGBE_ERR_PARAM;
+
+ /*
+ * this is a 2 part operation - first the VFTA, then the
+ * VLVF and VLVFB if VT Mode is set
+ * We don't write the VFTA until we know the VLVF part succeeded.
+ */
+
+ /* Part 1
+ * The VFTA is a bitstring made up of 128 32-bit registers
+ * that enable the particular VLAN id, much like the MTA:
+ * bits[11-5]: which register
+ * bits[4-0]: which bit in the register
+ */
+ regindex = (vlan >> 5) & 0x7F;
+ bitindex = vlan & 0x1F;
+ targetbit = (1 << bitindex);
+ /* errata 5 */
+ vfta = hw->mac.vft_shadow[regindex];
+ if (vlan_on) {
+ if (!(vfta & targetbit)) {
+ vfta |= targetbit;
+ vfta_changed = true;
+ }
+ } else {
+ if ((vfta & targetbit)) {
+ vfta &= ~targetbit;
+ vfta_changed = true;
+ }
+ }
+
+ /* Part 2
+ * Call txgbe_set_vlvf to set VLVFB and VLVF
+ */
+ ret_val = txgbe_set_vlvf(hw, vlan, vind, vlan_on,
+ &vfta_changed);
+ if (ret_val != 0)
+ return ret_val;
+
+ if (vfta_changed)
+ wr32(hw, TXGBE_PSR_VLAN_TBL(regindex), vfta);
+ /* errata 5 */
+ hw->mac.vft_shadow[regindex] = vfta;
+ return 0;
+}
+
+/**
+ * txgbe_set_vlvf - Set VLAN Pool Filter
+ * @hw: pointer to hardware structure
+ * @vlan: VLAN id to write to VLAN filter
+ * @vind: VMDq output index that maps queue to VLAN id in VLVFB
+ * @vlan_on: boolean flag to turn on/off VLAN in VLVF
+ * @vfta_changed: pointer to boolean flag which indicates whether VFTA
+ * should be changed
+ *
+ * Turn on/off specified bit in VLVF table.
+ **/
+s32 txgbe_set_vlvf(struct txgbe_hw *hw, u32 vlan, u32 vind,
+ bool vlan_on, bool *vfta_changed)
+{
+ u32 vt;
+
+ DEBUGFUNC("\n");
+
+ if (vlan > 4095)
+ return TXGBE_ERR_PARAM;
+
+ /* If VT Mode is set
+ * Either vlan_on
+ * make sure the vlan is in VLVF
+ * set the vind bit in the matching VLVFB
+ * Or !vlan_on
+ * clear the pool bit and possibly the vind
+ */
+ vt = rd32(hw, TXGBE_CFG_PORT_CTL);
+ if (vt & TXGBE_CFG_PORT_CTL_NUM_VT_MASK) {
+ s32 vlvf_index;
+ u32 bits;
+
+ vlvf_index = txgbe_find_vlvf_slot(hw, vlan);
+ if (vlvf_index < 0)
+ return vlvf_index;
+
+ wr32(hw, TXGBE_PSR_VLAN_SWC_IDX, vlvf_index);
+ if (vlan_on) {
+ /* set the pool bit */
+ if (vind < 32) {
+ bits = rd32(hw,
+ TXGBE_PSR_VLAN_SWC_VM_L);
+ bits |= (1 << vind);
+ wr32(hw,
+ TXGBE_PSR_VLAN_SWC_VM_L,
+ bits);
+ } else {
+ bits = rd32(hw,
+ TXGBE_PSR_VLAN_SWC_VM_H);
+ bits |= (1 << (vind - 32));
+ wr32(hw,
+ TXGBE_PSR_VLAN_SWC_VM_H,
+ bits);
+ }
+ } else {
+ /* clear the pool bit */
+ if (vind < 32) {
+ bits = rd32(hw,
+ TXGBE_PSR_VLAN_SWC_VM_L);
+ bits &= ~(1 << vind);
+ wr32(hw,
+ TXGBE_PSR_VLAN_SWC_VM_L,
+ bits);
+ bits |= rd32(hw,
+ TXGBE_PSR_VLAN_SWC_VM_H);
+ } else {
+ bits = rd32(hw,
+ TXGBE_PSR_VLAN_SWC_VM_H);
+ bits &= ~(1 << (vind - 32));
+ wr32(hw,
+ TXGBE_PSR_VLAN_SWC_VM_H,
+ bits);
+ bits |= rd32(hw,
+ TXGBE_PSR_VLAN_SWC_VM_L);
+ }
+ }
+
+ /*
+ * If there are still bits set in the VLVFB registers
+ * for the VLAN ID indicated we need to see if the
+ * caller is requesting that we clear the VFTA entry bit.
+ * If the caller has requested that we clear the VFTA
+ * entry bit but there are still pools/VFs using this VLAN
+ * ID entry then ignore the request.
We're not worried
+ * about the case where we're turning the VFTA VLAN ID
+ * entry bit on, only when requested to turn it off as
+ * there may be multiple pools and/or VFs using the
+ * VLAN ID entry. In that case we cannot clear the
+ * VFTA bit until all pools/VFs using that VLAN ID have also
+ * been cleared. This will be indicated by "bits" being
+ * zero.
+ */
+ if (bits) {
+ wr32(hw, TXGBE_PSR_VLAN_SWC,
+ (TXGBE_PSR_VLAN_SWC_VIEN | vlan));
+ if ((!vlan_on) && (vfta_changed != NULL)) {
+ /* someone wants to clear the vfta entry
+ * but some pools/VFs are still using it.
+ * Ignore it. */
+ *vfta_changed = false;
+ }
+ } else
+ wr32(hw, TXGBE_PSR_VLAN_SWC, 0);
+ }
+
+ return 0;
+}
+
+/**
+ * txgbe_clear_vfta - Clear VLAN filter table
+ * @hw: pointer to hardware structure
+ *
+ * Clears the VLAN filter table, and the VMDq index associated with the filter
+ **/
+s32 txgbe_clear_vfta(struct txgbe_hw *hw)
+{
+ u32 offset;
+
+ DEBUGFUNC("\n");
+
+ for (offset = 0; offset < hw->mac.vft_size; offset++) {
+ wr32(hw, TXGBE_PSR_VLAN_TBL(offset), 0);
+ /* errata 5 */
+ hw->mac.vft_shadow[offset] = 0;
+ }
+
+ for (offset = 0; offset < TXGBE_PSR_VLAN_SWC_ENTRIES; offset++) {
+ wr32(hw, TXGBE_PSR_VLAN_SWC_IDX, offset);
+ wr32(hw, TXGBE_PSR_VLAN_SWC, 0);
+ wr32(hw, TXGBE_PSR_VLAN_SWC_VM_L, 0);
+ wr32(hw, TXGBE_PSR_VLAN_SWC_VM_H, 0);
+ }
+
+ return 0;
+}
+
+/**
+ * txgbe_get_wwn_prefix - Get alternative WWNN/WWPN prefix from
+ * the EEPROM
+ * @hw: pointer to hardware structure
+ * @wwnn_prefix: the alternative WWNN prefix
+ * @wwpn_prefix: the alternative WWPN prefix
+ *
+ * This function will read the EEPROM from the alternative SAN MAC address
+ * block to check the support for the alternative WWNN/WWPN prefix support.
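txgbe_set_vfta above decomposes a 12-bit VLAN ID into a VFTA register index (bits 11:5) and a bit index within that register (bits 4:0). A tiny standalone sketch of just that arithmetic; the helper names are mine, not the driver's:

```c
#include <assert.h>
#include <stdint.h>

/* The VFTA is 128 32-bit registers = 4096 bits, one bit per VLAN ID. */
static uint32_t vfta_reg(uint32_t vlan)
{
	return (vlan >> 5) & 0x7F; /* bits [11:5]: which register */
}

static uint32_t vfta_bit(uint32_t vlan)
{
	return vlan & 0x1F; /* bits [4:0]: which bit in the register */
}
```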
+ **/ +s32 txgbe_get_wwn_prefix(struct txgbe_hw *hw, u16 *wwnn_prefix, + u16 *wwpn_prefix) +{ + u16 offset, caps; + u16 alt_san_mac_blk_offset; + + DEBUGFUNC("\n"); + + /* clear output first */ + *wwnn_prefix = 0xFFFF; + *wwpn_prefix = 0xFFFF; + + /* check if alternative SAN MAC is supported */ + offset = hw->eeprom.sw_region_offset + TXGBE_ALT_SAN_MAC_ADDR_BLK_PTR; + if (TCALL(hw, eeprom.ops.read, offset, &alt_san_mac_blk_offset)) + goto wwn_prefix_err; + + if ((alt_san_mac_blk_offset == 0) || + (alt_san_mac_blk_offset == 0xFFFF)) + goto wwn_prefix_out; + + /* check capability in alternative san mac address block */ + offset = alt_san_mac_blk_offset + TXGBE_ALT_SAN_MAC_ADDR_CAPS_OFFSET; + if (TCALL(hw, eeprom.ops.read, offset, &caps)) + goto wwn_prefix_err; + if (!(caps & TXGBE_ALT_SAN_MAC_ADDR_CAPS_ALTWWN)) + goto wwn_prefix_out; + + /* get the corresponding prefix for WWNN/WWPN */ + offset = alt_san_mac_blk_offset + TXGBE_ALT_SAN_MAC_ADDR_WWNN_OFFSET; + if (TCALL(hw, eeprom.ops.read, offset, wwnn_prefix)) { + ERROR_REPORT2(TXGBE_ERROR_INVALID_STATE, + "eeprom read at offset %d failed", offset); + } + + offset = alt_san_mac_blk_offset + TXGBE_ALT_SAN_MAC_ADDR_WWPN_OFFSET; + if (TCALL(hw, eeprom.ops.read, offset, wwpn_prefix)) + goto wwn_prefix_err; + +wwn_prefix_out: + return 0; + +wwn_prefix_err: + ERROR_REPORT2(TXGBE_ERROR_INVALID_STATE, + "eeprom read at offset %d failed", offset); + return 0; +} + + +/** + * txgbe_set_mac_anti_spoofing - Enable/Disable MAC anti-spoofing + * @hw: pointer to hardware structure + * @enable: enable or disable switch for anti-spoofing + * @pf: Physical Function pool - do not enable anti-spoofing for the PF + * + **/ +void txgbe_set_mac_anti_spoofing(struct txgbe_hw *hw, bool enable, int pf) +{ + u64 pfvfspoof = 0; + + DEBUGFUNC("\n"); + + if (enable) { + /* + * The PF should be allowed to spoof so that it can support + * emulation mode NICs. 
Do not set the bits assigned to the PF.
+ * Remaining pools belong to the PF so they do not need to have
+ * anti-spoofing enabled.
+ */
+ pfvfspoof = (1ULL << pf) - 1;
+ wr32(hw, TXGBE_TDM_MAC_AS_L,
+ pfvfspoof & 0xffffffff);
+ wr32(hw, TXGBE_TDM_MAC_AS_H, pfvfspoof >> 32);
+ } else {
+ wr32(hw, TXGBE_TDM_MAC_AS_L, 0);
+ wr32(hw, TXGBE_TDM_MAC_AS_H, 0);
+ }
+}
+
+/**
+ * txgbe_set_vlan_anti_spoofing - Enable/Disable VLAN anti-spoofing
+ * @hw: pointer to hardware structure
+ * @enable: enable or disable switch for VLAN anti-spoofing
+ * @vf: Virtual Function pool - VF Pool to set for VLAN anti-spoofing
+ *
+ **/
+void txgbe_set_vlan_anti_spoofing(struct txgbe_hw *hw, bool enable, int vf)
+{
+ u32 pfvfspoof;
+
+ DEBUGFUNC("\n");
+
+ if (vf < 32) {
+ pfvfspoof = rd32(hw, TXGBE_TDM_VLAN_AS_L);
+ if (enable)
+ pfvfspoof |= (1 << vf);
+ else
+ pfvfspoof &= ~(1 << vf);
+ wr32(hw, TXGBE_TDM_VLAN_AS_L, pfvfspoof);
+ } else {
+ pfvfspoof = rd32(hw, TXGBE_TDM_VLAN_AS_H);
+ if (enable)
+ pfvfspoof |= (1 << (vf - 32));
+ else
+ pfvfspoof &= ~(1 << (vf - 32));
+ wr32(hw, TXGBE_TDM_VLAN_AS_H, pfvfspoof);
+ }
+}
+
+/**
+ * txgbe_set_ethertype_anti_spoofing - Enable/Disable Ethertype anti-spoofing
+ * @hw: pointer to hardware structure
+ * @enable: enable or disable switch for Ethertype anti-spoofing
+ * @vf: Virtual Function pool - VF Pool to set for Ethertype anti-spoofing
+ *
+ **/
+void txgbe_set_ethertype_anti_spoofing(struct txgbe_hw *hw,
+ bool enable, int vf)
+{
+ u32 pfvfspoof;
+
+ DEBUGFUNC("\n");
+
+ if (vf < 32) {
+ pfvfspoof = rd32(hw, TXGBE_TDM_ETYPE_AS_L);
+ if (enable)
+ pfvfspoof |= (1 << vf);
+ else
+ pfvfspoof &= ~(1 << vf);
+ wr32(hw, TXGBE_TDM_ETYPE_AS_L, pfvfspoof);
+ } else {
+ pfvfspoof = rd32(hw, TXGBE_TDM_ETYPE_AS_H);
+ if (enable)
+ pfvfspoof |= (1 << (vf - 32));
+ else
+ pfvfspoof &= ~(1 << (vf - 32));
+ wr32(hw, TXGBE_TDM_ETYPE_AS_H, pfvfspoof);
+ }
+}
+
+/**
+ * txgbe_get_device_caps - Get additional device capabilities
+ * @hw: pointer to hardware
structure + * @device_caps: the EEPROM word with the extra device capabilities + * + * This function will read the EEPROM location for the device capabilities, + * and return the word through device_caps. + **/ +s32 txgbe_get_device_caps(struct txgbe_hw *hw, u16 *device_caps) +{ + DEBUGFUNC("\n"); + + TCALL(hw, eeprom.ops.read, + hw->eeprom.sw_region_offset + TXGBE_DEVICE_CAPS, device_caps); + + return 0; +} + +/** + * txgbe_calculate_checksum - Calculate checksum for buffer + * @buffer: pointer to EEPROM + * @length: size of EEPROM to calculate a checksum for + * Calculates the checksum for some buffer on a specified length. The + * checksum calculated is returned. + **/ +u8 txgbe_calculate_checksum(u8 *buffer, u32 length) +{ + u32 i; + u8 sum = 0; + + DEBUGFUNC("\n"); + + if (!buffer) + return 0; + + for (i = 0; i < length; i++) + sum += buffer[i]; + + return (u8) (0 - sum); +} + +/** + * txgbe_host_interface_command - Issue command to manageability block + * @hw: pointer to the HW structure + * @buffer: contains the command to write and where the return status will + * be placed + * @length: length of buffer, must be multiple of 4 bytes + * @timeout: time in ms to wait for command completion + * @return_data: read and return data from the buffer (true) or not (false) + * Needed because FW structures are big endian and decoding of + * these fields can be 8 bit or 16 bit based on command. Decoding + * is not easily understood without making a table of commands. + * So we will leave this up to the caller to read back the data + * in these cases. + * + * Communicates with the manageability block. On success return 0 + * else return TXGBE_ERR_HOST_INTERFACE_COMMAND. 
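txgbe_calculate_checksum above returns the two's complement of the byte sum, so any command block whose checksum field has been filled in sums to zero modulo 256 - which is how the firmware validates it. An illustrative re-implementation of the same scheme:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Same scheme as txgbe_calculate_checksum: the returned byte makes the
 * byte sum of (buffer + checksum) equal 0 mod 256. */
static uint8_t cem_checksum(const uint8_t *buffer, size_t length)
{
	uint8_t sum = 0;
	size_t i;

	for (i = 0; i < length; i++)
		sum += buffer[i];
	return (uint8_t)(0 - sum);
}
```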
+ **/ +s32 txgbe_host_interface_command(struct txgbe_hw *hw, u32 *buffer, + u32 length, u32 timeout, bool return_data) +{ + u32 hicr, i, bi; + u32 hdr_size = sizeof(struct txgbe_hic_hdr); + u16 buf_len; + u32 dword_len; + s32 status = 0; + u32 buf[64] = {}; + + DEBUGFUNC("\n"); + + if (length == 0 || length > TXGBE_HI_MAX_BLOCK_BYTE_LENGTH) { + DEBUGOUT1("Buffer length failure buffersize=%d.\n", length); + return TXGBE_ERR_HOST_INTERFACE_COMMAND; + } + + if (TCALL(hw, mac.ops.acquire_swfw_sync, TXGBE_MNG_SWFW_SYNC_SW_MB) + != 0) { + return TXGBE_ERR_SWFW_SYNC; + } + + + /* Calculate length in DWORDs. We must be DWORD aligned */ + if ((length % (sizeof(u32))) != 0) { + DEBUGOUT("Buffer length failure, not aligned to dword"); + status = TXGBE_ERR_INVALID_ARGUMENT; + goto rel_out; + } + + dword_len = length >> 2; + + /* The device driver writes the relevant command block + * into the ram area. + */ + for (i = 0; i < dword_len; i++) { + if (txgbe_check_mng_access(hw)) { + wr32a(hw, TXGBE_MNG_MBOX, + i, TXGBE_CPU_TO_LE32(buffer[i])); + /* write flush */ + buf[i] = rd32a(hw, TXGBE_MNG_MBOX, i); + } else { + status = TXGBE_ERR_MNG_ACCESS_FAILED; + goto rel_out; + } + } + /* Setting this bit tells the ARC that a new command is pending. 
*/
+ if (txgbe_check_mng_access(hw))
+ wr32m(hw, TXGBE_MNG_MBOX_CTL,
+ TXGBE_MNG_MBOX_CTL_SWRDY, TXGBE_MNG_MBOX_CTL_SWRDY);
+ else {
+ status = TXGBE_ERR_MNG_ACCESS_FAILED;
+ goto rel_out;
+ }
+
+ for (i = 0; i < timeout; i++) {
+ if (txgbe_check_mng_access(hw)) {
+ hicr = rd32(hw, TXGBE_MNG_MBOX_CTL);
+ if ((hicr & TXGBE_MNG_MBOX_CTL_FWRDY))
+ break;
+ }
+ msec_delay(1);
+ }
+
+ /* Check command completion */
+ if (timeout != 0 && i == timeout) {
+ ERROR_REPORT1(TXGBE_ERROR_CAUTION,
+ "Command has failed with no status valid.\n");
+
+ ERROR_REPORT1(TXGBE_ERROR_CAUTION, "write value:\n");
+ for (i = 0; i < dword_len; i++) {
+ ERROR_REPORT1(TXGBE_ERROR_CAUTION, "%x ", buffer[i]);
+ }
+ ERROR_REPORT1(TXGBE_ERROR_CAUTION, "read value:\n");
+ for (i = 0; i < dword_len; i++) {
+ ERROR_REPORT1(TXGBE_ERROR_CAUTION, "%x ", buf[i]);
+ }
+
+ status = TXGBE_ERR_HOST_INTERFACE_COMMAND;
+ goto rel_out;
+ }
+
+ if (!return_data)
+ goto rel_out;
+
+ /* Calculate length in DWORDs */
+ dword_len = hdr_size >> 2;
+
+ /* first pull in the header so we know the buffer length */
+ for (bi = 0; bi < dword_len; bi++) {
+ if (txgbe_check_mng_access(hw)) {
+ buffer[bi] = rd32a(hw, TXGBE_MNG_MBOX,
+ bi);
+ TXGBE_LE32_TO_CPUS(&buffer[bi]);
+ } else {
+ status = TXGBE_ERR_MNG_ACCESS_FAILED;
+ goto rel_out;
+ }
+ }
+
+ /* If there is anything in the data position, pull it in */
+ buf_len = ((struct txgbe_hic_hdr *)buffer)->buf_len;
+ if (buf_len == 0)
+ goto rel_out;
+
+ if (length < buf_len + hdr_size) {
+ DEBUGOUT("Buffer not large enough for reply message.\n");
+ status = TXGBE_ERR_HOST_INTERFACE_COMMAND;
+ goto rel_out;
+ }
+
+ /* Calculate length in DWORDs, add 3 for odd lengths */
+ dword_len = (buf_len + 3) >> 2;
+
+ /* Pull in the rest of the buffer (bi is where we left off) */
+ for (; bi <= dword_len; bi++) {
+ if (txgbe_check_mng_access(hw)) {
+ buffer[bi] = rd32a(hw, TXGBE_MNG_MBOX,
+ bi);
+ TXGBE_LE32_TO_CPUS(&buffer[bi]);
+ } else {
+ status = TXGBE_ERR_MNG_ACCESS_FAILED;
+ goto
rel_out; + } + } + +rel_out: + TCALL(hw, mac.ops.release_swfw_sync, TXGBE_MNG_SWFW_SYNC_SW_MB); + return status; +} + +/** + * txgbe_set_fw_drv_ver - Sends driver version to firmware + * @hw: pointer to the HW structure + * @maj: driver version major number + * @min: driver version minor number + * @build: driver version build number + * @sub: driver version sub build number + * + * Sends driver version number to firmware through the manageability + * block. On success return 0 + * else returns TXGBE_ERR_SWFW_SYNC when encountering an error acquiring + * semaphore or TXGBE_ERR_HOST_INTERFACE_COMMAND when command fails. + **/ +s32 txgbe_set_fw_drv_ver(struct txgbe_hw *hw, u8 maj, u8 min, + u8 build, u8 sub) +{ + struct txgbe_hic_drv_info fw_cmd; + int i; + s32 ret_val = 0; + + DEBUGFUNC("\n"); + + fw_cmd.hdr.cmd = FW_CEM_CMD_DRIVER_INFO; + fw_cmd.hdr.buf_len = FW_CEM_CMD_DRIVER_INFO_LEN; + fw_cmd.hdr.cmd_or_resp.cmd_resv = FW_CEM_CMD_RESERVED; + fw_cmd.port_num = (u8)hw->bus.func; + fw_cmd.ver_maj = maj; + fw_cmd.ver_min = min; + fw_cmd.ver_build = build; + fw_cmd.ver_sub = sub; + fw_cmd.hdr.checksum = 0; + fw_cmd.hdr.checksum = txgbe_calculate_checksum((u8 *)&fw_cmd, + (FW_CEM_HDR_LEN + fw_cmd.hdr.buf_len)); + fw_cmd.pad = 0; + fw_cmd.pad2 = 0; + + for (i = 0; i <= FW_CEM_MAX_RETRIES; i++) { + ret_val = txgbe_host_interface_command(hw, (u32 *)&fw_cmd, + sizeof(fw_cmd), + TXGBE_HI_COMMAND_TIMEOUT, + true); + if (ret_val != 0) + continue; + + if (fw_cmd.hdr.cmd_or_resp.ret_status == + FW_CEM_RESP_STATUS_SUCCESS) + ret_val = 0; + else + ret_val = TXGBE_ERR_HOST_INTERFACE_COMMAND; + + break; + } + + return ret_val; +} + +/** + * txgbe_reset_hostif - send reset cmd to fw + * @hw: pointer to hardware structure + * + * Sends reset cmd to firmware through the manageability + * block. On success return 0 + * else returns TXGBE_ERR_SWFW_SYNC when encountering an error acquiring + * semaphore or TXGBE_ERR_HOST_INTERFACE_COMMAND when command fails. 
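The host interface path above requires command lengths to be multiples of 4 bytes and rounds reply lengths up to whole DWORDs with `(buf_len + 3) >> 2`. A sketch of just that rounding; the function name is mine:

```c
#include <assert.h>
#include <stdint.h>

/* Round a byte count up to whole 32-bit words, as the mailbox code does
 * with (buf_len + 3) >> 2 when pulling in a reply. */
static uint32_t bytes_to_dwords(uint32_t len)
{
	return (len + 3) >> 2;
}
```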
+ **/ +s32 txgbe_reset_hostif(struct txgbe_hw *hw) +{ + struct txgbe_hic_reset reset_cmd; + int i; + s32 status = 0; + + DEBUGFUNC("\n"); + + reset_cmd.hdr.cmd = FW_RESET_CMD; + reset_cmd.hdr.buf_len = FW_RESET_LEN; + reset_cmd.hdr.cmd_or_resp.cmd_resv = FW_CEM_CMD_RESERVED; + reset_cmd.lan_id = hw->bus.lan_id; + reset_cmd.reset_type = (u16)hw->reset_type; + reset_cmd.hdr.checksum = 0; + reset_cmd.hdr.checksum = txgbe_calculate_checksum((u8 *)&reset_cmd, + (FW_CEM_HDR_LEN + reset_cmd.hdr.buf_len)); + + for (i = 0; i <= FW_CEM_MAX_RETRIES; i++) { + status = txgbe_host_interface_command(hw, (u32 *)&reset_cmd, + sizeof(reset_cmd), + TXGBE_HI_COMMAND_TIMEOUT, + true); + if (status != 0) + continue; + + if (reset_cmd.hdr.cmd_or_resp.ret_status == + FW_CEM_RESP_STATUS_SUCCESS) { + status = 0; + hw->link_status = TXGBE_LINK_STATUS_NONE; + } else + status = TXGBE_ERR_HOST_INTERFACE_COMMAND; + + break; + } + + return status; +} + +s32 txgbe_setup_mac_link_hostif(struct txgbe_hw *hw, u32 speed) +{ + struct txgbe_hic_phy_cfg cmd; + int i; + s32 status = 0; + + DEBUGFUNC("\n"); + + cmd.hdr.cmd = FW_SETUP_MAC_LINK_CMD; + cmd.hdr.buf_len = FW_SETUP_MAC_LINK_LEN; + cmd.hdr.cmd_or_resp.cmd_resv = FW_CEM_CMD_RESERVED; + cmd.lan_id = hw->bus.lan_id; + cmd.phy_mode = 0; + cmd.phy_speed = (u16)speed; + cmd.hdr.checksum = 0; + cmd.hdr.checksum = txgbe_calculate_checksum((u8 *)&cmd, + (FW_CEM_HDR_LEN + cmd.hdr.buf_len)); + + for (i = 0; i <= FW_CEM_MAX_RETRIES; i++) { + status = txgbe_host_interface_command(hw, (u32 *)&cmd, + sizeof(cmd), + TXGBE_HI_COMMAND_TIMEOUT, + true); + if (status != 0) + continue; + + if (cmd.hdr.cmd_or_resp.ret_status == + FW_CEM_RESP_STATUS_SUCCESS) + status = 0; + else + status = TXGBE_ERR_HOST_INTERFACE_COMMAND; + + break; + } + + return status; + +} + +u16 txgbe_crc16_ccitt(const u8 *buf, int size) +{ + u16 crc = 0; + int i; + while (--size >= 0) { + crc ^= (u16)*buf++ << 8; + for (i = 0; i < 8; i++) { + if (crc & 0x8000) + crc = crc << 1 ^ 0x1021; + else + 
crc <<= 1;
+ }
+ }
+ return crc;
+}
+
+s32 txgbe_upgrade_flash_hostif(struct txgbe_hw *hw, u32 region,
+ const u8 *data, u32 size)
+{
+ struct txgbe_hic_upg_start start_cmd;
+ struct txgbe_hic_upg_write write_cmd;
+ struct txgbe_hic_upg_verify verify_cmd;
+ u32 offset;
+ s32 status = 0;
+
+ DEBUGFUNC("\n");
+
+ start_cmd.hdr.cmd = FW_FLASH_UPGRADE_START_CMD;
+ start_cmd.hdr.buf_len = FW_FLASH_UPGRADE_START_LEN;
+ start_cmd.hdr.cmd_or_resp.cmd_resv = FW_CEM_CMD_RESERVED;
+ start_cmd.module_id = (u8)region;
+ start_cmd.hdr.checksum = 0;
+ start_cmd.hdr.checksum = txgbe_calculate_checksum((u8 *)&start_cmd,
+ (FW_CEM_HDR_LEN + start_cmd.hdr.buf_len));
+ start_cmd.pad2 = 0;
+ start_cmd.pad3 = 0;
+
+ status = txgbe_host_interface_command(hw, (u32 *)&start_cmd,
+ sizeof(start_cmd),
+ TXGBE_HI_FLASH_ERASE_TIMEOUT,
+ true);
+
+ if (start_cmd.hdr.cmd_or_resp.ret_status == FW_CEM_RESP_STATUS_SUCCESS)
+ status = 0;
+ else {
+ status = TXGBE_ERR_HOST_INTERFACE_COMMAND;
+ return status;
+ }
+
+ for (offset = 0; offset < size;) {
+ write_cmd.hdr.cmd = FW_FLASH_UPGRADE_WRITE_CMD;
+ if (size - offset > 248) {
+ write_cmd.data_len = 248 / 4;
+ write_cmd.eof_flag = 0;
+ } else {
+ write_cmd.data_len = (u8)((size - offset) / 4);
+ write_cmd.eof_flag = 1;
+ }
+ memcpy((u8 *)write_cmd.data, &data[offset], write_cmd.data_len * 4);
+ write_cmd.hdr.buf_len = (write_cmd.data_len + 1) * 4;
+ write_cmd.hdr.cmd_or_resp.cmd_resv = FW_CEM_CMD_RESERVED;
+ write_cmd.check_sum = txgbe_crc16_ccitt((u8 *)write_cmd.data,
+ write_cmd.data_len * 4);
+
+ status = txgbe_host_interface_command(hw, (u32 *)&write_cmd,
+ sizeof(write_cmd),
+ TXGBE_HI_FLASH_UPDATE_TIMEOUT,
+ true);
+ if (write_cmd.hdr.cmd_or_resp.ret_status ==
+ FW_CEM_RESP_STATUS_SUCCESS)
+ status = 0;
+ else {
+ status = TXGBE_ERR_HOST_INTERFACE_COMMAND;
+ return status;
+ }
+ offset += write_cmd.data_len * 4;
+ }
+
+ verify_cmd.hdr.cmd = FW_FLASH_UPGRADE_VERIFY_CMD;
+ verify_cmd.hdr.buf_len = FW_FLASH_UPGRADE_VERIFY_LEN;
+
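txgbe_crc16_ccitt above, used for write_cmd.check_sum, is the unreflected polynomial 0x1021 with a zero initial value - the variant usually cataloged as CRC-16/XMODEM. A standalone copy checked against the standard "123456789" test vector:

```c
#include <assert.h>
#include <stdint.h>

/* Bit-by-bit CRC-16/XMODEM: polynomial 0x1021, init 0, no reflection -
 * the same algorithm as txgbe_crc16_ccitt. */
static uint16_t crc16_xmodem(const uint8_t *buf, int size)
{
	uint16_t crc = 0;
	int i;

	while (--size >= 0) {
		crc ^= (uint16_t)*buf++ << 8;
		for (i = 0; i < 8; i++)
			crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
					     : (uint16_t)(crc << 1);
	}
	return crc;
}
```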
verify_cmd.hdr.cmd_or_resp.cmd_resv = FW_CEM_CMD_RESERVED; + switch (region) { + case TXGBE_MODULE_EEPROM: + verify_cmd.action_flag = TXGBE_RELOAD_EEPROM; + break; + case TXGBE_MODULE_FIRMWARE: + verify_cmd.action_flag = TXGBE_RESET_FIRMWARE; + break; + case TXGBE_MODULE_HARDWARE: + verify_cmd.action_flag = TXGBE_RESET_LAN; + break; + default: + return status; + } + + verify_cmd.hdr.checksum = txgbe_calculate_checksum((u8 *)&verify_cmd, + (FW_CEM_HDR_LEN + verify_cmd.hdr.buf_len)); + + status = txgbe_host_interface_command(hw, (u32 *)&verify_cmd, + sizeof(verify_cmd), + TXGBE_HI_FLASH_VERIFY_TIMEOUT, + true); + + if (verify_cmd.hdr.cmd_or_resp.ret_status == FW_CEM_RESP_STATUS_SUCCESS) + status = 0; + else { + status = TXGBE_ERR_HOST_INTERFACE_COMMAND; + } + return status; +} + +/** + * txgbe_set_rxpba - Initialize Rx packet buffer + * @hw: pointer to hardware structure + * @num_pb: number of packet buffers to allocate + * @headroom: reserve n KB of headroom + * @strategy: packet buffer allocation strategy + **/ +void txgbe_set_rxpba(struct txgbe_hw *hw, int num_pb, u32 headroom, + int strategy) +{ + u32 pbsize = hw->mac.rx_pb_size; + int i = 0; + u32 rxpktsize, txpktsize, txpbthresh; + + DEBUGFUNC("\n"); + + /* Reserve headroom */ + pbsize -= headroom; + + if (!num_pb) + num_pb = 1; + + /* Divide remaining packet buffer space amongst the number of packet + * buffers requested using supplied strategy. + */ + switch (strategy) { + case PBA_STRATEGY_WEIGHTED: + /* txgbe_dcb_pba_80_48 strategy weight first half of packet + * buffer with 5/8 of the packet buffer space. 
+ */
+ rxpktsize = (pbsize * 5) / (num_pb * 4);
+ pbsize -= rxpktsize * (num_pb / 2);
+ rxpktsize <<= TXGBE_RDB_PB_SZ_SHIFT;
+ for (; i < (num_pb / 2); i++)
+ wr32(hw, TXGBE_RDB_PB_SZ(i), rxpktsize);
+ /* fall through - configure remaining packet buffers */
+ case PBA_STRATEGY_EQUAL:
+ rxpktsize = (pbsize / (num_pb - i)) << TXGBE_RDB_PB_SZ_SHIFT;
+ for (; i < num_pb; i++)
+ wr32(hw, TXGBE_RDB_PB_SZ(i), rxpktsize);
+ break;
+ default:
+ break;
+ }
+
+ /* Only support an equally distributed Tx packet buffer strategy. */
+ txpktsize = TXGBE_TDB_PB_SZ_MAX / num_pb;
+ txpbthresh = (txpktsize / 1024) - TXGBE_TXPKT_SIZE_MAX;
+ for (i = 0; i < num_pb; i++) {
+ wr32(hw, TXGBE_TDB_PB_SZ(i), txpktsize);
+ wr32(hw, TXGBE_TDM_PB_THRE(i), txpbthresh);
+ }
+
+ /* Clear unused TCs, if any, to zero buffer size */
+ for (; i < TXGBE_MAX_PB; i++) {
+ wr32(hw, TXGBE_RDB_PB_SZ(i), 0);
+ wr32(hw, TXGBE_TDB_PB_SZ(i), 0);
+ wr32(hw, TXGBE_TDM_PB_THRE(i), 0);
+ }
+}
+
+STATIC const u8 txgbe_emc_temp_data[4] = {
+ TXGBE_EMC_INTERNAL_DATA,
+ TXGBE_EMC_DIODE1_DATA,
+ TXGBE_EMC_DIODE2_DATA,
+ TXGBE_EMC_DIODE3_DATA
+};
+STATIC const u8 txgbe_emc_therm_limit[4] = {
+ TXGBE_EMC_INTERNAL_THERM_LIMIT,
+ TXGBE_EMC_DIODE1_THERM_LIMIT,
+ TXGBE_EMC_DIODE2_THERM_LIMIT,
+ TXGBE_EMC_DIODE3_THERM_LIMIT
+};
+
+/**
+ * txgbe_get_thermal_sensor_data - Gathers thermal sensor data
+ * @hw: pointer to hardware structure
+ *
+ * algorithm:
+ * T = (-4.8380E+01)N^0 + (3.1020E-01)N^1 + (-1.8201E-04)N^2 +
+ *     (8.1542E-08)N^3 + (-1.6743E-11)N^4
+ * algorithm with 5% more deviation, easy for implementation
+ * T = (-50)N^0 + (0.31)N^1 + (-0.0002)N^2 + (0.0000001)N^3
+ *
+ * Stores the result in hw->mac.thermal_sensor_data.sensor.temp
+ **/
+s32 txgbe_get_thermal_sensor_data(struct txgbe_hw *hw)
+{
+ s64 tsv;
+ int i = 0;
+ struct txgbe_thermal_sensor_data *data = &hw->mac.thermal_sensor_data;
+
+ DEBUGFUNC("\n");
+
+ /* Only support thermal sensors attached to
physical port 0 */
+	if (hw->bus.lan_id)
+		return TXGBE_NOT_IMPLEMENTED;
+
+	tsv = (s64)(rd32(hw, TXGBE_TS_ST) &
+		    TXGBE_TS_ST_DATA_OUT_MASK);
+
+	tsv = tsv < 1200 ? tsv : 1200;
+	tsv = -(48380 << 8) / 1000
+		+ tsv * (31020 << 8) / 100000
+		- tsv * tsv * (18201 << 8) / 100000000
+		+ tsv * tsv * tsv * (81542 << 8) / 1000000000000
+		- tsv * tsv * tsv * tsv * (16743 << 8) / 1000000000000000;
+	tsv >>= 8;
+
+	data->sensor.temp = (s16)tsv;
+
+	for (i = 0; i < 100; i++) {
+		tsv = (s64)rd32(hw, TXGBE_TS_ST);
+		if (tsv >> 16 == 0x1) {
+			tsv = tsv & TXGBE_TS_ST_DATA_OUT_MASK;
+			tsv = tsv < 1200 ? tsv : 1200;
+			tsv = -(48380 << 8) / 1000
+				+ tsv * (31020 << 8) / 100000
+				- tsv * tsv * (18201 << 8) / 100000000
+				+ tsv * tsv * tsv * (81542 << 8) / 1000000000000
+				- tsv * tsv * tsv * tsv * (16743 << 8) / 1000000000000000;
+			tsv >>= 8;
+
+			data->sensor.temp = (s16)tsv;
+			break;
+		}
+		msleep(1);
+	}
+
+	return 0;
+}
+
+/**
+ * txgbe_init_thermal_sensor_thresh - Inits thermal sensor thresholds
+ * @hw: pointer to hardware structure
+ *
+ * Inits the thermal sensor thresholds according to the NVM map
+ * and saves off the threshold and location values into
+ * mac.thermal_sensor_data.
+ **/
+s32 txgbe_init_thermal_sensor_thresh(struct txgbe_hw *hw)
+{
+	s32 status = 0;
+	struct txgbe_thermal_sensor_data *data = &hw->mac.thermal_sensor_data;
+
+	DEBUGFUNC("\n");
+
+	memset(data, 0, sizeof(struct txgbe_thermal_sensor_data));
+
+	/* Only support thermal sensors attached to SP physical port 0 */
+	if (hw->bus.lan_id)
+		return TXGBE_NOT_IMPLEMENTED;
+
+	wr32(hw, TXGBE_TS_CTL, TXGBE_TS_CTL_EVAL_MD);
+	wr32(hw, TXGBE_TS_INT_EN,
+	     TXGBE_TS_INT_EN_ALARM_INT_EN | TXGBE_TS_INT_EN_DALARM_INT_EN);
+	wr32(hw, TXGBE_TS_EN, TXGBE_TS_EN_ENA);
+
+	data->sensor.alarm_thresh = 100;
+	wr32(hw, TXGBE_TS_ALARM_THRE, 677);
+	data->sensor.dalarm_thresh = 90;
+	wr32(hw, TXGBE_TS_DALARM_THRE, 614);
+
+	return status;
+}
+
+void txgbe_disable_rx(struct txgbe_hw *hw)
+{
+	u32 pfdtxgswc;
+
u32 rxctrl;
+
+	DEBUGFUNC("\n");
+
+	rxctrl = rd32(hw, TXGBE_RDB_PB_CTL);
+	if (rxctrl & TXGBE_RDB_PB_CTL_RXEN) {
+		pfdtxgswc = rd32(hw, TXGBE_PSR_CTL);
+		if (pfdtxgswc & TXGBE_PSR_CTL_SW_EN) {
+			pfdtxgswc &= ~TXGBE_PSR_CTL_SW_EN;
+			wr32(hw, TXGBE_PSR_CTL, pfdtxgswc);
+			hw->mac.set_lben = true;
+		} else {
+			hw->mac.set_lben = false;
+		}
+		rxctrl &= ~TXGBE_RDB_PB_CTL_RXEN;
+		wr32(hw, TXGBE_RDB_PB_CTL, rxctrl);
+		/* errata 14 */
+		if (hw->revision_id == TXGBE_SP_MPW) {
+			do {
+				do {
+					if (rd32m(hw, TXGBE_RDB_PB_CTL,
+						  TXGBE_RDB_PB_CTL_DISABLED) == 1)
+						break;
+					msleep(10);
+				} while (1);
+				if (rd32m(hw, TXGBE_RDB_TXSWERR,
+					  TXGBE_RDB_TXSWERR_TB_FREE) == 0x143)
+					break;
+
+				wr32m(hw, TXGBE_RDB_PB_CTL,
+				      TXGBE_RDB_PB_CTL_RXEN,
+				      TXGBE_RDB_PB_CTL_RXEN);
+				wr32m(hw, TXGBE_RDB_PB_CTL,
+				      TXGBE_RDB_PB_CTL_RXEN,
+				      ~TXGBE_RDB_PB_CTL_RXEN);
+			} while (1);
+		}
+
+		if (!(((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP) ||
+		      ((hw->subsystem_device_id & TXGBE_WOL_MASK) == TXGBE_WOL_SUP))) {
+			/* disable mac receiver */
+			wr32m(hw, TXGBE_MAC_RX_CFG,
+			      TXGBE_MAC_RX_CFG_RE, 0);
+		}
+	}
+}
+
+void txgbe_enable_rx(struct txgbe_hw *hw)
+{
+	u32 pfdtxgswc;
+
+	DEBUGFUNC("\n");
+
+	/* enable mac receiver */
+	wr32m(hw, TXGBE_MAC_RX_CFG,
+	      TXGBE_MAC_RX_CFG_RE, TXGBE_MAC_RX_CFG_RE);
+
+	wr32m(hw, TXGBE_RDB_PB_CTL,
+	      TXGBE_RDB_PB_CTL_RXEN, TXGBE_RDB_PB_CTL_RXEN);
+
+	if (hw->mac.set_lben) {
+		pfdtxgswc = rd32(hw, TXGBE_PSR_CTL);
+		pfdtxgswc |= TXGBE_PSR_CTL_SW_EN;
+		wr32(hw, TXGBE_PSR_CTL, pfdtxgswc);
+		hw->mac.set_lben = false;
+	}
+}
+
+/**
+ * txgbe_mng_present - returns true when management capability is present
+ * @hw: pointer to hardware structure
+ */
+bool txgbe_mng_present(struct txgbe_hw *hw)
+{
+	u32 fwsm;
+
+	fwsm = rd32(hw, TXGBE_MIS_ST);
+	return fwsm & TXGBE_MIS_ST_MNG_INIT_DN;
+}
+
+bool txgbe_check_mng_access(struct txgbe_hw *hw)
+{
+	bool ret = false;
+	u32 rst_delay;
+	u32 i;
+	struct txgbe_adapter *adapter = hw->back;
+
+	if
(!txgbe_mng_present(hw)) + return false; + if (adapter->hw.revision_id != TXGBE_SP_MPW) + return true; + if (!(adapter->flags2 & TXGBE_FLAG2_MNG_REG_ACCESS_DISABLED)) + return true; + + rst_delay = (rd32(&adapter->hw, TXGBE_MIS_RST_ST) & + TXGBE_MIS_RST_ST_RST_INIT) >> + TXGBE_MIS_RST_ST_RST_INI_SHIFT; + for (i = 0; i < rst_delay + 2; i++) { + if (!(adapter->flags2 & TXGBE_FLAG2_MNG_REG_ACCESS_DISABLED)) { + ret = true; + break; + } + msleep(100); + } + return ret; +} + +/** + * txgbe_setup_mac_link_multispeed_fiber - Set MAC link speed + * @hw: pointer to hardware structure + * @speed: new link speed + * @autoneg_wait_to_complete: true when waiting for completion is needed + * + * Set the link speed in the MAC and/or PHY register and restarts link. + **/ +s32 txgbe_setup_mac_link_multispeed_fiber(struct txgbe_hw *hw, + u32 speed, + bool autoneg_wait_to_complete) +{ + u32 link_speed = TXGBE_LINK_SPEED_UNKNOWN; + u32 highest_link_speed = TXGBE_LINK_SPEED_UNKNOWN; + s32 status = 0; + u32 speedcnt = 0; + u32 i = 0; + bool autoneg, link_up = false; + + DEBUGFUNC("\n"); + + /* Mask off requested but non-supported speeds */ + status = TCALL(hw, mac.ops.get_link_capabilities, + &link_speed, &autoneg); + if (status != 0) + return status; + + speed &= link_speed; + + /* Try each speed one by one, highest priority first. We do this in + * software because 10Gb fiber doesn't support speed autonegotiation. 
+ */ + if (speed & TXGBE_LINK_SPEED_10GB_FULL) { + speedcnt++; + highest_link_speed = TXGBE_LINK_SPEED_10GB_FULL; + + /* If we already have link at this speed, just jump out */ + status = TCALL(hw, mac.ops.check_link, + &link_speed, &link_up, false); + if (status != 0) + return status; + + if ((link_speed == TXGBE_LINK_SPEED_10GB_FULL) && link_up) + goto out; + + /* Allow module to change analog characteristics (1G->10G) */ + msec_delay(40); + + status = TCALL(hw, mac.ops.setup_mac_link, + TXGBE_LINK_SPEED_10GB_FULL, + autoneg_wait_to_complete); + if (status != 0) + return status; + + /* Flap the Tx laser if it has not already been done */ + TCALL(hw, mac.ops.flap_tx_laser); + + /* Wait for the controller to acquire link. Per IEEE 802.3ap, + * Section 73.10.2, we may have to wait up to 500ms if KR is + * attempted. sapphire uses the same timing for 10g SFI. + */ + for (i = 0; i < 5; i++) { + /* Wait for the link partner to also set speed */ + msec_delay(100); + + /* If we have link, just jump out */ + status = TCALL(hw, mac.ops.check_link, + &link_speed, &link_up, false); + if (status != 0) + return status; + + if (link_up) + goto out; + } + } + + if (speed & TXGBE_LINK_SPEED_1GB_FULL) { + speedcnt++; + if (highest_link_speed == TXGBE_LINK_SPEED_UNKNOWN) + highest_link_speed = TXGBE_LINK_SPEED_1GB_FULL; + + /* If we already have link at this speed, just jump out */ + status = TCALL(hw, mac.ops.check_link, + &link_speed, &link_up, false); + if (status != 0) + return status; + + if ((link_speed == TXGBE_LINK_SPEED_1GB_FULL) && link_up) + goto out; + + /* Allow module to change analog characteristics (10G->1G) */ + msec_delay(40); + + status = TCALL(hw, mac.ops.setup_mac_link, + TXGBE_LINK_SPEED_1GB_FULL, + autoneg_wait_to_complete); + if (status != 0) + return status; + + /* Flap the Tx laser if it has not already been done */ + TCALL(hw, mac.ops.flap_tx_laser); + + /* Wait for the link partner to also set speed */ + msec_delay(100); + + /* If we have link, just jump 
out */
+		status = TCALL(hw, mac.ops.check_link,
+			       &link_speed, &link_up, false);
+		if (status != 0)
+			return status;
+
+		if (link_up)
+			goto out;
+	}
+
+	/* We didn't get link. Configure back to the highest speed we tried
+	 * (if there was more than one). We call ourselves back with just the
+	 * single highest speed that the user requested.
+	 */
+	if (speedcnt > 1)
+		status = txgbe_setup_mac_link_multispeed_fiber(hw,
+							       highest_link_speed,
+							       autoneg_wait_to_complete);
+
+out:
+	/* Set autoneg_advertised value based on input link speed */
+	hw->phy.autoneg_advertised = 0;
+
+	if (speed & TXGBE_LINK_SPEED_10GB_FULL)
+		hw->phy.autoneg_advertised |= TXGBE_LINK_SPEED_10GB_FULL;
+
+	if (speed & TXGBE_LINK_SPEED_1GB_FULL)
+		hw->phy.autoneg_advertised |= TXGBE_LINK_SPEED_1GB_FULL;
+
+	return status;
+}
+
+int txgbe_check_flash_load(struct txgbe_hw *hw, u32 check_bit)
+{
+	u32 i = 0;
+	u32 reg = 0;
+	int err = 0;
+
+	/* check whether flash exists */
+	if (!(rd32(hw, TXGBE_SPI_STATUS) &
+	      TXGBE_SPI_STATUS_FLASH_BYPASS)) {
+		/* wait for hw to finish loading flash */
+		for (i = 0; i < TXGBE_MAX_FLASH_LOAD_POLL_TIME; i++) {
+			reg = rd32(hw, TXGBE_SPI_ILDR_STATUS);
+			if (!(reg & check_bit)) {
+				/* done */
+				break;
+			}
+			msleep(200);
+		}
+		if (i == TXGBE_MAX_FLASH_LOAD_POLL_TIME)
+			err = TXGBE_ERR_FLASH_LOADING_FAILED;
+	}
+	return err;
+}
+
+/* The txgbe_ptype_lookup is used to convert from the 8-bit ptype in the
+ * hardware to a bit-field that can be used by SW to more easily determine the
+ * packet type.
+ *
+ * Macros are used to shorten the table lines and make this table human
+ * readable.
+ *
+ * We store the PTYPE in the top byte of the bit field - this is just so that
+ * we can check that the table doesn't have a row missing, as the index into
+ * the table should be the PTYPE.
+ * + * Typical work flow: + * + * IF NOT txgbe_ptype_lookup[ptype].known + * THEN + * Packet is unknown + * ELSE IF txgbe_ptype_lookup[ptype].mac == TXGBE_DEC_PTYPE_MAC_IP + * Use the rest of the fields to look at the tunnels, inner protocols, etc + * ELSE + * Use the enum txgbe_l2_ptypes to decode the packet type + * ENDIF + */ + +/* macro to make the table lines short */ +#define TXGBE_PTT(ptype, mac, ip, etype, eip, proto, layer)\ + { ptype, \ + 1, \ + /* mac */ TXGBE_DEC_PTYPE_MAC_##mac, \ + /* ip */ TXGBE_DEC_PTYPE_IP_##ip, \ + /* etype */ TXGBE_DEC_PTYPE_ETYPE_##etype, \ + /* eip */ TXGBE_DEC_PTYPE_IP_##eip, \ + /* proto */ TXGBE_DEC_PTYPE_PROT_##proto, \ + /* layer */ TXGBE_DEC_PTYPE_LAYER_##layer } + +#define TXGBE_UKN(ptype) \ + { ptype, 0, 0, 0, 0, 0, 0, 0 } + +/* Lookup table mapping the HW PTYPE to the bit field for decoding */ +/* for ((pt=0;pt<256;pt++)); do printf "macro(0x%02X),\n" $pt; done */ +txgbe_dptype txgbe_ptype_lookup[256] = { + TXGBE_UKN(0x00), + TXGBE_UKN(0x01), + TXGBE_UKN(0x02), + TXGBE_UKN(0x03), + TXGBE_UKN(0x04), + TXGBE_UKN(0x05), + TXGBE_UKN(0x06), + TXGBE_UKN(0x07), + TXGBE_UKN(0x08), + TXGBE_UKN(0x09), + TXGBE_UKN(0x0A), + TXGBE_UKN(0x0B), + TXGBE_UKN(0x0C), + TXGBE_UKN(0x0D), + TXGBE_UKN(0x0E), + TXGBE_UKN(0x0F), + + /* L2: mac */ + TXGBE_UKN(0x10), + TXGBE_PTT(0x11, L2, NONE, NONE, NONE, NONE, PAY2), + TXGBE_PTT(0x12, L2, NONE, NONE, NONE, TS, PAY2), + TXGBE_PTT(0x13, L2, NONE, NONE, NONE, NONE, PAY2), + TXGBE_PTT(0x14, L2, NONE, NONE, NONE, NONE, PAY2), + TXGBE_PTT(0x15, L2, NONE, NONE, NONE, NONE, NONE), + TXGBE_PTT(0x16, L2, NONE, NONE, NONE, NONE, PAY2), + TXGBE_PTT(0x17, L2, NONE, NONE, NONE, NONE, NONE), + + /* L2: ethertype filter */ + TXGBE_PTT(0x18, L2, NONE, NONE, NONE, NONE, NONE), + TXGBE_PTT(0x19, L2, NONE, NONE, NONE, NONE, NONE), + TXGBE_PTT(0x1A, L2, NONE, NONE, NONE, NONE, NONE), + TXGBE_PTT(0x1B, L2, NONE, NONE, NONE, NONE, NONE), + TXGBE_PTT(0x1C, L2, NONE, NONE, NONE, NONE, NONE), + TXGBE_PTT(0x1D, L2, 
NONE, NONE, NONE, NONE, NONE), + TXGBE_PTT(0x1E, L2, NONE, NONE, NONE, NONE, NONE), + TXGBE_PTT(0x1F, L2, NONE, NONE, NONE, NONE, NONE), + + /* L3: ip non-tunnel */ + TXGBE_UKN(0x20), + TXGBE_PTT(0x21, IP, FGV4, NONE, NONE, NONE, PAY3), + TXGBE_PTT(0x22, IP, IPV4, NONE, NONE, NONE, PAY3), + TXGBE_PTT(0x23, IP, IPV4, NONE, NONE, UDP, PAY4), + TXGBE_PTT(0x24, IP, IPV4, NONE, NONE, TCP, PAY4), + TXGBE_PTT(0x25, IP, IPV4, NONE, NONE, SCTP, PAY4), + TXGBE_UKN(0x26), + TXGBE_UKN(0x27), + TXGBE_UKN(0x28), + TXGBE_PTT(0x29, IP, FGV6, NONE, NONE, NONE, PAY3), + TXGBE_PTT(0x2A, IP, IPV6, NONE, NONE, NONE, PAY3), + TXGBE_PTT(0x2B, IP, IPV6, NONE, NONE, UDP, PAY3), + TXGBE_PTT(0x2C, IP, IPV6, NONE, NONE, TCP, PAY4), + TXGBE_PTT(0x2D, IP, IPV6, NONE, NONE, SCTP, PAY4), + TXGBE_UKN(0x2E), + TXGBE_UKN(0x2F), + + /* L2: fcoe */ + TXGBE_PTT(0x30, FCOE, NONE, NONE, NONE, NONE, PAY3), + TXGBE_PTT(0x31, FCOE, NONE, NONE, NONE, NONE, PAY3), + TXGBE_PTT(0x32, FCOE, NONE, NONE, NONE, NONE, PAY3), + TXGBE_PTT(0x33, FCOE, NONE, NONE, NONE, NONE, PAY3), + TXGBE_PTT(0x34, FCOE, NONE, NONE, NONE, NONE, PAY3), + TXGBE_UKN(0x35), + TXGBE_UKN(0x36), + TXGBE_UKN(0x37), + TXGBE_PTT(0x38, FCOE, NONE, NONE, NONE, NONE, PAY3), + TXGBE_PTT(0x39, FCOE, NONE, NONE, NONE, NONE, PAY3), + TXGBE_PTT(0x3A, FCOE, NONE, NONE, NONE, NONE, PAY3), + TXGBE_PTT(0x3B, FCOE, NONE, NONE, NONE, NONE, PAY3), + TXGBE_PTT(0x3C, FCOE, NONE, NONE, NONE, NONE, PAY3), + TXGBE_UKN(0x3D), + TXGBE_UKN(0x3E), + TXGBE_UKN(0x3F), + + TXGBE_UKN(0x40), + TXGBE_UKN(0x41), + TXGBE_UKN(0x42), + TXGBE_UKN(0x43), + TXGBE_UKN(0x44), + TXGBE_UKN(0x45), + TXGBE_UKN(0x46), + TXGBE_UKN(0x47), + TXGBE_UKN(0x48), + TXGBE_UKN(0x49), + TXGBE_UKN(0x4A), + TXGBE_UKN(0x4B), + TXGBE_UKN(0x4C), + TXGBE_UKN(0x4D), + TXGBE_UKN(0x4E), + TXGBE_UKN(0x4F), + TXGBE_UKN(0x50), + TXGBE_UKN(0x51), + TXGBE_UKN(0x52), + TXGBE_UKN(0x53), + TXGBE_UKN(0x54), + TXGBE_UKN(0x55), + TXGBE_UKN(0x56), + TXGBE_UKN(0x57), + TXGBE_UKN(0x58), + TXGBE_UKN(0x59), + 
TXGBE_UKN(0x5A), + TXGBE_UKN(0x5B), + TXGBE_UKN(0x5C), + TXGBE_UKN(0x5D), + TXGBE_UKN(0x5E), + TXGBE_UKN(0x5F), + TXGBE_UKN(0x60), + TXGBE_UKN(0x61), + TXGBE_UKN(0x62), + TXGBE_UKN(0x63), + TXGBE_UKN(0x64), + TXGBE_UKN(0x65), + TXGBE_UKN(0x66), + TXGBE_UKN(0x67), + TXGBE_UKN(0x68), + TXGBE_UKN(0x69), + TXGBE_UKN(0x6A), + TXGBE_UKN(0x6B), + TXGBE_UKN(0x6C), + TXGBE_UKN(0x6D), + TXGBE_UKN(0x6E), + TXGBE_UKN(0x6F), + TXGBE_UKN(0x70), + TXGBE_UKN(0x71), + TXGBE_UKN(0x72), + TXGBE_UKN(0x73), + TXGBE_UKN(0x74), + TXGBE_UKN(0x75), + TXGBE_UKN(0x76), + TXGBE_UKN(0x77), + TXGBE_UKN(0x78), + TXGBE_UKN(0x79), + TXGBE_UKN(0x7A), + TXGBE_UKN(0x7B), + TXGBE_UKN(0x7C), + TXGBE_UKN(0x7D), + TXGBE_UKN(0x7E), + TXGBE_UKN(0x7F), + + /* IPv4 --> IPv4/IPv6 */ + TXGBE_UKN(0x80), + TXGBE_PTT(0x81, IP, IPV4, IPIP, FGV4, NONE, PAY3), + TXGBE_PTT(0x82, IP, IPV4, IPIP, IPV4, NONE, PAY3), + TXGBE_PTT(0x83, IP, IPV4, IPIP, IPV4, UDP, PAY4), + TXGBE_PTT(0x84, IP, IPV4, IPIP, IPV4, TCP, PAY4), + TXGBE_PTT(0x85, IP, IPV4, IPIP, IPV4, SCTP, PAY4), + TXGBE_UKN(0x86), + TXGBE_UKN(0x87), + TXGBE_UKN(0x88), + TXGBE_PTT(0x89, IP, IPV4, IPIP, FGV6, NONE, PAY3), + TXGBE_PTT(0x8A, IP, IPV4, IPIP, IPV6, NONE, PAY3), + TXGBE_PTT(0x8B, IP, IPV4, IPIP, IPV6, UDP, PAY4), + TXGBE_PTT(0x8C, IP, IPV4, IPIP, IPV6, TCP, PAY4), + TXGBE_PTT(0x8D, IP, IPV4, IPIP, IPV6, SCTP, PAY4), + TXGBE_UKN(0x8E), + TXGBE_UKN(0x8F), + + /* IPv4 --> GRE/NAT --> NONE/IPv4/IPv6 */ + TXGBE_PTT(0x90, IP, IPV4, IG, NONE, NONE, PAY3), + TXGBE_PTT(0x91, IP, IPV4, IG, FGV4, NONE, PAY3), + TXGBE_PTT(0x92, IP, IPV4, IG, IPV4, NONE, PAY3), + TXGBE_PTT(0x93, IP, IPV4, IG, IPV4, UDP, PAY4), + TXGBE_PTT(0x94, IP, IPV4, IG, IPV4, TCP, PAY4), + TXGBE_PTT(0x95, IP, IPV4, IG, IPV4, SCTP, PAY4), + TXGBE_UKN(0x96), + TXGBE_UKN(0x97), + TXGBE_UKN(0x98), + TXGBE_PTT(0x99, IP, IPV4, IG, FGV6, NONE, PAY3), + TXGBE_PTT(0x9A, IP, IPV4, IG, IPV6, NONE, PAY3), + TXGBE_PTT(0x9B, IP, IPV4, IG, IPV6, UDP, PAY4), + TXGBE_PTT(0x9C, IP, IPV4, IG, IPV6, TCP, PAY4), + 
TXGBE_PTT(0x9D, IP, IPV4, IG, IPV6, SCTP, PAY4), + TXGBE_UKN(0x9E), + TXGBE_UKN(0x9F), + + /* IPv4 --> GRE/NAT --> MAC --> NONE/IPv4/IPv6 */ + TXGBE_PTT(0xA0, IP, IPV4, IGM, NONE, NONE, PAY3), + TXGBE_PTT(0xA1, IP, IPV4, IGM, FGV4, NONE, PAY3), + TXGBE_PTT(0xA2, IP, IPV4, IGM, IPV4, NONE, PAY3), + TXGBE_PTT(0xA3, IP, IPV4, IGM, IPV4, UDP, PAY4), + TXGBE_PTT(0xA4, IP, IPV4, IGM, IPV4, TCP, PAY4), + TXGBE_PTT(0xA5, IP, IPV4, IGM, IPV4, SCTP, PAY4), + TXGBE_UKN(0xA6), + TXGBE_UKN(0xA7), + TXGBE_UKN(0xA8), + TXGBE_PTT(0xA9, IP, IPV4, IGM, FGV6, NONE, PAY3), + TXGBE_PTT(0xAA, IP, IPV4, IGM, IPV6, NONE, PAY3), + TXGBE_PTT(0xAB, IP, IPV4, IGM, IPV6, UDP, PAY4), + TXGBE_PTT(0xAC, IP, IPV4, IGM, IPV6, TCP, PAY4), + TXGBE_PTT(0xAD, IP, IPV4, IGM, IPV6, SCTP, PAY4), + TXGBE_UKN(0xAE), + TXGBE_UKN(0xAF), + + /* IPv4 --> GRE/NAT --> MAC+VLAN --> NONE/IPv4/IPv6 */ + TXGBE_PTT(0xB0, IP, IPV4, IGMV, NONE, NONE, PAY3), + TXGBE_PTT(0xB1, IP, IPV4, IGMV, FGV4, NONE, PAY3), + TXGBE_PTT(0xB2, IP, IPV4, IGMV, IPV4, NONE, PAY3), + TXGBE_PTT(0xB3, IP, IPV4, IGMV, IPV4, UDP, PAY4), + TXGBE_PTT(0xB4, IP, IPV4, IGMV, IPV4, TCP, PAY4), + TXGBE_PTT(0xB5, IP, IPV4, IGMV, IPV4, SCTP, PAY4), + TXGBE_UKN(0xB6), + TXGBE_UKN(0xB7), + TXGBE_UKN(0xB8), + TXGBE_PTT(0xB9, IP, IPV4, IGMV, FGV6, NONE, PAY3), + TXGBE_PTT(0xBA, IP, IPV4, IGMV, IPV6, NONE, PAY3), + TXGBE_PTT(0xBB, IP, IPV4, IGMV, IPV6, UDP, PAY4), + TXGBE_PTT(0xBC, IP, IPV4, IGMV, IPV6, TCP, PAY4), + TXGBE_PTT(0xBD, IP, IPV4, IGMV, IPV6, SCTP, PAY4), + TXGBE_UKN(0xBE), + TXGBE_UKN(0xBF), + + /* IPv6 --> IPv4/IPv6 */ + TXGBE_UKN(0xC0), + TXGBE_PTT(0xC1, IP, IPV6, IPIP, FGV4, NONE, PAY3), + TXGBE_PTT(0xC2, IP, IPV6, IPIP, IPV4, NONE, PAY3), + TXGBE_PTT(0xC3, IP, IPV6, IPIP, IPV4, UDP, PAY4), + TXGBE_PTT(0xC4, IP, IPV6, IPIP, IPV4, TCP, PAY4), + TXGBE_PTT(0xC5, IP, IPV6, IPIP, IPV4, SCTP, PAY4), + TXGBE_UKN(0xC6), + TXGBE_UKN(0xC7), + TXGBE_UKN(0xC8), + TXGBE_PTT(0xC9, IP, IPV6, IPIP, FGV6, NONE, PAY3), + TXGBE_PTT(0xCA, IP, IPV6, IPIP, IPV6, 
NONE, PAY3), + TXGBE_PTT(0xCB, IP, IPV6, IPIP, IPV6, UDP, PAY4), + TXGBE_PTT(0xCC, IP, IPV6, IPIP, IPV6, TCP, PAY4), + TXGBE_PTT(0xCD, IP, IPV6, IPIP, IPV6, SCTP, PAY4), + TXGBE_UKN(0xCE), + TXGBE_UKN(0xCF), + + /* IPv6 --> GRE/NAT -> NONE/IPv4/IPv6 */ + TXGBE_PTT(0xD0, IP, IPV6, IG, NONE, NONE, PAY3), + TXGBE_PTT(0xD1, IP, IPV6, IG, FGV4, NONE, PAY3), + TXGBE_PTT(0xD2, IP, IPV6, IG, IPV4, NONE, PAY3), + TXGBE_PTT(0xD3, IP, IPV6, IG, IPV4, UDP, PAY4), + TXGBE_PTT(0xD4, IP, IPV6, IG, IPV4, TCP, PAY4), + TXGBE_PTT(0xD5, IP, IPV6, IG, IPV4, SCTP, PAY4), + TXGBE_UKN(0xD6), + TXGBE_UKN(0xD7), + TXGBE_UKN(0xD8), + TXGBE_PTT(0xD9, IP, IPV6, IG, FGV6, NONE, PAY3), + TXGBE_PTT(0xDA, IP, IPV6, IG, IPV6, NONE, PAY3), + TXGBE_PTT(0xDB, IP, IPV6, IG, IPV6, UDP, PAY4), + TXGBE_PTT(0xDC, IP, IPV6, IG, IPV6, TCP, PAY4), + TXGBE_PTT(0xDD, IP, IPV6, IG, IPV6, SCTP, PAY4), + TXGBE_UKN(0xDE), + TXGBE_UKN(0xDF), + + /* IPv6 --> GRE/NAT -> MAC -> NONE/IPv4/IPv6 */ + TXGBE_PTT(0xE0, IP, IPV6, IGM, NONE, NONE, PAY3), + TXGBE_PTT(0xE1, IP, IPV6, IGM, FGV4, NONE, PAY3), + TXGBE_PTT(0xE2, IP, IPV6, IGM, IPV4, NONE, PAY3), + TXGBE_PTT(0xE3, IP, IPV6, IGM, IPV4, UDP, PAY4), + TXGBE_PTT(0xE4, IP, IPV6, IGM, IPV4, TCP, PAY4), + TXGBE_PTT(0xE5, IP, IPV6, IGM, IPV4, SCTP, PAY4), + TXGBE_UKN(0xE6), + TXGBE_UKN(0xE7), + TXGBE_UKN(0xE8), + TXGBE_PTT(0xE9, IP, IPV6, IGM, FGV6, NONE, PAY3), + TXGBE_PTT(0xEA, IP, IPV6, IGM, IPV6, NONE, PAY3), + TXGBE_PTT(0xEB, IP, IPV6, IGM, IPV6, UDP, PAY4), + TXGBE_PTT(0xEC, IP, IPV6, IGM, IPV6, TCP, PAY4), + TXGBE_PTT(0xED, IP, IPV6, IGM, IPV6, SCTP, PAY4), + TXGBE_UKN(0xEE), + TXGBE_UKN(0xEF), + + /* IPv6 --> GRE/NAT -> MAC--> NONE/IPv */ + TXGBE_PTT(0xF0, IP, IPV6, IGMV, NONE, NONE, PAY3), + TXGBE_PTT(0xF1, IP, IPV6, IGMV, FGV4, NONE, PAY3), + TXGBE_PTT(0xF2, IP, IPV6, IGMV, IPV4, NONE, PAY3), + TXGBE_PTT(0xF3, IP, IPV6, IGMV, IPV4, UDP, PAY4), + TXGBE_PTT(0xF4, IP, IPV6, IGMV, IPV4, TCP, PAY4), + TXGBE_PTT(0xF5, IP, IPV6, IGMV, IPV4, SCTP, PAY4), + 
TXGBE_UKN(0xF6),
+	TXGBE_UKN(0xF7),
+	TXGBE_UKN(0xF8),
+	TXGBE_PTT(0xF9, IP, IPV6, IGMV, FGV6, NONE, PAY3),
+	TXGBE_PTT(0xFA, IP, IPV6, IGMV, IPV6, NONE, PAY3),
+	TXGBE_PTT(0xFB, IP, IPV6, IGMV, IPV6, UDP, PAY4),
+	TXGBE_PTT(0xFC, IP, IPV6, IGMV, IPV6, TCP, PAY4),
+	TXGBE_PTT(0xFD, IP, IPV6, IGMV, IPV6, SCTP, PAY4),
+	TXGBE_UKN(0xFE),
+	TXGBE_UKN(0xFF),
+};
+
+void txgbe_init_mac_link_ops(struct txgbe_hw *hw)
+{
+	struct txgbe_mac_info *mac = &hw->mac;
+
+	DEBUGFUNC("\n");
+
+	/*
+	 * The multispeed-fiber laser control routines are used for SFP+
+	 * fiber regardless of whether MNG is enabled, so install the
+	 * same ops unconditionally.
+	 */
+	mac->ops.disable_tx_laser =
+				txgbe_disable_tx_laser_multispeed_fiber;
+	mac->ops.enable_tx_laser =
+				txgbe_enable_tx_laser_multispeed_fiber;
+	mac->ops.flap_tx_laser = txgbe_flap_tx_laser_multispeed_fiber;
+
+	if (hw->phy.multispeed_fiber) {
+		/* Set up dual speed SFP+ support */
+		mac->ops.setup_link = txgbe_setup_mac_link_multispeed_fiber;
+		mac->ops.setup_mac_link = txgbe_setup_mac_link;
+		mac->ops.set_rate_select_speed =
+					txgbe_set_hard_rate_select_speed;
+	} else {
+		mac->ops.setup_link = txgbe_setup_mac_link;
+		mac->ops.set_rate_select_speed =
+					txgbe_set_hard_rate_select_speed;
+	}
+}
+
+/**
+ * txgbe_init_phy_ops - PHY/SFP specific init
+ * @hw: pointer to hardware structure
+ *
+ * Initialize any function pointers that were not able to be
+ * set during init_shared_code because the PHY/SFP type was
+ * not known. Perform the SFP init if necessary.
+ * + **/ +s32 txgbe_init_phy_ops(struct txgbe_hw *hw) +{ + struct txgbe_mac_info *mac = &hw->mac; + s32 ret_val = 0; + + DEBUGFUNC("\n"); + + txgbe_init_i2c(hw); + /* Identify the PHY or SFP module */ + ret_val = TCALL(hw, phy.ops.identify); + if (ret_val == TXGBE_ERR_SFP_NOT_SUPPORTED) + goto init_phy_ops_out; + + /* Setup function pointers based on detected SFP module and speeds */ + txgbe_init_mac_link_ops(hw); + if (hw->phy.sfp_type != txgbe_sfp_type_unknown) + hw->phy.ops.reset = NULL; + + /* If copper media, overwrite with copper function pointers */ + if (TCALL(hw, mac.ops.get_media_type) == txgbe_media_type_copper) { + hw->phy.type = txgbe_phy_xaui; + if ((hw->subsystem_id & 0xF0) != TXGBE_ID_SFI_XAUI) { + mac->ops.setup_link = txgbe_setup_copper_link; + mac->ops.get_link_capabilities = + txgbe_get_copper_link_capabilities; + } + } + +init_phy_ops_out: + return ret_val; +} + + +/** + * txgbe_init_ops - Inits func ptrs and MAC type + * @hw: pointer to hardware structure + * + * Initialize the function pointers and assign the MAC type for sapphire. + * Does not touch the hardware. 
+ **/ + +s32 txgbe_init_ops(struct txgbe_hw *hw) +{ + struct txgbe_mac_info *mac = &hw->mac; + struct txgbe_phy_info *phy = &hw->phy; + struct txgbe_eeprom_info *eeprom = &hw->eeprom; + struct txgbe_flash_info *flash = &hw->flash; + s32 ret_val = 0; + + DEBUGFUNC("\n"); + + /* PHY */ + phy->ops.reset = txgbe_reset_phy; + phy->ops.read_reg = txgbe_read_phy_reg; + phy->ops.write_reg = txgbe_write_phy_reg; + phy->ops.read_reg_mdi = txgbe_read_phy_reg_mdi; + phy->ops.write_reg_mdi = txgbe_write_phy_reg_mdi; + phy->ops.setup_link = txgbe_setup_phy_link; + phy->ops.setup_link_speed = txgbe_setup_phy_link_speed; + phy->ops.read_i2c_byte = txgbe_read_i2c_byte; + phy->ops.write_i2c_byte = txgbe_write_i2c_byte; + phy->ops.read_i2c_sff8472 = txgbe_read_i2c_sff8472; + phy->ops.read_i2c_eeprom = txgbe_read_i2c_eeprom; + phy->ops.write_i2c_eeprom = txgbe_write_i2c_eeprom; + phy->ops.identify_sfp = txgbe_identify_module; + phy->sfp_type = txgbe_sfp_type_unknown; + phy->ops.check_overtemp = txgbe_tn_check_overtemp; + phy->ops.identify = txgbe_identify_phy; + phy->ops.init = txgbe_init_phy_ops; + + /* MAC */ + mac->ops.init_hw = txgbe_init_hw; + mac->ops.clear_hw_cntrs = txgbe_clear_hw_cntrs; + mac->ops.get_mac_addr = txgbe_get_mac_addr; + mac->ops.stop_adapter = txgbe_stop_adapter; + mac->ops.get_bus_info = txgbe_get_bus_info; + mac->ops.set_lan_id = txgbe_set_lan_id_multi_port_pcie; + mac->ops.acquire_swfw_sync = txgbe_acquire_swfw_sync; + mac->ops.release_swfw_sync = txgbe_release_swfw_sync; + mac->ops.reset_hw = txgbe_reset_hw; + mac->ops.get_media_type = txgbe_get_media_type; + mac->ops.disable_sec_rx_path = txgbe_disable_sec_rx_path; + mac->ops.enable_sec_rx_path = txgbe_enable_sec_rx_path; + mac->ops.enable_rx_dma = txgbe_enable_rx_dma; + mac->ops.start_hw = txgbe_start_hw; + mac->ops.get_san_mac_addr = txgbe_get_san_mac_addr; + mac->ops.set_san_mac_addr = txgbe_set_san_mac_addr; + mac->ops.get_device_caps = txgbe_get_device_caps; + mac->ops.get_wwn_prefix = 
txgbe_get_wwn_prefix; + mac->ops.setup_eee = txgbe_setup_eee; + + /* LEDs */ + mac->ops.led_on = txgbe_led_on; + mac->ops.led_off = txgbe_led_off; + + /* RAR, Multicast, VLAN */ + mac->ops.set_rar = txgbe_set_rar; + mac->ops.clear_rar = txgbe_clear_rar; + mac->ops.init_rx_addrs = txgbe_init_rx_addrs; + mac->ops.update_uc_addr_list = txgbe_update_uc_addr_list; + mac->ops.update_mc_addr_list = txgbe_update_mc_addr_list; + mac->ops.enable_mc = txgbe_enable_mc; + mac->ops.disable_mc = txgbe_disable_mc; + mac->ops.enable_rx = txgbe_enable_rx; + mac->ops.disable_rx = txgbe_disable_rx; + mac->ops.set_vmdq_san_mac = txgbe_set_vmdq_san_mac; + mac->ops.insert_mac_addr = txgbe_insert_mac_addr; + mac->rar_highwater = 1; + mac->ops.set_vfta = txgbe_set_vfta; + mac->ops.set_vlvf = txgbe_set_vlvf; + mac->ops.clear_vfta = txgbe_clear_vfta; + mac->ops.init_uta_tables = txgbe_init_uta_tables; + mac->ops.set_mac_anti_spoofing = txgbe_set_mac_anti_spoofing; + mac->ops.set_vlan_anti_spoofing = txgbe_set_vlan_anti_spoofing; + mac->ops.set_ethertype_anti_spoofing = + txgbe_set_ethertype_anti_spoofing; + + /* Flow Control */ + mac->ops.fc_enable = txgbe_fc_enable; + mac->ops.setup_fc = txgbe_setup_fc; + + /* Link */ + mac->ops.get_link_capabilities = txgbe_get_link_capabilities; + mac->ops.check_link = txgbe_check_mac_link; + mac->ops.setup_rxpba = txgbe_set_rxpba; + mac->mcft_size = TXGBE_SP_MC_TBL_SIZE; + mac->vft_size = TXGBE_SP_VFT_TBL_SIZE; + mac->num_rar_entries = TXGBE_SP_RAR_ENTRIES; + mac->rx_pb_size = TXGBE_SP_RX_PB_SIZE; + mac->max_rx_queues = TXGBE_SP_MAX_RX_QUEUES; + mac->max_tx_queues = TXGBE_SP_MAX_TX_QUEUES; + mac->max_msix_vectors = txgbe_get_pcie_msix_count(hw); + + mac->arc_subsystem_valid = (rd32(hw, TXGBE_MIS_ST) & + TXGBE_MIS_ST_MNG_INIT_DN) ? 
true : false; + + hw->mbx.ops.init_params = txgbe_init_mbx_params_pf; + + /* EEPROM */ + eeprom->ops.init_params = txgbe_init_eeprom_params; + eeprom->ops.calc_checksum = txgbe_calc_eeprom_checksum; + eeprom->ops.read = txgbe_read_ee_hostif; + eeprom->ops.read_buffer = txgbe_read_ee_hostif_buffer; + eeprom->ops.write = txgbe_write_ee_hostif; + eeprom->ops.write_buffer = txgbe_write_ee_hostif_buffer; + eeprom->ops.update_checksum = txgbe_update_eeprom_checksum; + eeprom->ops.validate_checksum = txgbe_validate_eeprom_checksum; + + /* FLASH */ + flash->ops.init_params = txgbe_init_flash_params; + flash->ops.read_buffer = txgbe_read_flash_buffer; + flash->ops.write_buffer = txgbe_write_flash_buffer; + + /* Manageability interface */ + mac->ops.set_fw_drv_ver = txgbe_set_fw_drv_ver; + + mac->ops.get_thermal_sensor_data = + txgbe_get_thermal_sensor_data; + mac->ops.init_thermal_sensor_thresh = + txgbe_init_thermal_sensor_thresh; + + return ret_val; +} + +/** + * txgbe_get_link_capabilities - Determines link capabilities + * @hw: pointer to hardware structure + * @speed: pointer to link speed + * @autoneg: true when autoneg or autotry is enabled + * + * Determines the link capabilities by reading the AUTOC register. + **/ +s32 txgbe_get_link_capabilities(struct txgbe_hw *hw, + u32 *speed, + bool *autoneg) +{ + s32 status = 0; + u32 sr_pcs_ctl, sr_pma_mmd_ctl1, sr_an_mmd_ctl; + u32 sr_an_mmd_adv_reg2; + + DEBUGFUNC("\n"); + + /* Check if 1G SFP module. 
*/ + if (hw->phy.sfp_type == txgbe_sfp_type_1g_cu_core0 || + hw->phy.sfp_type == txgbe_sfp_type_1g_cu_core1 || + hw->phy.sfp_type == txgbe_sfp_type_1g_lx_core0 || + hw->phy.sfp_type == txgbe_sfp_type_1g_lx_core1 || + hw->phy.sfp_type == txgbe_sfp_type_1g_sx_core0 || + hw->phy.sfp_type == txgbe_sfp_type_1g_sx_core1) { + *speed = TXGBE_LINK_SPEED_1GB_FULL; + *autoneg = false; + } else if (hw->phy.multispeed_fiber) { + *speed = TXGBE_LINK_SPEED_10GB_FULL | + TXGBE_LINK_SPEED_1GB_FULL; + *autoneg = true; + } + /* SFP */ + else if (txgbe_get_media_type(hw) == txgbe_media_type_fiber) { + *speed = TXGBE_LINK_SPEED_10GB_FULL; + *autoneg = false; + } + /* XAUI */ + else if ((txgbe_get_media_type(hw) == txgbe_media_type_copper) && + ((hw->subsystem_id & 0xF0) == TXGBE_ID_XAUI || + (hw->subsystem_id & 0xF0) == TXGBE_ID_SFI_XAUI)) { + *speed = TXGBE_LINK_SPEED_10GB_FULL; + *autoneg = false; + hw->phy.link_mode = TXGBE_PHYSICAL_LAYER_10GBASE_T; + } + /* SGMII */ + else if ((hw->subsystem_id & 0xF0) == TXGBE_ID_SGMII) { + *speed = TXGBE_LINK_SPEED_1GB_FULL | + TXGBE_LINK_SPEED_100_FULL | + TXGBE_LINK_SPEED_10_FULL; + *autoneg = false; + hw->phy.link_mode = TXGBE_PHYSICAL_LAYER_1000BASE_T | + TXGBE_PHYSICAL_LAYER_100BASE_TX; + /* MAC XAUI */ + } else if ((hw->subsystem_id & 0xF0) == TXGBE_ID_MAC_XAUI) { + *speed = TXGBE_LINK_SPEED_10GB_FULL; + *autoneg = false; + hw->phy.link_mode = TXGBE_PHYSICAL_LAYER_10GBASE_KX4; + /* MAC SGMII */ + } else if ((hw->subsystem_id & 0xF0) == TXGBE_ID_MAC_SGMII) { + *speed = TXGBE_LINK_SPEED_1GB_FULL; + *autoneg = false; + hw->phy.link_mode = TXGBE_PHYSICAL_LAYER_1000BASE_KX; + } + /* KR KX KX4 */ + else { + /* + * Determine link capabilities based on the stored value, + * which represents EEPROM defaults. If value has not + * been stored, use the current register values. 
+ */ + if (hw->mac.orig_link_settings_stored) { + sr_pcs_ctl = hw->mac.orig_sr_pcs_ctl2; + sr_pma_mmd_ctl1 = hw->mac.orig_sr_pma_mmd_ctl1; + sr_an_mmd_ctl = hw->mac.orig_sr_an_mmd_ctl; + sr_an_mmd_adv_reg2 = hw->mac.orig_sr_an_mmd_adv_reg2; + } else { + sr_pcs_ctl = txgbe_rd32_epcs(hw, TXGBE_SR_PCS_CTL2); + sr_pma_mmd_ctl1 = txgbe_rd32_epcs(hw, + TXGBE_SR_PMA_MMD_CTL1); + sr_an_mmd_ctl = txgbe_rd32_epcs(hw, + TXGBE_SR_AN_MMD_CTL); + sr_an_mmd_adv_reg2 = txgbe_rd32_epcs(hw, + TXGBE_SR_AN_MMD_ADV_REG2); + } + + if ((sr_pcs_ctl & TXGBE_SR_PCS_CTL2_PCS_TYPE_SEL_MASK) == + TXGBE_SR_PCS_CTL2_PCS_TYPE_SEL_X && + (sr_pma_mmd_ctl1 & TXGBE_SR_PMA_MMD_CTL1_SPEED_SEL_MASK) + == TXGBE_SR_PMA_MMD_CTL1_SPEED_SEL_1G && + (sr_an_mmd_ctl & TXGBE_SR_AN_MMD_CTL_ENABLE) == 0) { + /* 1G or KX - no backplane auto-negotiation */ + *speed = TXGBE_LINK_SPEED_1GB_FULL; + *autoneg = false; + hw->phy.link_mode = TXGBE_PHYSICAL_LAYER_1000BASE_KX; + } else if ((sr_pcs_ctl & TXGBE_SR_PCS_CTL2_PCS_TYPE_SEL_MASK) == + TXGBE_SR_PCS_CTL2_PCS_TYPE_SEL_X && + (sr_pma_mmd_ctl1 & TXGBE_SR_PMA_MMD_CTL1_SPEED_SEL_MASK) + == TXGBE_SR_PMA_MMD_CTL1_SPEED_SEL_10G && + (sr_an_mmd_ctl & TXGBE_SR_AN_MMD_CTL_ENABLE) == 0) { + *speed = TXGBE_LINK_SPEED_10GB_FULL; + *autoneg = false; + hw->phy.link_mode = TXGBE_PHYSICAL_LAYER_10GBASE_KX4; + } else if ((sr_pcs_ctl & TXGBE_SR_PCS_CTL2_PCS_TYPE_SEL_MASK) == + TXGBE_SR_PCS_CTL2_PCS_TYPE_SEL_R && + (sr_an_mmd_ctl & TXGBE_SR_AN_MMD_CTL_ENABLE) == 0) { + /* 10 GbE serial link (KR -no backplane auto-negotiation) */ + *speed = TXGBE_LINK_SPEED_10GB_FULL; + *autoneg = false; + hw->phy.link_mode = TXGBE_PHYSICAL_LAYER_10GBASE_KR; + } else if ((sr_an_mmd_ctl & TXGBE_SR_AN_MMD_CTL_ENABLE)) { + /* KX/KX4/KR backplane auto-negotiation enable */ + *speed = TXGBE_LINK_SPEED_UNKNOWN; + if (sr_an_mmd_adv_reg2 & + TXGBE_SR_AN_MMD_ADV_REG2_BP_TYPE_KR) + *speed |= TXGBE_LINK_SPEED_10GB_FULL; + if (sr_an_mmd_adv_reg2 & + TXGBE_SR_AN_MMD_ADV_REG2_BP_TYPE_KX4) + *speed |= 
TXGBE_LINK_SPEED_10GB_FULL; + if (sr_an_mmd_adv_reg2 & + TXGBE_SR_AN_MMD_ADV_REG2_BP_TYPE_KX) + *speed |= TXGBE_LINK_SPEED_1GB_FULL; + *autoneg = true; + hw->phy.link_mode = TXGBE_PHYSICAL_LAYER_10GBASE_KR | + TXGBE_PHYSICAL_LAYER_10GBASE_KX4 | + TXGBE_PHYSICAL_LAYER_1000BASE_KX; + } else { + status = TXGBE_ERR_LINK_SETUP; + goto out; + } + } + +out: + return status; +} + +/** + * txgbe_get_media_type - Get media type + * @hw: pointer to hardware structure + * + * Returns the media type (fiber, copper, backplane) + **/ +enum txgbe_media_type txgbe_get_media_type(struct txgbe_hw *hw) +{ + enum txgbe_media_type media_type; + u8 device_type = hw->subsystem_id & 0xF0; + + DEBUGFUNC("\n"); + + /* Detect if there is a copper PHY attached. */ + switch (hw->phy.type) { + case txgbe_phy_cu_unknown: + case txgbe_phy_tn: + media_type = txgbe_media_type_copper; + goto out; + default: + break; + } + + switch (device_type) { + case TXGBE_ID_MAC_XAUI: + case TXGBE_ID_MAC_SGMII: + case TXGBE_ID_KR_KX_KX4: + /* Default device ID is mezzanine card KX/KX4 */ + media_type = txgbe_media_type_backplane; + break; + case TXGBE_ID_SFP: + media_type = txgbe_media_type_fiber; + break; + case TXGBE_ID_XAUI: + case TXGBE_ID_SGMII: + media_type = txgbe_media_type_copper; + break; + case TXGBE_ID_SFI_XAUI: + if (hw->bus.lan_id == 0) + media_type = txgbe_media_type_fiber; + else + media_type = txgbe_media_type_copper; + break; + default: + media_type = txgbe_media_type_unknown; + break; + } +out: + return media_type; +} + +/** + * txgbe_stop_mac_link_on_d3 - Disables link on D3 + * @hw: pointer to hardware structure + * + * Disables link during D3 power down sequence. + * + **/ +void txgbe_stop_mac_link_on_d3(struct txgbe_hw *hw) +{ + /* fix autoc2 */ + UNREFERENCED_PARAMETER(hw); + return; +} + + +/** + * txgbe_disable_tx_laser_multispeed_fiber - Disable Tx laser + * @hw: pointer to hardware structure + * + * The base drivers may require better control over SFP+ module + * PHY states. 
This includes selectively shutting down the Tx + laser on the PHY, effectively halting physical link. + **/ +void txgbe_disable_tx_laser_multispeed_fiber(struct txgbe_hw *hw) +{ + u32 esdp_reg = rd32(hw, TXGBE_GPIO_DR); + + /* Blocked by MNG FW so bail */ + if (txgbe_check_reset_blocked(hw)) + return; + + /* Disable Tx laser; allow 100us to go dark per spec */ + esdp_reg |= TXGBE_GPIO_DR_1 | TXGBE_GPIO_DR_0; + wr32(hw, TXGBE_GPIO_DR, esdp_reg); + TXGBE_WRITE_FLUSH(hw); + usec_delay(100); +} + +/** + * txgbe_enable_tx_laser_multispeed_fiber - Enable Tx laser + * @hw: pointer to hardware structure + * + * The base drivers may require better control over SFP+ module + * PHY states. This includes selectively turning on the Tx + * laser on the PHY, effectively starting physical link. + **/ +void txgbe_enable_tx_laser_multispeed_fiber(struct txgbe_hw *hw) +{ + /* Enable Tx laser; allow 100ms to light up */ + wr32m(hw, TXGBE_GPIO_DR, + TXGBE_GPIO_DR_0 | TXGBE_GPIO_DR_1, 0); + TXGBE_WRITE_FLUSH(hw); + msec_delay(100); +} + +/** + * txgbe_flap_tx_laser_multispeed_fiber - Flap Tx laser + * @hw: pointer to hardware structure + * + * When the driver changes the link speeds that it can support, + * it sets autotry_restart to true to indicate that we need to + * initiate a new autotry session with the link partner. To do + * so, we set the speed then disable and re-enable the Tx laser, to + * alert the link partner that it also needs to restart autotry on its + * end. This is consistent with true clause 37 autoneg, which also + * involves a loss of signal.
+ **/ +void txgbe_flap_tx_laser_multispeed_fiber(struct txgbe_hw *hw) +{ + DEBUGFUNC("\n"); + + /* Blocked by MNG FW so bail */ + if (txgbe_check_reset_blocked(hw)) + return; + + if (hw->mac.autotry_restart) { + txgbe_disable_tx_laser_multispeed_fiber(hw); + txgbe_enable_tx_laser_multispeed_fiber(hw); + hw->mac.autotry_restart = false; + } +} + +/** + * txgbe_set_hard_rate_select_speed - Set module link speed + * @hw: pointer to hardware structure + * @speed: link speed to set + * + * Set module link speed via RS0/RS1 rate select pins. + */ +void txgbe_set_hard_rate_select_speed(struct txgbe_hw *hw, + u32 speed) +{ + u32 esdp_reg = rd32(hw, TXGBE_GPIO_DR); + + switch (speed) { + case TXGBE_LINK_SPEED_10GB_FULL: + esdp_reg |= TXGBE_GPIO_DR_5 | TXGBE_GPIO_DR_4; + break; + case TXGBE_LINK_SPEED_1GB_FULL: + esdp_reg &= ~(TXGBE_GPIO_DR_5 | TXGBE_GPIO_DR_4); + break; + default: + DEBUGOUT("Invalid fixed module speed\n"); + return; + } + + wr32(hw, TXGBE_GPIO_DDR, + TXGBE_GPIO_DDR_5 | TXGBE_GPIO_DDR_4 | + TXGBE_GPIO_DDR_1 | TXGBE_GPIO_DDR_0); + + wr32(hw, TXGBE_GPIO_DR, esdp_reg); + + TXGBE_WRITE_FLUSH(hw); +} + +s32 txgbe_enable_rx_adapter(struct txgbe_hw *hw) +{ + u32 value; + u32 i; + + value = txgbe_rd32_epcs(hw, TXGBE_PHY_RX_EQ_CTL); + value |= 1 << 12; + txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL, value); + + /* Poll for the adapt-ack bit, but bound the wait so a wedged + * PHY cannot hang the caller forever. + */ + for (i = 0; i < 100; i++) { + value = txgbe_rd32_epcs(hw, TXGBE_PHY_RX_AD_ACK); + if (value >> 11) + break; + msleep(1); + } + + value = txgbe_rd32_epcs(hw, TXGBE_PHY_RX_EQ_CTL); + value &= ~(1 << 12); + txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL, value); + + return 0; +} + +s32 txgbe_set_sgmii_an37_ability(struct txgbe_hw *hw) +{ + u32 value; + + txgbe_wr32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1, 0x3002); + /* for sgmii + external phy, set to 0x0105 (mac sgmii mode) */ + if ((hw->subsystem_id & 0xF0) == TXGBE_ID_SGMII) { + txgbe_wr32_epcs(hw, TXGBE_SR_MII_MMD_AN_CTL, 0x0105); + } + /* for sgmii direct link, set to 0x010c (phy sgmii mode) */ + if ((hw->subsystem_id & 0xF0) == TXGBE_ID_MAC_SGMII) { +
txgbe_wr32_epcs(hw, TXGBE_SR_MII_MMD_AN_CTL, 0x010c); + } + txgbe_wr32_epcs(hw, TXGBE_SR_MII_MMD_DIGI_CTL, 0x0200); + value = txgbe_rd32_epcs(hw, TXGBE_SR_MII_MMD_CTL); + value = (value & ~0x1200) | (0x1 << 12) | (0x1 << 9); + txgbe_wr32_epcs(hw, TXGBE_SR_MII_MMD_CTL, value); + return 0; +} + + +s32 txgbe_set_link_to_kr(struct txgbe_hw *hw, bool autoneg) +{ + u32 i; + s32 status = 0; + u32 value = 0; + struct txgbe_adapter *adapter = hw->back; + + /* 1. Wait xpcs power-up good */ + for (i = 0; i < TXGBE_XPCS_POWER_GOOD_MAX_POLLING_TIME; i++) { + if ((txgbe_rd32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS) & + TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_MASK) == + TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_POWER_GOOD) + break; + msleep(10); + } + if (i == TXGBE_XPCS_POWER_GOOD_MAX_POLLING_TIME) { + status = TXGBE_ERR_XPCS_POWER_UP_FAILED; + goto out; + } + e_dev_info("It is set to kr.\n"); + + txgbe_wr32_epcs(hw, 0x78001, 0x7); + txgbe_wr32_epcs(hw, 0x18035, 0x00FC); + txgbe_wr32_epcs(hw, 0x18055, 0x00FC); + + if (1) { + /* 2. 
Disable xpcs AN-73 */ + txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x3000); + + txgbe_wr32_epcs(hw, 0x78003, 0x1); + if (adapter->backplane_an != 1) { + txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x0000); + txgbe_wr32_epcs(hw, 0x78003, 0x0); + } + + if (KR_SET == 1 || adapter->ffe_set == TXGBE_BP_M_KR) { + e_dev_info("Set KR TX_EQ MAIN:%d PRE:%d POST:%d\n", + adapter->ffe_main, adapter->ffe_pre, adapter->ffe_post); + value = (0x1804 & ~0x3F3F); + value |= adapter->ffe_main << 8 | adapter->ffe_pre; + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value); + + value = (0x50 & ~0x7F) | (1 << 6) | adapter->ffe_post; + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value); + } + + if (KR_AN73_PRESET == 1) { + txgbe_wr32_epcs(hw, 0x18037, 0x80); + } + + if (KR_POLLING == 1) { + txgbe_wr32_epcs(hw, 0x18006, 0xffff); + txgbe_wr32_epcs(hw, 0x18008, 0xA697); + } + + /* 3. Set VR_XS_PMA_Gen5_12G_MPLLA_CTRL3 Register */ + /* Bit[10:0](MPLLA_BANDWIDTH) = 11'd123 (default: 11'd16) */ + txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL3, + TXGBE_PHY_MPLLA_CTL3_MULTIPLIER_BW_10GBASER_KR); + + /* 4. Set VR_XS_PMA_Gen5_12G_MISC_CTRL0 Register */ + /* Bit[12:8](RX_VREF_CTRL) = 5'hF (default: 5'h11) */ + txgbe_wr32_epcs(hw, TXGBE_PHY_MISC_CTL0, + 0xCF00); + + /* 5. Set VR_XS_PMA_Gen5_12G_RX_EQ_CTRL0 Register */ + /* Bit[15:8](VGA1/2_GAIN_0) = 8'h77, Bit[7:5](CTLE_POLE_0) = 3'h2 + * Bit[4:0](CTLE_BOOST_0) = 4'hA + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL0, + 0x774A); + + /* 6. Set VR_MII_Gen5_12G_RX_GENCTRL3 Register */ + /* Bit[2:0](LOS_TRSHLD_0) = 3'h4 (default: 3) */ + txgbe_wr32_epcs(hw, TXGBE_PHY_RX_GEN_CTL3, + 0x0004); + /* 7.
Initialize the mode by setting VR XS or PCS MMD Digital */ + /* Control1 Register Bit[15](VR_RST) */ + txgbe_wr32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1, + 0xA000); + /* wait phy initialization done */ + for (i = 0; i < TXGBE_PHY_INIT_DONE_POLLING_TIME; i++) { + if ((txgbe_rd32_epcs(hw, + TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1) & + TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1_VR_RST) == 0) + break; + msleep(100); + } + if (i == TXGBE_PHY_INIT_DONE_POLLING_TIME) { + status = TXGBE_ERR_PHY_INIT_NOT_DONE; + goto out; + } + } else { + txgbe_wr32_epcs(hw, TXGBE_VR_AN_KR_MODE_CL, + 0x1); + } +out: + return status; +} + +s32 txgbe_set_link_to_kx4(struct txgbe_hw *hw, bool autoneg) +{ + u32 i; + s32 status = 0; + u32 value; + struct txgbe_adapter *adapter = hw->back; + + /* check link status, if already set, skip setting it again */ + if (hw->link_status == TXGBE_LINK_STATUS_KX4) { + goto out; + } + e_dev_info("It is set to kx4.\n"); + + /* 1. Wait xpcs power-up good */ + for (i = 0; i < TXGBE_XPCS_POWER_GOOD_MAX_POLLING_TIME; i++) { + if ((txgbe_rd32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS) & + TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_MASK) == + TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_POWER_GOOD) + break; + msleep(10); + } + if (i == TXGBE_XPCS_POWER_GOOD_MAX_POLLING_TIME) { + status = TXGBE_ERR_XPCS_POWER_UP_FAILED; + goto out; + } + + wr32m(hw, TXGBE_MAC_TX_CFG, TXGBE_MAC_TX_CFG_TE, + ~TXGBE_MAC_TX_CFG_TE); + + /* 2. 
Disable xpcs AN-73 */ + if (!autoneg) + txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x0); + else + txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x3000); + + if (hw->revision_id == TXGBE_SP_MPW) { + /* Disable PHY MPLLA */ + txgbe_wr32_ephy(hw, 0x4, 0x2501); + /* Reset rx lane0-3 clock */ + txgbe_wr32_ephy(hw, 0x1005, 0x4001); + txgbe_wr32_ephy(hw, 0x1105, 0x4001); + txgbe_wr32_ephy(hw, 0x1205, 0x4001); + txgbe_wr32_ephy(hw, 0x1305, 0x4001); + } else { + /* Disable PHY MPLLA for eth mode change(after ECO) */ + txgbe_wr32_ephy(hw, 0x4, 0x250A); + TXGBE_WRITE_FLUSH(hw); + msleep(1); + + /* Set the eth change_mode bit first in mis_rst register + * for corresponding LAN port + */ + if (hw->bus.lan_id == 0) + wr32(hw, TXGBE_MIS_RST, + TXGBE_MIS_RST_LAN0_CHG_ETH_MODE); + else + wr32(hw, TXGBE_MIS_RST, + TXGBE_MIS_RST_LAN1_CHG_ETH_MODE); + } + + /* Set SR PCS Control2 Register Bits[1:0] = 2'b01 PCS_TYPE_SEL: non KR */ + txgbe_wr32_epcs(hw, TXGBE_SR_PCS_CTL2, + TXGBE_SR_PCS_CTL2_PCS_TYPE_SEL_X); + /* Set SR PMA MMD Control1 Register Bit[13] = 1'b1 SS13: 10G speed */ + txgbe_wr32_epcs(hw, TXGBE_SR_PMA_MMD_CTL1, + TXGBE_SR_PMA_MMD_CTL1_SPEED_SEL_10G); + + value = (0xf5f0 & ~0x7F0) | (0x5 << 8) | (0x7 << 5) | 0xF0; + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_GENCTRL1, value); + + if ((hw->subsystem_id & 0xF0) == TXGBE_ID_MAC_XAUI) + txgbe_wr32_epcs(hw, TXGBE_PHY_MISC_CTL0, 0xCF00); + else + txgbe_wr32_epcs(hw, TXGBE_PHY_MISC_CTL0, 0x4F00); + + if (KX4_SET == 1 || adapter->ffe_set) { + e_dev_info("Set KX4 TX_EQ MAIN:%d PRE:%d POST:%d\n", + adapter->ffe_main, adapter->ffe_pre, adapter->ffe_post); + value = (0x1804 & ~0x3F3F); + value |= adapter->ffe_main << 8 | adapter->ffe_pre; + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value); + + value = (0x50 & ~0x7F) | (1 << 6)| adapter->ffe_post; + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value); + } else { + value = (0x1804 & ~0x3F3F); + value |= 40 << 8 ; + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value); + + value = (0x50 & ~0x7F) | (1 << 6); + 
txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value); + + } + for (i = 0; i < 4; i++) { + if (i == 0) + value = (0x45 & ~0xFFFF) | (0x7 << 12) | (0x7 << 8) | 0x6; + else + value = (0xff06 & ~0xFFFF) | (0x7 << 12) | (0x7 << 8) | 0x6; + txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL0 + i, value); + } + + value = 0x0 & ~0x7777; + txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_ATT_LVL0, value); + + txgbe_wr32_epcs(hw, TXGBE_PHY_DFE_TAP_CTL0, 0x0); + + value = (0x6db & ~0xFFF) | (0x1 << 9) | (0x1 << 6) | (0x1 << 3) | 0x1; + txgbe_wr32_epcs(hw, TXGBE_PHY_RX_GEN_CTL3, value); + + /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY MPLLA */ + /* Control 0 Register Bit[7:0] = 8'd40 MPLLA_MULTIPLIER */ + txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL0, + TXGBE_PHY_MPLLA_CTL0_MULTIPLIER_OTHER); + /* Set VR XS, PMA or MII Synopsys Enterprise Gen5 12G PHY MPLLA */ + /* Control 3 Register Bit[10:0] = 11'd86 MPLLA_BANDWIDTH */ + txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL3, + TXGBE_PHY_MPLLA_CTL3_MULTIPLIER_BW_OTHER); + + /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO */ + /* Calibration Load 0 Register Bit[12:0] = 13'd1360 VCO_LD_VAL_0 */ + txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD0, + TXGBE_PHY_VCO_CAL_LD0_OTHER); + + /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO */ + /* Calibration Load 1 Register Bit[12:0] = 13'd1360 VCO_LD_VAL_1 */ + txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD1, + TXGBE_PHY_VCO_CAL_LD0_OTHER); + + /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO */ + /* Calibration Load 2 Register Bit[12:0] = 13'd1360 VCO_LD_VAL_2 */ + txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD2, + TXGBE_PHY_VCO_CAL_LD0_OTHER); + /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO */ + /* Calibration Load 3 Register Bit[12:0] = 13'd1360 VCO_LD_VAL_3 */ + txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD3, + TXGBE_PHY_VCO_CAL_LD0_OTHER); + /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO */ + /* Calibration Reference 0 Register Bit[5:0] = 6'd34 VCO_REF_LD_0/1 */ 
+ txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_REF0, + 0x2222); + + /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO */ + /* Calibration Reference 1 Register Bit[5:0] = 6'd34 VCO_REF_LD_2/3 */ + txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_REF1, + 0x2222); + + /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY AFE-DFE */ + /* Enable Register Bit[7:0] = 8'd0 AFE_EN_0/3_1, DFE_EN_0/3_1 */ + txgbe_wr32_epcs(hw, TXGBE_PHY_AFE_DFE_ENABLE, + 0x0); + + + /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Rx */ + /* Equalization Control 4 Register Bit[3:0] = 4'd0 CONT_ADAPT_0/3_1 */ + txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL, + 0x00F0); + + /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Tx Rate */ + /* Control Register Bit[14:12], Bit[10:8], Bit[6:4], Bit[2:0], + * all rates to 3'b010 TX0/1/2/3_RATE + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_RATE_CTL, + 0x2222); + + /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Rx Rate */ + /* Control Register Bit[13:12], Bit[9:8], Bit[5:4], Bit[1:0], + * all rates to 2'b10 RX0/1/2/3_RATE + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_RX_RATE_CTL, + 0x2222); + + /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Tx General */ + /* Control 2 Register Bit[15:8] = 2'b01 TX0/1/2/3_WIDTH: 10bits */ + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_GEN_CTL2, + 0x5500); + /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Rx General */ + /* Control 2 Register Bit[15:8] = 2'b01 RX0/1/2/3_WIDTH: 10bits */ + txgbe_wr32_epcs(hw, TXGBE_PHY_RX_GEN_CTL2, + 0x5500); + + /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY MPLLA Control + * 2 Register Bit[10:8] = 3'b010 + * MPLLA_DIV16P5_CLK_EN=0, MPLLA_DIV10_CLK_EN=1, MPLLA_DIV8_CLK_EN=0 + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL2, + TXGBE_PHY_MPLLA_CTL2_DIV_CLK_EN_10); + + txgbe_wr32_epcs(hw, 0x1f0000, 0x0); + txgbe_wr32_epcs(hw, 0x1f8001, 0x0); + txgbe_wr32_epcs(hw, TXGBE_SR_MII_MMD_DIGI_CTL, 0x0); + + if (KX4_TXRX_PIN == 1) + txgbe_wr32_epcs(hw, 0x38001, 0xff); + 
/* 10. Initialize the mode by setting VR XS or PCS MMD Digital Control1 + * Register Bit[15](VR_RST) + */ + txgbe_wr32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1, 0xA000); + /* wait phy initialization done */ + for (i = 0; i < TXGBE_PHY_INIT_DONE_POLLING_TIME; i++) { + if ((txgbe_rd32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1) & + TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1_VR_RST) == 0) + break; + msleep(100); + } + + /* if success, set link status */ + hw->link_status = TXGBE_LINK_STATUS_KX4; + + if (i == TXGBE_PHY_INIT_DONE_POLLING_TIME) { + status = TXGBE_ERR_PHY_INIT_NOT_DONE; + goto out; + } + +out: + return status; +} + + +s32 txgbe_set_link_to_kx(struct txgbe_hw *hw, + u32 speed, + bool autoneg) +{ + u32 i; + s32 status = 0; + u32 wdata = 0; + u32 value; + struct txgbe_adapter *adapter = hw->back; + + /* check link status, if already set, skip setting it again */ + if (hw->link_status == TXGBE_LINK_STATUS_KX) { + goto out; + } + e_dev_info("It is set to kx. speed =0x%x\n", speed); + + txgbe_wr32_epcs(hw, 0x18035, 0x00FC); + txgbe_wr32_epcs(hw, 0x18055, 0x00FC); + + /* 1. Wait xpcs power-up good */ + for (i = 0; i < TXGBE_XPCS_POWER_GOOD_MAX_POLLING_TIME; i++) { + if ((txgbe_rd32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS) & + TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_MASK) == + TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_POWER_GOOD) + break; + msleep(10); + } + if (i == TXGBE_XPCS_POWER_GOOD_MAX_POLLING_TIME) { + status = TXGBE_ERR_XPCS_POWER_UP_FAILED; + goto out; + } + + wr32m(hw, TXGBE_MAC_TX_CFG, TXGBE_MAC_TX_CFG_TE, + ~TXGBE_MAC_TX_CFG_TE); + + /* 2. 
Disable xpcs AN-73 */ + if (!autoneg) + txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x0); + else + txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x3000); + + if (hw->revision_id == TXGBE_SP_MPW) { + /* Disable PHY MPLLA */ + txgbe_wr32_ephy(hw, 0x4, 0x2401); + /* Reset rx lane0 clock */ + txgbe_wr32_ephy(hw, 0x1005, 0x4001); + } else { + /* Disable PHY MPLLA for eth mode change(after ECO) */ + txgbe_wr32_ephy(hw, 0x4, 0x240A); + TXGBE_WRITE_FLUSH(hw); + msleep(1); + + /* Set the eth change_mode bit first in mis_rst register */ + /* for corresponding LAN port */ + if (hw->bus.lan_id == 0) + wr32(hw, TXGBE_MIS_RST, + TXGBE_MIS_RST_LAN0_CHG_ETH_MODE); + else + wr32(hw, TXGBE_MIS_RST, + TXGBE_MIS_RST_LAN1_CHG_ETH_MODE); + } + + /* Set SR PCS Control2 Register Bits[1:0] = 2'b01 PCS_TYPE_SEL: non KR */ + txgbe_wr32_epcs(hw, TXGBE_SR_PCS_CTL2, + TXGBE_SR_PCS_CTL2_PCS_TYPE_SEL_X); + + /* Set SR PMA MMD Control1 Register Bit[13] = 1'b0 SS13: 1G speed */ + txgbe_wr32_epcs(hw, TXGBE_SR_PMA_MMD_CTL1, + TXGBE_SR_PMA_MMD_CTL1_SPEED_SEL_1G); + + /* Set SR MII MMD Control Register to corresponding speed: {Bit[6], + * Bit[13]}=[2'b00,2'b01,2'b10]->[10M,100M,1G] + */ + if (speed == TXGBE_LINK_SPEED_100_FULL) + wdata = 0x2100; + else if (speed == TXGBE_LINK_SPEED_1GB_FULL) + wdata = 0x0140; + else if (speed == TXGBE_LINK_SPEED_10_FULL) + wdata = 0x0100; + txgbe_wr32_epcs(hw, TXGBE_SR_MII_MMD_CTL, + wdata); + + value = (0xf5f0 & ~0x710) | (0x5 << 8)| 0x10; + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_GENCTRL1, value); + + if (KX_SGMII == 1) + txgbe_wr32_epcs(hw, TXGBE_PHY_MISC_CTL0, 0x4F00); + else + txgbe_wr32_epcs(hw, TXGBE_PHY_MISC_CTL0, 0xCF00); + + if (KX_SET == 1 || adapter->ffe_set == TXGBE_BP_M_KX) { + e_dev_info("Set KX TX_EQ MAIN:%d PRE:%d POST:%d\n", + adapter->ffe_main, adapter->ffe_pre, adapter->ffe_post); + /* 5. 
Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL0 Register Bit[13:8](TX_EQ_MAIN) + * = 6'd30, Bit[5:0](TX_EQ_PRE) = 6'd4 + */ + value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0); + value = (value & ~0x3F3F) | (adapter->ffe_main << 8) | adapter->ffe_pre; + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value); + /* 6. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL1 Register Bit[6](TX_EQ_OVR_RIDE) + * = 1'b1, Bit[5:0](TX_EQ_POST) = 6'd36 + */ + value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1); + value = (value & ~0x7F) | adapter->ffe_post | (1 << 6); + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value); + } else { + value = (0x1804 & ~0x3F3F) | (24 << 8) | 4; + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value); + + value = (0x50 & ~0x7F) | 16 | (1 << 6); + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value); + } + + for (i = 0; i < 4; i++) { + if (i) { + value = 0xff06; + } else { + value = (0x45 & ~0xFFFF) | (0x7 << 12) | (0x7 << 8) | 0x6; + } + txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL0 + i, value); + } + + value = 0x0 & ~0x7; + txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_ATT_LVL0, value); + + txgbe_wr32_epcs(hw, TXGBE_PHY_DFE_TAP_CTL0, 0x0); + + value = (0x6db & ~0x7) | 0x4; + txgbe_wr32_epcs(hw, TXGBE_PHY_RX_GEN_CTL3, value); + + /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY MPLLA Control + * 0 Register Bit[7:0] = 8'd32 MPLLA_MULTIPLIER + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL0, + TXGBE_PHY_MPLLA_CTL0_MULTIPLIER_1GBASEX_KX); + + /* Set VR XS, PMA or MII Synopsys Enterprise Gen5 12G PHY MPLLA Control 3 + * Register Bit[10:0] = 11'd70 MPLLA_BANDWIDTH + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL3, + TXGBE_PHY_MPLLA_CTL3_MULTIPLIER_BW_1GBASEX_KX); + + /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO + * Calibration Load 0 Register Bit[12:0] = 13'd1344 VCO_LD_VAL_0 + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD0, + TXGBE_PHY_VCO_CAL_LD0_1GBASEX_KX); + + txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD1, 0x549); + txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD2, 0x549); + txgbe_wr32_epcs(hw, 
TXGBE_PHY_VCO_CAL_LD3, 0x549); + + /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO + * Calibration Reference 0 Register Bit[5:0] = 6'd42 VCO_REF_LD_0 + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_REF0, + TXGBE_PHY_VCO_CAL_REF0_LD0_1GBASEX_KX); + + txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_REF1, 0x2929); + + /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY AFE-DFE Enable + * Register Bit[4], Bit[0] = 1'b0 AFE_EN_0, DFE_EN_0 + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_AFE_DFE_ENABLE, + 0x0); + /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Rx + * Equalization Control 4 Register Bit[0] = 1'b0 CONT_ADAPT_0 + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL, + 0x0010); + /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Tx Rate + * Control Register Bit[2:0] = 3'b011 TX0_RATE + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_RATE_CTL, + TXGBE_PHY_TX_RATE_CTL_TX0_RATE_1GBASEX_KX); + + /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Rx Rate + * Control Register Bit[2:0] = 3'b011 RX0_RATE + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_RX_RATE_CTL, + TXGBE_PHY_RX_RATE_CTL_RX0_RATE_1GBASEX_KX); + + /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Tx General + * Control 2 Register Bit[9:8] = 2'b01 TX0_WIDTH: 10bits + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_GEN_CTL2, + TXGBE_PHY_TX_GEN_CTL2_TX0_WIDTH_OTHER); + /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Rx General + * Control 2 Register Bit[9:8] = 2'b01 RX0_WIDTH: 10bits + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_RX_GEN_CTL2, + TXGBE_PHY_RX_GEN_CTL2_RX0_WIDTH_OTHER); + /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY MPLLA Control + * 2 Register Bit[10:8] = 3'b010 MPLLA_DIV16P5_CLK_EN=0, + * MPLLA_DIV10_CLK_EN=1, MPLLA_DIV8_CLK_EN=0 + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL2, + TXGBE_PHY_MPLLA_CTL2_DIV_CLK_EN_10); + /* VR MII MMD AN Control Register Bit[8] = 1'b1 MII_CTRL */ + /* Set to 8bit MII (required in 10M/100M SGMII) */ + txgbe_wr32_epcs(hw, TXGBE_SR_MII_MMD_AN_CTL, + 
0x0100); + + /* 10. Initialize the mode by setting VR XS or PCS MMD Digital Control1 + * Register Bit[15](VR_RST) + */ + txgbe_wr32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1, 0xA000); + /* wait phy initialization done */ + for (i = 0; i < TXGBE_PHY_INIT_DONE_POLLING_TIME; i++) { + if ((txgbe_rd32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1) & + TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1_VR_RST) == 0) + break; + msleep(100); + } + + /* if success, set link status */ + hw->link_status = TXGBE_LINK_STATUS_KX; + + if (i == TXGBE_PHY_INIT_DONE_POLLING_TIME) { + status = TXGBE_ERR_PHY_INIT_NOT_DONE; + goto out; + } + +out: + return status; +} + +s32 txgbe_set_link_to_sfi(struct txgbe_hw *hw, + u32 speed) +{ + u32 i; + s32 status = 0; + u32 value = 0; + struct txgbe_adapter *adapter = hw->back; + + /* Set the module link speed */ + TCALL(hw, mac.ops.set_rate_select_speed, + speed); + + e_dev_info("It is set to sfi.\n"); + /* 1. Wait xpcs power-up good */ + for (i = 0; i < TXGBE_XPCS_POWER_GOOD_MAX_POLLING_TIME; i++) { + if ((txgbe_rd32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS) & + TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_MASK) == + TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_POWER_GOOD) + break; + msleep(10); + } + if (i == TXGBE_XPCS_POWER_GOOD_MAX_POLLING_TIME) { + status = TXGBE_ERR_XPCS_POWER_UP_FAILED; + goto out; + } + + wr32m(hw, TXGBE_MAC_TX_CFG, TXGBE_MAC_TX_CFG_TE, + ~TXGBE_MAC_TX_CFG_TE); + + /* 2. Disable xpcs AN-73 */ + txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x0); + + if (hw->revision_id != TXGBE_SP_MPW) { + /* Disable PHY MPLLA for eth mode change(after ECO) */ + txgbe_wr32_ephy(hw, 0x4, 0x243A); + TXGBE_WRITE_FLUSH(hw); + msleep(1); + /* Set the eth change_mode bit first in mis_rst register + * for corresponding LAN port + */ + if (hw->bus.lan_id == 0) + wr32(hw, TXGBE_MIS_RST, + TXGBE_MIS_RST_LAN0_CHG_ETH_MODE); + else + wr32(hw, TXGBE_MIS_RST, + TXGBE_MIS_RST_LAN1_CHG_ETH_MODE); + } + if (speed == TXGBE_LINK_SPEED_10GB_FULL) { + /* @. 
Set SR PCS Control2 Register Bits[1:0] = 2'b00 PCS_TYPE_SEL: KR */ + txgbe_wr32_epcs(hw, TXGBE_SR_PCS_CTL2, 0); + value = txgbe_rd32_epcs(hw, TXGBE_SR_PMA_MMD_CTL1); + value = value | 0x2000; + txgbe_wr32_epcs(hw, TXGBE_SR_PMA_MMD_CTL1, value); + /* @. Set VR_XS_PMA_Gen5_12G_MPLLA_CTRL0 Register Bit[7:0] = 8'd33 + * MPLLA_MULTIPLIER + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL0, 0x0021); + /* 3. Set VR_XS_PMA_Gen5_12G_MPLLA_CTRL3 Register + * Bit[10:0](MPLLA_BANDWIDTH) = 11'd0 + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL3, 0); + value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_GENCTRL1); + value = (value & ~0x700) | 0x500; + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_GENCTRL1, value); + /* 4.Set VR_XS_PMA_Gen5_12G_MISC_CTRL0 Register Bit[12:8](RX_VREF_CTRL) + * = 5'hF + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_MISC_CTL0, 0xCF00); + /* @. Set VR_XS_PMA_Gen5_12G_VCO_CAL_LD0 Register Bit[12:0] = 13'd1353 + * VCO_LD_VAL_0 + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD0, 0x0549); + /* @. Set VR_XS_PMA_Gen5_12G_VCO_CAL_REF0 Register Bit[5:0] = 6'd41 + * VCO_REF_LD_0 + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_REF0, 0x0029); + /* @. Set VR_XS_PMA_Gen5_12G_TX_RATE_CTRL Register Bit[2:0] = 3'b000 + * TX0_RATE + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_RATE_CTL, 0); + /* @. Set VR_XS_PMA_Gen5_12G_RX_RATE_CTRL Register Bit[2:0] = 3'b000 + * RX0_RATE + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_RX_RATE_CTL, 0); + /* @. Set VR_XS_PMA_Gen5_12G_TX_GENCTRL2 Register Bit[9:8] = 2'b11 + * TX0_WIDTH: 20bits + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_GEN_CTL2, 0x0300); + /* @. Set VR_XS_PMA_Gen5_12G_RX_GENCTRL2 Register Bit[9:8] = 2'b11 + * RX0_WIDTH: 20bits + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_RX_GEN_CTL2, 0x0300); + /* @. 
Set VR_XS_PMA_Gen5_12G_MPLLA_CTRL2 Register Bit[10:8] = 3'b110 + * MPLLA_DIV16P5_CLK_EN=1, MPLLA_DIV10_CLK_EN=1, MPLLA_DIV8_CLK_EN=0 + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL2, 0x0600); + if (SFI_SET == 1 || adapter->ffe_set) { + e_dev_info("Set SFI TX_EQ MAIN:%d PRE:%d POST:%d\n", + adapter->ffe_main, adapter->ffe_pre, adapter->ffe_post); + /* 5. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL0 Register Bit[13:8](TX_EQ_MAIN) + * = 6'd30, Bit[5:0](TX_EQ_PRE) = 6'd4 + */ + value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0); + value = (value & ~0x3F3F) | (adapter->ffe_main << 8) | adapter->ffe_pre; + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value); + /* 6. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL1 Register Bit[6](TX_EQ_OVR_RIDE) + * = 1'b1, Bit[5:0](TX_EQ_POST) = 6'd36 + */ + value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1); + value = (value & ~0x7F) | adapter->ffe_post | (1 << 6); + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value); + } else { + /* 5. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL0 Register Bit[13:8](TX_EQ_MAIN) + * = 6'd30, Bit[5:0](TX_EQ_PRE) = 6'd4 + */ + value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0); + value = (value & ~0x3F3F) | (24 << 8) | 4; + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value); + /* 6. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL1 Register Bit[6](TX_EQ_OVR_RIDE) + * = 1'b1, Bit[5:0](TX_EQ_POST) = 6'd36 + */ + value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1); + value = (value & ~0x7F) | 16 | (1 << 6); + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value); + } + if (hw->phy.sfp_type == txgbe_sfp_type_da_cu_core0 || + hw->phy.sfp_type == txgbe_sfp_type_da_cu_core1) { + /* 7. Set VR_XS_PMA_Gen5_12G_RX_EQ_CTRL0 Register + * Bit[15:8](VGA1/2_GAIN_0) = 8'h77, Bit[7:5] + * (CTLE_POLE_0) = 3'h2, Bit[4:0](CTLE_BOOST_0) = 4'hF + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL0, 0x774F); + + } else { + /* 7. 
Set VR_XS_PMA_Gen5_12G_RX_EQ_CTRL0 Register Bit[15:8] + * (VGA1/2_GAIN_0) = 8'h00, Bit[7:5](CTLE_POLE_0) = 3'h2, + * Bit[4:0](CTLE_BOOST_0) = 4'hA + */ + value = txgbe_rd32_epcs(hw, TXGBE_PHY_RX_EQ_CTL0); + value = (value & ~0xFFFF) | (2 << 5) | 0x05; + txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL0, value); + } + value = txgbe_rd32_epcs(hw, TXGBE_PHY_RX_EQ_ATT_LVL0); + value = (value & ~0x7) | 0x0; + txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_ATT_LVL0, value); + + if (hw->phy.sfp_type == txgbe_sfp_type_da_cu_core0 || + hw->phy.sfp_type == txgbe_sfp_type_da_cu_core1) { + /* 8. Set VR_XS_PMA_Gen5_12G_DFE_TAP_CTRL0 Register Bit[7:0](DFE_TAP1_0) + * = 8'd20 + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_DFE_TAP_CTL0, 0x0014); + value = txgbe_rd32_epcs(hw, TXGBE_PHY_AFE_DFE_ENABLE); + value = (value & ~0x11) | 0x11; + txgbe_wr32_epcs(hw, TXGBE_PHY_AFE_DFE_ENABLE, value); + } else { + /* 8. Set VR_XS_PMA_Gen5_12G_DFE_TAP_CTRL0 Register Bit[7:0](DFE_TAP1_0) + * = 8'd20 + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_DFE_TAP_CTL0, 0xBE); + /* 9. Set VR_MII_Gen5_12G_AFE_DFE_EN_CTRL Register Bit[4](DFE_EN_0) = + * 1'b0, Bit[0](AFE_EN_0) = 1'b0 + */ + value = txgbe_rd32_epcs(hw, TXGBE_PHY_AFE_DFE_ENABLE); + value = (value & ~0x11) | 0x0; + txgbe_wr32_epcs(hw, TXGBE_PHY_AFE_DFE_ENABLE, value); + } + value = txgbe_rd32_epcs(hw, TXGBE_PHY_RX_EQ_CTL); + value = value & ~0x1; + txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL, value); + } else { + if (hw->revision_id == TXGBE_SP_MPW) { + /* Disable PHY MPLLA */ + txgbe_wr32_ephy(hw, 0x4, 0x2401); + /* Reset rx lane0 clock */ + txgbe_wr32_ephy(hw, 0x1005, 0x4001); + } + /* @. 
Set SR PCS Control2 Register Bits[1:0] = 2'b01 PCS_TYPE_SEL: non-KR */ + txgbe_wr32_epcs(hw, TXGBE_SR_PCS_CTL2, 0x1); + /* Set SR PMA MMD Control1 Register Bit[13] = 1'b0 SS13: 1G speed */ + txgbe_wr32_epcs(hw, TXGBE_SR_PMA_MMD_CTL1, 0x0000); + /* Set SR MII MMD Control Register to corresponding speed: */ + txgbe_wr32_epcs(hw, TXGBE_SR_MII_MMD_CTL, 0x0140); + + value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_GENCTRL1); + value = (value & ~0x710) | 0x500; + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_GENCTRL1, value); + /* 4. Set VR_XS_PMA_Gen5_12G_MISC_CTRL0 Register Bit[12:8](RX_VREF_CTRL) + * = 5'hF + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_MISC_CTL0, 0xCF00); + /* 5. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL0 Register Bit[13:8](TX_EQ_MAIN) + * = 6'd24, Bit[5:0](TX_EQ_PRE) = 6'd4 + */ + value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0); + value = (value & ~0x3F3F) | (24 << 8) | 4; + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value); + /* 6. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL1 Register Bit[6](TX_EQ_OVR_RIDE) + * = 1'b1, Bit[5:0](TX_EQ_POST) = 6'd16 + */ + value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1); + value = (value & ~0x7F) | 16 | (1 << 6); + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value); + if (hw->phy.sfp_type == txgbe_sfp_type_da_cu_core0 || + hw->phy.sfp_type == txgbe_sfp_type_da_cu_core1) { + txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL0, 0x774F); + } else { + /* 7. Set VR_XS_PMA_Gen5_12G_RX_EQ_CTRL0 Register Bit[15:8] + * (VGA1/2_GAIN_0) = 8'h77, Bit[7:5](CTLE_POLE_0) = 3'h0, + * Bit[4:0](CTLE_BOOST_0) = 5'h06 + */ + value = txgbe_rd32_epcs(hw, TXGBE_PHY_RX_EQ_CTL0); + value = (value & ~0xFFFF) | 0x7706; + txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL0, value); + } + value = txgbe_rd32_epcs(hw, TXGBE_PHY_RX_EQ_ATT_LVL0); + value = (value & ~0x7) | 0x0; + txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_ATT_LVL0, value); + /* 8.
Set VR_XS_PMA_Gen5_12G_DFE_TAP_CTRL0 Register Bit[7:0](DFE_TAP1_0) + * = 8'd00 + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_DFE_TAP_CTL0, 0x0); + /* Set VR_XS_PMA_Gen5_12G_RX_GENCTRL3 Register Bit[2:0] LOS_TRSHLD_0 = 4 */ + value = txgbe_rd32_epcs(hw, TXGBE_PHY_RX_GEN_CTL3); + value = (value & ~0x7) | 0x4; + txgbe_wr32_epcs(hw, TXGBE_PHY_RX_GEN_CTL3, value); + /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY + * MPLLA Control 0 Register Bit[7:0] = 8'd32 MPLLA_MULTIPLIER + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL0, 0x0020); + /* Set VR XS, PMA or MII Synopsys Enterprise Gen5 12G PHY MPLLA Control + * 3 Register Bit[10:0] = 11'd70 MPLLA_BANDWIDTH + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL3, 0x0046); + /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO + * Calibration Load 0 Register Bit[12:0] = 13'd1344 VCO_LD_VAL_0 + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD0, 0x0540); + /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO + * Calibration Reference 0 Register Bit[5:0] = 6'd42 VCO_REF_LD_0 + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_REF0, 0x002A); + /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY AFE-DFE + * Enable Register Bit[4], Bit[0] = 1'b0 AFE_EN_0, DFE_EN_0 + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_AFE_DFE_ENABLE, 0x0); + /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Rx + * Equalization Control 4 Register Bit[0] = 1'b0 CONT_ADAPT_0 + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL, 0x0010); + /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Tx Rate + * Control Register Bit[2:0] = 3'b011 TX0_RATE + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_RATE_CTL, 0x0003); + /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Rx Rate + * Control Register Bit[2:0] = 3'b011 + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_RX_RATE_CTL, 0x0003); + /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Tx General + * Control 2 Register Bit[9:8] = 2'b01 TX0_WIDTH: 10bits + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_GEN_CTL2, 
0x0100); + /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Rx General + * Control 2 Register Bit[9:8] = 2'b01 RX0_WIDTH: 10bits + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_RX_GEN_CTL2, 0x0100); + /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY MPLLA + * Control 2 Register Bit[10:8] = 3'b010 MPLLA_DIV16P5_CLK_EN=0, + * MPLLA_DIV10_CLK_EN=1, MPLLA_DIV8_CLK_EN=0 + */ + txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL2, 0x0200); + /* VR MII MMD AN Control Register Bit[8] = 1'b1 MII_CTRL */ + txgbe_wr32_epcs(hw, TXGBE_SR_MII_MMD_AN_CTL, 0x0100); + } + /* 10. Initialize the mode by setting VR XS or PCS MMD Digital Control1 + * Register Bit[15](VR_RST) + */ + txgbe_wr32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1, 0xA000); + /* wait phy initialization done */ + for (i = 0; i < TXGBE_PHY_INIT_DONE_POLLING_TIME; i++) { + if ((txgbe_rd32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1) & + TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1_VR_RST) == 0) + break; + msleep(100); + } + if (i == TXGBE_PHY_INIT_DONE_POLLING_TIME) { + status = TXGBE_ERR_PHY_INIT_NOT_DONE; + goto out; + } + +out: + return status; +} + + +/** + * txgbe_setup_mac_link - Set MAC link speed + * @hw: pointer to hardware structure + * @speed: new link speed + * @autoneg_wait_to_complete: true when waiting for completion is needed + * + * Set the link speed in the AUTOC register and restarts link. + **/ +s32 txgbe_setup_mac_link(struct txgbe_hw *hw, + u32 speed, + bool autoneg_wait_to_complete) +{ + bool autoneg = false; + s32 status = 0; + u32 link_capabilities = TXGBE_LINK_SPEED_UNKNOWN; + struct txgbe_adapter *adapter = hw->back; + u32 link_speed = TXGBE_LINK_SPEED_UNKNOWN; + bool link_up = false; + + UNREFERENCED_PARAMETER(autoneg_wait_to_complete); + DEBUGFUNC("\n"); + + /* Check to see if speed passed in is supported. 
*/ + status = TCALL(hw, mac.ops.get_link_capabilities, + &link_capabilities, &autoneg); + if (status) + goto out; + + speed &= link_capabilities; + + if (speed == TXGBE_LINK_SPEED_UNKNOWN) { + status = TXGBE_ERR_LINK_SETUP; + goto out; + } + + if (!(((hw->subsystem_device_id & 0xF0) == TXGBE_ID_KR_KX_KX4) || + ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_MAC_XAUI) || + ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_MAC_SGMII))) { + status = TCALL(hw, mac.ops.check_link, + &link_speed, &link_up, false); + if (status != 0) + goto out; + if ((link_speed == speed) && link_up) + goto out; + } + + if ((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP) + goto out; + + if ((hw->subsystem_id & 0xF0) == TXGBE_ID_KR_KX_KX4) { + if (!autoneg) { + switch (hw->phy.link_mode) { + case TXGBE_PHYSICAL_LAYER_10GBASE_KR: + txgbe_set_link_to_kr(hw, autoneg); + break; + case TXGBE_PHYSICAL_LAYER_10GBASE_KX4: + txgbe_set_link_to_kx4(hw, autoneg); + break; + case TXGBE_PHYSICAL_LAYER_1000BASE_KX: + txgbe_set_link_to_kx(hw, speed, autoneg); + break; + default: + status = TXGBE_ERR_PHY; + goto out; + } + } else { + txgbe_set_link_to_kr(hw, autoneg); + } + } else if ((hw->subsystem_id & 0xF0) == TXGBE_ID_XAUI || + ((hw->subsystem_id & 0xF0) == TXGBE_ID_MAC_XAUI) || + (hw->subsystem_id & 0xF0) == TXGBE_ID_SGMII || + ((hw->subsystem_id & 0xF0) == TXGBE_ID_MAC_SGMII) || + (txgbe_get_media_type(hw) == txgbe_media_type_copper && + (hw->subsystem_id & 0xF0) == TXGBE_ID_SFI_XAUI)) { + if (speed == TXGBE_LINK_SPEED_10GB_FULL) { + txgbe_set_link_to_kx4(hw, autoneg); + } else { + txgbe_set_link_to_kx(hw, speed, 0); + if (adapter->an37 || + (hw->subsystem_id & 0xF0) == TXGBE_ID_SGMII || + (hw->subsystem_id & 0xF0) == TXGBE_ID_XAUI) + txgbe_set_sgmii_an37_ability(hw); + } + } else if (txgbe_get_media_type(hw) == txgbe_media_type_fiber) { + txgbe_set_link_to_sfi(hw, speed); + } + +out: + return status; +} + +/** + * txgbe_setup_copper_link - Set the PHY autoneg advertised field + * @hw: 
pointer to hardware structure + * @speed: new link speed + * @autoneg_wait_to_complete: true if waiting is needed to complete + * + * Restarts link on PHY and MAC based on settings passed in. + **/ +STATIC s32 txgbe_setup_copper_link(struct txgbe_hw *hw, + u32 speed, + bool autoneg_wait_to_complete) +{ + s32 status; + u32 link_speed; + + DEBUGFUNC("\n"); + + /* Setup the PHY according to input speed */ + link_speed = TCALL(hw, phy.ops.setup_link_speed, speed, + autoneg_wait_to_complete); + + if (link_speed != TXGBE_LINK_SPEED_UNKNOWN) + /* Set up MAC */ + status = txgbe_setup_mac_link(hw, link_speed, autoneg_wait_to_complete); + else { + status = 0; + } + return status; +} + +int txgbe_reset_misc(struct txgbe_hw *hw) +{ + int i; + u32 value; + + txgbe_init_i2c(hw); + + value = txgbe_rd32_epcs(hw, TXGBE_SR_PCS_CTL2); + if ((value & 0x3) != TXGBE_SR_PCS_CTL2_PCS_TYPE_SEL_X) { + hw->link_status = TXGBE_LINK_STATUS_NONE; + } + + /* receive packets that size > 2048 */ + wr32m(hw, TXGBE_MAC_RX_CFG, + TXGBE_MAC_RX_CFG_JE, TXGBE_MAC_RX_CFG_JE); + + /* clear counters on read */ + wr32m(hw, TXGBE_MMC_CONTROL, + TXGBE_MMC_CONTROL_RSTONRD, TXGBE_MMC_CONTROL_RSTONRD); + + wr32m(hw, TXGBE_MAC_RX_FLOW_CTRL, + TXGBE_MAC_RX_FLOW_CTRL_RFE, TXGBE_MAC_RX_FLOW_CTRL_RFE); + + wr32(hw, TXGBE_MAC_PKT_FLT, + TXGBE_MAC_PKT_FLT_PR); + + wr32m(hw, TXGBE_MIS_RST_ST, + TXGBE_MIS_RST_ST_RST_INIT, 0x1E00); + + /* errata 4: initialize mng flex tbl and wakeup flex tbl*/ + wr32(hw, TXGBE_PSR_MNG_FLEX_SEL, 0); + for (i = 0; i < 16; i++) { + wr32(hw, TXGBE_PSR_MNG_FLEX_DW_L(i), 0); + wr32(hw, TXGBE_PSR_MNG_FLEX_DW_H(i), 0); + wr32(hw, TXGBE_PSR_MNG_FLEX_MSK(i), 0); + } + wr32(hw, TXGBE_PSR_LAN_FLEX_SEL, 0); + for (i = 0; i < 16; i++) { + wr32(hw, TXGBE_PSR_LAN_FLEX_DW_L(i), 0); + wr32(hw, TXGBE_PSR_LAN_FLEX_DW_H(i), 0); + wr32(hw, TXGBE_PSR_LAN_FLEX_MSK(i), 0); + } + + /* set pause frame dst mac addr */ + wr32(hw, TXGBE_RDB_PFCMACDAL, 0xC2000001); + wr32(hw, TXGBE_RDB_PFCMACDAH, 0x0180); + + 
txgbe_init_thermal_sensor_thresh(hw);
+
+	return 0;
+}
+
+/**
+ *  txgbe_reset_hw - Perform hardware reset
+ *  @hw: pointer to hardware structure
+ *
+ *  Resets the hardware by resetting the transmit and receive units, masks
+ *  and clears all interrupts, performs a PHY reset, and performs a link
+ *  (MAC) reset.
+ **/
+s32 txgbe_reset_hw(struct txgbe_hw *hw)
+{
+	s32 status;
+	u32 reset = 0;
+	u32 i;
+
+	u32 sr_pcs_ctl, sr_pma_mmd_ctl1, sr_an_mmd_ctl, sr_an_mmd_adv_reg2;
+	u32 vr_xs_or_pcs_mmd_digi_ctl1, curr_vr_xs_or_pcs_mmd_digi_ctl1;
+	u32 curr_sr_pcs_ctl, curr_sr_pma_mmd_ctl1;
+	u32 curr_sr_an_mmd_ctl, curr_sr_an_mmd_adv_reg2;
+
+	u32 reset_status = 0;
+	u32 rst_delay = 0;
+	struct txgbe_adapter *adapter = hw->back;
+	u32 value;
+
+	DEBUGFUNC("\n");
+
+	/* Call adapter stop to disable tx/rx and clear interrupts */
+	status = TCALL(hw, mac.ops.stop_adapter);
+	if (status != 0)
+		goto reset_hw_out;
+
+	/* Identify PHY and related function pointers */
+	status = TCALL(hw, phy.ops.init);
+
+	if (status == TXGBE_ERR_SFP_NOT_SUPPORTED)
+		goto reset_hw_out;
+
+	/* Reset PHY */
+	if (txgbe_get_media_type(hw) == txgbe_media_type_copper)
+		TCALL(hw, phy.ops.reset);
+
+	/* remember internal phy regs from before we reset */
+	curr_sr_pcs_ctl = txgbe_rd32_epcs(hw, TXGBE_SR_PCS_CTL2);
+	curr_sr_pma_mmd_ctl1 = txgbe_rd32_epcs(hw, TXGBE_SR_PMA_MMD_CTL1);
+	curr_sr_an_mmd_ctl = txgbe_rd32_epcs(hw, TXGBE_SR_AN_MMD_CTL);
+	curr_sr_an_mmd_adv_reg2 = txgbe_rd32_epcs(hw,
+						  TXGBE_SR_AN_MMD_ADV_REG2);
+	curr_vr_xs_or_pcs_mmd_digi_ctl1 =
+		txgbe_rd32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1);
+
+	/*
+	 * Issue global reset to the MAC.  Needs to be SW reset if link is up.
+	 * If link reset is used when link is up, it might reset the PHY when
+	 * mng is using it.  If link is down or the flag to force full link
+	 * reset is set, then perform link reset.
+ */ + if (hw->force_full_reset) { + rst_delay = (rd32(hw, TXGBE_MIS_RST_ST) & + TXGBE_MIS_RST_ST_RST_INIT) >> + TXGBE_MIS_RST_ST_RST_INI_SHIFT; + if (hw->reset_type == TXGBE_SW_RESET) { + for (i = 0; i < rst_delay + 20; i++) { + reset_status = + rd32(hw, TXGBE_MIS_RST_ST); + if (!(reset_status & + TXGBE_MIS_RST_ST_DEV_RST_ST_MASK)) + break; + msleep(100); + } + + if (reset_status & TXGBE_MIS_RST_ST_DEV_RST_ST_MASK) { + status = TXGBE_ERR_RESET_FAILED; + DEBUGOUT("Global reset polling failed to " + "complete.\n"); + goto reset_hw_out; + } + status = txgbe_check_flash_load(hw, + TXGBE_SPI_ILDR_STATUS_SW_RESET); + if (status != 0) + goto reset_hw_out; + /* errata 7 */ + if (txgbe_mng_present(hw) && + hw->revision_id == TXGBE_SP_MPW) { + struct txgbe_adapter *adapter = + (struct txgbe_adapter *)hw->back; + adapter->flags2 &= + ~TXGBE_FLAG2_MNG_REG_ACCESS_DISABLED; + } + } else if (hw->reset_type == TXGBE_GLOBAL_RESET) { +#ifndef _WIN32 + struct txgbe_adapter *adapter = + (struct txgbe_adapter *)hw->back; + msleep(100 * rst_delay + 2000); + pci_restore_state(adapter->pdev); + pci_save_state(adapter->pdev); + pci_wake_from_d3(adapter->pdev, false); +#endif /*_WIN32*/ + } + } else { + if (txgbe_mng_present(hw)) { + if (!(((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP) || + ((hw->subsystem_device_id & TXGBE_WOL_MASK) == TXGBE_WOL_SUP))) { + txgbe_reset_hostif(hw); + } + } else { + + if (hw->bus.lan_id == 0) { + reset = TXGBE_MIS_RST_LAN0_RST; + } else { + reset = TXGBE_MIS_RST_LAN1_RST; + } + + wr32(hw, TXGBE_MIS_RST, + reset | rd32(hw, TXGBE_MIS_RST)); + TXGBE_WRITE_FLUSH(hw); + } + usec_delay(10); + + if (hw->bus.lan_id == 0) { + status = txgbe_check_flash_load(hw, + TXGBE_SPI_ILDR_STATUS_LAN0_SW_RST); + } else { + status = txgbe_check_flash_load(hw, + TXGBE_SPI_ILDR_STATUS_LAN1_SW_RST); + } + if (status != 0) + goto reset_hw_out; + } + + status = txgbe_reset_misc(hw); + if (status != 0) + goto reset_hw_out; + + /* + * Store the original AUTOC/AUTOC2 
values if they have not been
+	 * stored off yet.  Otherwise restore the stored original
+	 * values since the reset operation sets back to defaults.
+	 */
+	sr_pcs_ctl = txgbe_rd32_epcs(hw, TXGBE_SR_PCS_CTL2);
+	sr_pma_mmd_ctl1 = txgbe_rd32_epcs(hw, TXGBE_SR_PMA_MMD_CTL1);
+	sr_an_mmd_ctl = txgbe_rd32_epcs(hw, TXGBE_SR_AN_MMD_CTL);
+	sr_an_mmd_adv_reg2 = txgbe_rd32_epcs(hw, TXGBE_SR_AN_MMD_ADV_REG2);
+	vr_xs_or_pcs_mmd_digi_ctl1 =
+		txgbe_rd32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1);
+
+	if (hw->mac.orig_link_settings_stored == false) {
+		hw->mac.orig_sr_pcs_ctl2 = sr_pcs_ctl;
+		hw->mac.orig_sr_pma_mmd_ctl1 = sr_pma_mmd_ctl1;
+		hw->mac.orig_sr_an_mmd_ctl = sr_an_mmd_ctl;
+		hw->mac.orig_sr_an_mmd_adv_reg2 = sr_an_mmd_adv_reg2;
+		hw->mac.orig_vr_xs_or_pcs_mmd_digi_ctl1 =
+					vr_xs_or_pcs_mmd_digi_ctl1;
+		hw->mac.orig_link_settings_stored = true;
+	} else {
+
+		/* If MNG FW is running on a multi-speed device that
+		 * doesn't autoneg without driver support we need to
+		 * leave LMS in the state it was before the MAC reset.
+		 * Likewise if we support WoL we don't want to change
+		 * the LMS state.
+		 */
+
+		hw->mac.orig_sr_pcs_ctl2 = curr_sr_pcs_ctl;
+		hw->mac.orig_sr_pma_mmd_ctl1 = curr_sr_pma_mmd_ctl1;
+		hw->mac.orig_sr_an_mmd_ctl = curr_sr_an_mmd_ctl;
+		hw->mac.orig_sr_an_mmd_adv_reg2 =
+				curr_sr_an_mmd_adv_reg2;
+		hw->mac.orig_vr_xs_or_pcs_mmd_digi_ctl1 =
+				curr_vr_xs_or_pcs_mmd_digi_ctl1;
+
+	}
+
+	/* A temporary solution for set to sfi */
+	if (SFI_SET == 1 || adapter->ffe_set == TXGBE_BP_M_SFI) {
+		e_dev_info("Set SFI TX_EQ MAIN:%d PRE:%d POST:%d\n",
+			   adapter->ffe_main, adapter->ffe_pre, adapter->ffe_post);
+		/* 5. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL0 Register Bit[13:8](TX_EQ_MAIN)
+		 * = 6'd30, Bit[5:0](TX_EQ_PRE) = 6'd4
+		 */
+		value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0);
+		value = (value & ~0x3F3F) | (adapter->ffe_main << 8) | adapter->ffe_pre;
+		txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value);
+		/* 6.
Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL1 Register Bit[6](TX_EQ_OVR_RIDE) + * = 1'b1, Bit[5:0](TX_EQ_POST) = 6'd36 + */ + value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1); + value = (value & ~0x7F) | adapter->ffe_post | (1 << 6); + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value); + } + + if (KR_SET == 1 || adapter->ffe_set == TXGBE_BP_M_KR) { + e_dev_info("Set KR TX_EQ MAIN:%d PRE:%d POST:%d\n", + adapter->ffe_main, adapter->ffe_pre, adapter->ffe_post); + value = (0x1804 & ~0x3F3F); + value |= adapter->ffe_main << 8 | adapter->ffe_pre; + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value); + + value = (0x50 & ~0x7F) | (1 << 6)| adapter->ffe_post; + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value); + txgbe_wr32_epcs(hw, 0x18035, 0x00FF); + txgbe_wr32_epcs(hw, 0x18055, 0x00FF); + } + + if (KX_SET == 1 || adapter->ffe_set == TXGBE_BP_M_KX) { + e_dev_info("Set KX TX_EQ MAIN:%d PRE:%d POST:%d\n", + adapter->ffe_main, adapter->ffe_pre, adapter->ffe_post); + /* 5. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL0 Register Bit[13:8](TX_EQ_MAIN) + * = 6'd30, Bit[5:0](TX_EQ_PRE) = 6'd4 + */ + value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0); + value = (value & ~0x3F3F) | (adapter->ffe_main << 8) | adapter->ffe_pre; + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value); + /* 6. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL1 Register Bit[6](TX_EQ_OVR_RIDE) + * = 1'b1, Bit[5:0](TX_EQ_POST) = 6'd36 + */ + value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1); + value = (value & ~0x7F) | adapter->ffe_post | (1 << 6); + txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value); + + txgbe_wr32_epcs(hw, 0x18035, 0x00FF); + txgbe_wr32_epcs(hw, 0x18055, 0x00FF); + } + + /* Store the permanent mac address */ + TCALL(hw, mac.ops.get_mac_addr, hw->mac.perm_addr); + + /* + * Store MAC address from RAR0, clear receive address registers, and + * clear the multicast table. Also reset num_rar_entries to 128, + * since we modify this value when programming the SAN MAC address. 
+ */ + hw->mac.num_rar_entries = 128; + TCALL(hw, mac.ops.init_rx_addrs); + + /* Store the permanent SAN mac address */ + TCALL(hw, mac.ops.get_san_mac_addr, hw->mac.san_addr); + + /* Add the SAN MAC address to the RAR only if it's a valid address */ + if (txgbe_validate_mac_addr(hw->mac.san_addr) == 0) { + TCALL(hw, mac.ops.set_rar, hw->mac.num_rar_entries - 1, + hw->mac.san_addr, 0, TXGBE_PSR_MAC_SWC_AD_H_AV); + + /* Save the SAN MAC RAR index */ + hw->mac.san_mac_rar_index = hw->mac.num_rar_entries - 1; + + /* Reserve the last RAR for the SAN MAC address */ + hw->mac.num_rar_entries--; + } + + /* Store the alternative WWNN/WWPN prefix */ + TCALL(hw, mac.ops.get_wwn_prefix, &hw->mac.wwnn_prefix, + &hw->mac.wwpn_prefix); + + pci_set_master(((struct txgbe_adapter *)hw->back)->pdev); + +reset_hw_out: + return status; +} + +/** + * txgbe_fdir_check_cmd_complete - poll to check whether FDIRCMD is complete + * @hw: pointer to hardware structure + * @fdircmd: current value of FDIRCMD register + */ +STATIC s32 txgbe_fdir_check_cmd_complete(struct txgbe_hw *hw, u32 *fdircmd) +{ + int i; + + for (i = 0; i < TXGBE_RDB_FDIR_CMD_CMD_POLL; i++) { + *fdircmd = rd32(hw, TXGBE_RDB_FDIR_CMD); + if (!(*fdircmd & TXGBE_RDB_FDIR_CMD_CMD_MASK)) + return 0; + usec_delay(10); + } + + return TXGBE_ERR_FDIR_CMD_INCOMPLETE; +} + +/** + * txgbe_reinit_fdir_tables - Reinitialize Flow Director tables. + * @hw: pointer to hardware structure + **/ +s32 txgbe_reinit_fdir_tables(struct txgbe_hw *hw) +{ + s32 err; + int i; + u32 fdirctrl = rd32(hw, TXGBE_RDB_FDIR_CTL); + u32 fdircmd; + fdirctrl &= ~TXGBE_RDB_FDIR_CTL_INIT_DONE; + + DEBUGFUNC("\n"); + + /* + * Before starting reinitialization process, + * FDIRCMD.CMD must be zero. 
+ */ + err = txgbe_fdir_check_cmd_complete(hw, &fdircmd); + if (err) { + DEBUGOUT("Flow Director previous command did not complete, " + "aborting table re-initialization.\n"); + return err; + } + + wr32(hw, TXGBE_RDB_FDIR_FREE, 0); + TXGBE_WRITE_FLUSH(hw); + /* + * sapphire adapters flow director init flow cannot be restarted, + * Workaround sapphire silicon errata by performing the following steps + * before re-writing the FDIRCTRL control register with the same value. + * - write 1 to bit 8 of FDIRCMD register & + * - write 0 to bit 8 of FDIRCMD register + */ + wr32m(hw, TXGBE_RDB_FDIR_CMD, + TXGBE_RDB_FDIR_CMD_CLEARHT, TXGBE_RDB_FDIR_CMD_CLEARHT); + TXGBE_WRITE_FLUSH(hw); + wr32m(hw, TXGBE_RDB_FDIR_CMD, + TXGBE_RDB_FDIR_CMD_CLEARHT, 0); + TXGBE_WRITE_FLUSH(hw); + /* + * Clear FDIR Hash register to clear any leftover hashes + * waiting to be programmed. + */ + wr32(hw, TXGBE_RDB_FDIR_HASH, 0x00); + TXGBE_WRITE_FLUSH(hw); + + wr32(hw, TXGBE_RDB_FDIR_CTL, fdirctrl); + TXGBE_WRITE_FLUSH(hw); + + /* Poll init-done after we write FDIRCTRL register */ + for (i = 0; i < TXGBE_FDIR_INIT_DONE_POLL; i++) { + if (rd32(hw, TXGBE_RDB_FDIR_CTL) & + TXGBE_RDB_FDIR_CTL_INIT_DONE) + break; + msec_delay(1); + } + if (i >= TXGBE_FDIR_INIT_DONE_POLL) { + DEBUGOUT("Flow Director Signature poll time exceeded!\n"); + return TXGBE_ERR_FDIR_REINIT_FAILED; + } + + /* Clear FDIR statistics registers (read to clear) */ + rd32(hw, TXGBE_RDB_FDIR_USE_ST); + rd32(hw, TXGBE_RDB_FDIR_FAIL_ST); + rd32(hw, TXGBE_RDB_FDIR_MATCH); + rd32(hw, TXGBE_RDB_FDIR_MISS); + rd32(hw, TXGBE_RDB_FDIR_LEN); + + return 0; +} + +/** + * txgbe_fdir_enable - Initialize Flow Director control registers + * @hw: pointer to hardware structure + * @fdirctrl: value to write to flow director control register + **/ +STATIC void txgbe_fdir_enable(struct txgbe_hw *hw, u32 fdirctrl) +{ + int i; + + DEBUGFUNC("\n"); + + /* Prime the keys for hashing */ + wr32(hw, TXGBE_RDB_FDIR_HKEY, TXGBE_ATR_BUCKET_HASH_KEY); + wr32(hw, 
TXGBE_RDB_FDIR_SKEY, TXGBE_ATR_SIGNATURE_HASH_KEY);
+
+	/*
+	 * Poll init-done after we write the register.  Estimated times:
+	 *      10G: PBALLOC = 11b, timing is 60us
+	 *       1G: PBALLOC = 11b, timing is 600us
+	 *     100M: PBALLOC = 11b, timing is 6ms
+	 *
+	 *     Multiply these timings by 4 if under full Rx load
+	 *
+	 * So we'll poll for TXGBE_FDIR_INIT_DONE_POLL times, sleeping for
+	 * 1 msec per poll time.  If we're at line rate and drop to 100M, then
+	 * this might not finish in our poll time, but we can live with that
+	 * for now.
+	 */
+	wr32(hw, TXGBE_RDB_FDIR_CTL, fdirctrl);
+	TXGBE_WRITE_FLUSH(hw);
+	for (i = 0; i < TXGBE_RDB_FDIR_INIT_DONE_POLL; i++) {
+		if (rd32(hw, TXGBE_RDB_FDIR_CTL) &
+		    TXGBE_RDB_FDIR_CTL_INIT_DONE)
+			break;
+		msec_delay(1);
+	}
+
+	if (i >= TXGBE_RDB_FDIR_INIT_DONE_POLL)
+		DEBUGOUT("Flow Director poll time exceeded!\n");
+}
+
+/**
+ *  txgbe_init_fdir_signature - Initialize Flow Director sig filters
+ *  @hw: pointer to hardware structure
+ *  @fdirctrl: value to write to flow director control register, initially
+ *	      contains just the value of the Rx packet buffer allocation
+ **/
+s32 txgbe_init_fdir_signature(struct txgbe_hw *hw, u32 fdirctrl)
+{
+	struct txgbe_adapter *adapter = (struct txgbe_adapter *)hw->back;
+	int i = VMDQ_P(0) / 4;
+	int j = VMDQ_P(0) % 4;
+	u32 flex = rd32m(hw, TXGBE_RDB_FDIR_FLEX_CFG(i),
+			 ~((TXGBE_RDB_FDIR_FLEX_CFG_BASE_MSK |
+			    TXGBE_RDB_FDIR_FLEX_CFG_MSK |
+			    TXGBE_RDB_FDIR_FLEX_CFG_OFST) <<
+			   (TXGBE_RDB_FDIR_FLEX_CFG_VM_SHIFT * j)));
+
+	UNREFERENCED_PARAMETER(adapter);
+
+	flex |= (TXGBE_RDB_FDIR_FLEX_CFG_BASE_MAC |
+		 0x6 << TXGBE_RDB_FDIR_FLEX_CFG_OFST_SHIFT) <<
+		(TXGBE_RDB_FDIR_FLEX_CFG_VM_SHIFT * j);
+	wr32(hw, TXGBE_RDB_FDIR_FLEX_CFG(i), flex);
+
+	/*
+	 * Continue setup of fdirctrl register bits:
+	 *  Move the flexible bytes to use the ethertype - shift 6 words
+	 *  Set the maximum length per hash bucket to 0xA filters
+	 *  Send interrupt when 64 filters are left
+	 */
+	fdirctrl |= (0xF <<
TXGBE_RDB_FDIR_CTL_HASH_BITS_SHIFT) | + (0xA << TXGBE_RDB_FDIR_CTL_MAX_LENGTH_SHIFT) | + (4 << TXGBE_RDB_FDIR_CTL_FULL_THRESH_SHIFT); + + /* write hashes and fdirctrl register, poll for completion */ + txgbe_fdir_enable(hw, fdirctrl); + + if (hw->revision_id == TXGBE_SP_MPW) { + /* errata 1: disable RSC of drop ring 0 */ + wr32m(hw, TXGBE_PX_RR_CFG(0), + TXGBE_PX_RR_CFG_RSC, ~TXGBE_PX_RR_CFG_RSC); + } + return 0; +} + +/** + * txgbe_init_fdir_perfect - Initialize Flow Director perfect filters + * @hw: pointer to hardware structure + * @fdirctrl: value to write to flow director control register, initially + * contains just the value of the Rx packet buffer allocation + * @cloud_mode: true - cloud mode, false - other mode + **/ +s32 txgbe_init_fdir_perfect(struct txgbe_hw *hw, u32 fdirctrl, + bool cloud_mode) +{ + UNREFERENCED_PARAMETER(cloud_mode); + DEBUGFUNC("\n"); + + /* + * Continue setup of fdirctrl register bits: + * Turn perfect match filtering on + * Report hash in RSS field of Rx wb descriptor + * Initialize the drop queue + * Move the flexible bytes to use the ethertype - shift 6 words + * Set the maximum length per hash bucket to 0xA filters + * Send interrupt when 64 (0x4 * 16) filters are left + */ + fdirctrl |= TXGBE_RDB_FDIR_CTL_PERFECT_MATCH | + (TXGBE_RDB_FDIR_DROP_QUEUE << + TXGBE_RDB_FDIR_CTL_DROP_Q_SHIFT) | + (0xF << TXGBE_RDB_FDIR_CTL_HASH_BITS_SHIFT) | + (0xA << TXGBE_RDB_FDIR_CTL_MAX_LENGTH_SHIFT) | + (4 << TXGBE_RDB_FDIR_CTL_FULL_THRESH_SHIFT); + + /* write hashes and fdirctrl register, poll for completion */ + txgbe_fdir_enable(hw, fdirctrl); + + if (hw->revision_id == TXGBE_SP_MPW) { + if (((struct txgbe_adapter *)hw->back)->num_rx_queues > + TXGBE_RDB_FDIR_DROP_QUEUE) + /* errata 1: disable RSC of drop ring */ + wr32m(hw, + TXGBE_PX_RR_CFG(TXGBE_RDB_FDIR_DROP_QUEUE), + TXGBE_PX_RR_CFG_RSC, ~TXGBE_PX_RR_CFG_RSC); + } + return 0; +} + +/* + * These defines allow us to quickly generate all of the necessary instructions + * in the function 
below by simply calling out TXGBE_COMPUTE_SIG_HASH_ITERATION + * for values 0 through 15 + */ +#define TXGBE_ATR_COMMON_HASH_KEY \ + (TXGBE_ATR_BUCKET_HASH_KEY & TXGBE_ATR_SIGNATURE_HASH_KEY) +#define TXGBE_COMPUTE_SIG_HASH_ITERATION(_n) \ +do { \ + u32 n = (_n); \ + if (TXGBE_ATR_COMMON_HASH_KEY & (0x01 << n)) \ + common_hash ^= lo_hash_dword >> n; \ + else if (TXGBE_ATR_BUCKET_HASH_KEY & (0x01 << n)) \ + bucket_hash ^= lo_hash_dword >> n; \ + else if (TXGBE_ATR_SIGNATURE_HASH_KEY & (0x01 << n)) \ + sig_hash ^= lo_hash_dword << (16 - n); \ + if (TXGBE_ATR_COMMON_HASH_KEY & (0x01 << (n + 16))) \ + common_hash ^= hi_hash_dword >> n; \ + else if (TXGBE_ATR_BUCKET_HASH_KEY & (0x01 << (n + 16))) \ + bucket_hash ^= hi_hash_dword >> n; \ + else if (TXGBE_ATR_SIGNATURE_HASH_KEY & (0x01 << (n + 16))) \ + sig_hash ^= hi_hash_dword << (16 - n); \ +} while (0) + +/** + * txgbe_atr_compute_sig_hash - Compute the signature hash + * @stream: input bitstream to compute the hash on + * + * This function is almost identical to the function above but contains + * several optimizations such as unwinding all of the loops, letting the + * compiler work out all of the conditional ifs since the keys are static + * defines, and computing two keys at once since the hashed dword stream + * will be the same for both keys. 
+ **/
+u32 txgbe_atr_compute_sig_hash(union txgbe_atr_hash_dword input,
+			       union txgbe_atr_hash_dword common)
+{
+	u32 hi_hash_dword, lo_hash_dword, flow_vm_vlan;
+	u32 sig_hash = 0, bucket_hash = 0, common_hash = 0;
+
+	/* record the flow_vm_vlan bits as they are a key part to the hash */
+	flow_vm_vlan = TXGBE_NTOHL(input.dword);
+
+	/* generate common hash dword */
+	hi_hash_dword = TXGBE_NTOHL(common.dword);
+
+	/* low dword is word swapped version of common */
+	lo_hash_dword = (hi_hash_dword >> 16) | (hi_hash_dword << 16);
+
+	/* apply flow ID/VM pool/VLAN ID bits to hash words */
+	hi_hash_dword ^= flow_vm_vlan ^ (flow_vm_vlan >> 16);
+
+	/* Process bits 0 and 16 */
+	TXGBE_COMPUTE_SIG_HASH_ITERATION(0);
+
+	/*
+	 * apply flow ID/VM pool/VLAN ID bits to lo hash dword, we had to
+	 * delay this because bit 0 of the stream should not be processed
+	 * so we do not add the VLAN until after bit 0 was processed
+	 */
+	lo_hash_dword ^= flow_vm_vlan ^ (flow_vm_vlan << 16);
+
+	/* Process remaining 30 bits of the key */
+	TXGBE_COMPUTE_SIG_HASH_ITERATION(1);
+	TXGBE_COMPUTE_SIG_HASH_ITERATION(2);
+	TXGBE_COMPUTE_SIG_HASH_ITERATION(3);
+	TXGBE_COMPUTE_SIG_HASH_ITERATION(4);
+	TXGBE_COMPUTE_SIG_HASH_ITERATION(5);
+	TXGBE_COMPUTE_SIG_HASH_ITERATION(6);
+	TXGBE_COMPUTE_SIG_HASH_ITERATION(7);
+	TXGBE_COMPUTE_SIG_HASH_ITERATION(8);
+	TXGBE_COMPUTE_SIG_HASH_ITERATION(9);
+	TXGBE_COMPUTE_SIG_HASH_ITERATION(10);
+	TXGBE_COMPUTE_SIG_HASH_ITERATION(11);
+	TXGBE_COMPUTE_SIG_HASH_ITERATION(12);
+	TXGBE_COMPUTE_SIG_HASH_ITERATION(13);
+	TXGBE_COMPUTE_SIG_HASH_ITERATION(14);
+	TXGBE_COMPUTE_SIG_HASH_ITERATION(15);
+
+	/* combine common_hash result with signature and bucket hashes */
+	bucket_hash ^= common_hash;
+	bucket_hash &= TXGBE_ATR_HASH_MASK;
+
+	sig_hash ^= common_hash << 16;
+	sig_hash &= TXGBE_ATR_HASH_MASK << 16;
+
+	/* return completed signature hash */
+	return sig_hash ^ bucket_hash;
+}
+
+/**
+ *  txgbe_fdir_add_signature_filter - Adds a signature hash filter
+ *  @hw:
pointer to hardware structure + * @input: unique input dword + * @common: compressed common input dword + * @queue: queue index to direct traffic to + **/ +s32 txgbe_fdir_add_signature_filter(struct txgbe_hw *hw, + union txgbe_atr_hash_dword input, + union txgbe_atr_hash_dword common, + u8 queue) +{ + u32 fdirhashcmd = 0; + u8 flow_type; + u32 fdircmd; + s32 err; + + DEBUGFUNC("\n"); + + /* + * Get the flow_type in order to program FDIRCMD properly + * lowest 2 bits are FDIRCMD.L4TYPE, third lowest bit is FDIRCMD.IPV6 + * fifth is FDIRCMD.TUNNEL_FILTER + */ + flow_type = input.formatted.flow_type; + switch (flow_type) { + case TXGBE_ATR_FLOW_TYPE_TCPV4: + case TXGBE_ATR_FLOW_TYPE_UDPV4: + case TXGBE_ATR_FLOW_TYPE_SCTPV4: + case TXGBE_ATR_FLOW_TYPE_TCPV6: + case TXGBE_ATR_FLOW_TYPE_UDPV6: + case TXGBE_ATR_FLOW_TYPE_SCTPV6: + break; + default: + DEBUGOUT(" Error on flow type input\n"); + return TXGBE_ERR_CONFIG; + } + + /* configure FDIRCMD register */ + fdircmd = TXGBE_RDB_FDIR_CMD_CMD_ADD_FLOW | + TXGBE_RDB_FDIR_CMD_FILTER_UPDATE | + TXGBE_RDB_FDIR_CMD_LAST | TXGBE_RDB_FDIR_CMD_QUEUE_EN; + fdircmd |= (u32)flow_type << TXGBE_RDB_FDIR_CMD_FLOW_TYPE_SHIFT; + fdircmd |= (u32)queue << TXGBE_RDB_FDIR_CMD_RX_QUEUE_SHIFT; + + fdirhashcmd |= txgbe_atr_compute_sig_hash(input, common); + fdirhashcmd |= 0x1 << TXGBE_RDB_FDIR_HASH_BUCKET_VALID_SHIFT; + wr32(hw, TXGBE_RDB_FDIR_HASH, fdirhashcmd); + + wr32(hw, TXGBE_RDB_FDIR_CMD, fdircmd); + + err = txgbe_fdir_check_cmd_complete(hw, &fdircmd); + if (err) { + DEBUGOUT("Flow Director command did not complete!\n"); + return err; + } + + DEBUGOUT2("Tx Queue=%x hash=%x\n", queue, (u32)fdirhashcmd); + + return 0; +} + +#define TXGBE_COMPUTE_BKT_HASH_ITERATION(_n) \ +do { \ + u32 n = (_n); \ + if (TXGBE_ATR_BUCKET_HASH_KEY & (0x01 << n)) \ + bucket_hash ^= lo_hash_dword >> n; \ + if (TXGBE_ATR_BUCKET_HASH_KEY & (0x01 << (n + 16))) \ + bucket_hash ^= hi_hash_dword >> n; \ +} while (0) + +/** + * txgbe_atr_compute_perfect_hash - Compute 
the perfect filter hash
+ *  @input: input bitstream to compute the hash on
+ *  @input_mask: mask for the input bitstream
+ *
+ *  This function serves two main purposes.  First it applies the input_mask
+ *  to the atr_input resulting in a cleaned up atr_input data stream.
+ *  Secondly it computes the hash and stores it in the bkt_hash field at
+ *  the end of the input byte stream.  This way it will be available for
+ *  future use without needing to recompute the hash.
+ **/
+void txgbe_atr_compute_perfect_hash(union txgbe_atr_input *input,
+				    union txgbe_atr_input *input_mask)
+{
+	u32 hi_hash_dword, lo_hash_dword, flow_vm_vlan;
+	u32 bucket_hash = 0;
+	u32 hi_dword = 0;
+	u32 i = 0;
+
+	/* Apply masks to input data */
+	for (i = 0; i < 11; i++)
+		input->dword_stream[i] &= input_mask->dword_stream[i];
+
+	/* record the flow_vm_vlan bits as they are a key part to the hash */
+	flow_vm_vlan = TXGBE_NTOHL(input->dword_stream[0]);
+
+	/* generate common hash dword */
+	for (i = 1; i <= 10; i++)
+		hi_dword ^= input->dword_stream[i];
+	hi_hash_dword = TXGBE_NTOHL(hi_dword);
+
+	/* low dword is word swapped version of common */
+	lo_hash_dword = (hi_hash_dword >> 16) | (hi_hash_dword << 16);
+
+	/* apply flow ID/VM pool/VLAN ID bits to hash words */
+	hi_hash_dword ^= flow_vm_vlan ^ (flow_vm_vlan >> 16);
+
+	/* Process bits 0 and 16 */
+	TXGBE_COMPUTE_BKT_HASH_ITERATION(0);
+
+	/*
+	 * apply flow ID/VM pool/VLAN ID bits to lo hash dword, we had to
+	 * delay this because bit 0 of the stream should not be processed
+	 * so we do not add the VLAN until after bit 0 was processed
+	 */
+	lo_hash_dword ^= flow_vm_vlan ^ (flow_vm_vlan << 16);
+
+	/* Process remaining 30 bits of the key */
+	for (i = 1; i <= 15; i++)
+		TXGBE_COMPUTE_BKT_HASH_ITERATION(i);
+
+	/*
+	 * Limit hash to 13 bits since max bucket count is 8K.
+	 * Store result at the end of the input stream.
+	 */
+	input->formatted.bkt_hash = bucket_hash & 0x1FFF;
+}
+
+/**
+ *  txgbe_get_fdirtcpm - generate a TCP port from atr_input_masks
+ *  @input_mask: mask to be bit swapped
+ *
+ *  The source and destination port masks for flow director are bit swapped
+ *  in that bit 15 affects bit 0, 14 affects 1, 13 affects 2, etc.  In order
+ *  to generate a correctly swapped value we need to bit swap the mask and
+ *  that is what is accomplished by this function.
+ **/
+STATIC u32 txgbe_get_fdirtcpm(union txgbe_atr_input *input_mask)
+{
+	u32 mask = TXGBE_NTOHS(input_mask->formatted.dst_port);
+
+	mask <<= TXGBE_RDB_FDIR_TCP_MSK_DPORTM_SHIFT;
+	mask |= TXGBE_NTOHS(input_mask->formatted.src_port);
+	mask = ((mask & 0x55555555) << 1) | ((mask & 0xAAAAAAAA) >> 1);
+	mask = ((mask & 0x33333333) << 2) | ((mask & 0xCCCCCCCC) >> 2);
+	mask = ((mask & 0x0F0F0F0F) << 4) | ((mask & 0xF0F0F0F0) >> 4);
+	return ((mask & 0x00FF00FF) << 8) | ((mask & 0xFF00FF00) >> 8);
+}
+
+/*
+ * These two macros are meant to address the fact that we have registers
+ * that are either all or in part big-endian.  As a result on big-endian
+ * systems we will end up byte swapping the value to little-endian before
+ * it is byte swapped again and written to the hardware in the original
+ * big-endian format.
+ */ +#define TXGBE_STORE_AS_BE32(_value) \ + (((u32)(_value) >> 24) | (((u32)(_value) & 0x00FF0000) >> 8) | \ + (((u32)(_value) & 0x0000FF00) << 8) | ((u32)(_value) << 24)) + +#define TXGBE_WRITE_REG_BE32(a, reg, value) \ + wr32((a), (reg), TXGBE_STORE_AS_BE32(TXGBE_NTOHL(value))) + +#define TXGBE_STORE_AS_BE16(_value) \ + TXGBE_NTOHS(((u16)(_value) >> 8) | ((u16)(_value) << 8)) + +s32 txgbe_fdir_set_input_mask(struct txgbe_hw *hw, + union txgbe_atr_input *input_mask, + bool cloud_mode) +{ + /* mask IPv6 since it is currently not supported */ + u32 fdirm = 0; + u32 fdirtcpm; + u32 flex = 0; + int i, j; + struct txgbe_adapter *adapter = (struct txgbe_adapter *)hw->back; + + UNREFERENCED_PARAMETER(cloud_mode); + UNREFERENCED_PARAMETER(adapter); + + DEBUGFUNC("\n"); + + /* + * Program the relevant mask registers. If src/dst_port or src/dst_addr + * are zero, then assume a full mask for that field. Also assume that + * a VLAN of 0 is unspecified, so mask that out as well. L4type + * cannot be masked out in this implementation. + * + * This also assumes IPv4 only. IPv6 masking isn't supported at this + * point in time. 
+	 */
+
+	/* verify bucket hash is cleared on hash generation */
+	if (input_mask->formatted.bkt_hash)
+		DEBUGOUT(" bucket hash should always be 0 in mask\n");
+
+	/* Program FDIRM and verify partial masks */
+	switch (input_mask->formatted.vm_pool & 0x7F) {
+	case 0x0:
+		fdirm |= TXGBE_RDB_FDIR_OTHER_MSK_POOL;
+		/* fall through */
+	case 0x7F:
+		break;
+	default:
+		DEBUGOUT(" Error on vm pool mask\n");
+		return TXGBE_ERR_CONFIG;
+	}
+
+	switch (input_mask->formatted.flow_type & TXGBE_ATR_L4TYPE_MASK) {
+	case 0x0:
+		fdirm |= TXGBE_RDB_FDIR_OTHER_MSK_L4P;
+		if (input_mask->formatted.dst_port ||
+		    input_mask->formatted.src_port) {
+			DEBUGOUT(" Error on src/dst port mask\n");
+			return TXGBE_ERR_CONFIG;
+		}
+		/* fall through */
+	case TXGBE_ATR_L4TYPE_MASK:
+		break;
+	default:
+		DEBUGOUT(" Error on flow type mask\n");
+		return TXGBE_ERR_CONFIG;
+	}
+
+	/* Now mask VM pool and destination IPv6 - bits 5 and 2 */
+	wr32(hw, TXGBE_RDB_FDIR_OTHER_MSK, fdirm);
+
+	i = VMDQ_P(0) / 4;
+	j = VMDQ_P(0) % 4;
+	flex = rd32m(hw, TXGBE_RDB_FDIR_FLEX_CFG(i),
+		     ~((TXGBE_RDB_FDIR_FLEX_CFG_BASE_MSK |
+			TXGBE_RDB_FDIR_FLEX_CFG_MSK |
+			TXGBE_RDB_FDIR_FLEX_CFG_OFST) <<
+		       (TXGBE_RDB_FDIR_FLEX_CFG_VM_SHIFT * j)));
+	flex |= (TXGBE_RDB_FDIR_FLEX_CFG_BASE_MAC |
+		 0x6 << TXGBE_RDB_FDIR_FLEX_CFG_OFST_SHIFT) <<
+		(TXGBE_RDB_FDIR_FLEX_CFG_VM_SHIFT * j);
+
+	switch (input_mask->formatted.flex_bytes & 0xFFFF) {
+	case 0x0000:
+		/* Mask Flex Bytes */
+		flex |= TXGBE_RDB_FDIR_FLEX_CFG_MSK <<
+			(TXGBE_RDB_FDIR_FLEX_CFG_VM_SHIFT * j);
+		/* fall through */
+	case 0xFFFF:
+		break;
+	default:
+		DEBUGOUT(" Error on flexible byte mask\n");
+		return TXGBE_ERR_CONFIG;
+	}
+	wr32(hw, TXGBE_RDB_FDIR_FLEX_CFG(i), flex);
+
+	/* store the TCP/UDP port masks, bit reversed from port
+	 * layout
+	 */
+	fdirtcpm = txgbe_get_fdirtcpm(input_mask);
+
+	/* write both the same so that UDP and TCP use the same mask */
+	wr32(hw, TXGBE_RDB_FDIR_TCP_MSK, ~fdirtcpm);
+	wr32(hw, TXGBE_RDB_FDIR_UDP_MSK, ~fdirtcpm);
+	wr32(hw, TXGBE_RDB_FDIR_SCTP_MSK, ~fdirtcpm);
+
+	/* store
source and destination IP masks (little-endian) */ + wr32(hw, TXGBE_RDB_FDIR_SA4_MSK, + TXGBE_NTOHL(~input_mask->formatted.src_ip[0])); + wr32(hw, TXGBE_RDB_FDIR_DA4_MSK, + TXGBE_NTOHL(~input_mask->formatted.dst_ip[0])); + return 0; +} + +s32 txgbe_fdir_write_perfect_filter(struct txgbe_hw *hw, + union txgbe_atr_input *input, + u16 soft_id, u8 queue, + bool cloud_mode) +{ + u32 fdirport, fdirvlan, fdirhash, fdircmd; + s32 err; + + DEBUGFUNC("\n"); + if (!cloud_mode) { + /* currently IPv6 is not supported, must be programmed with 0 */ + wr32(hw, TXGBE_RDB_FDIR_IP6(2), + TXGBE_NTOHL(input->formatted.src_ip[0])); + wr32(hw, TXGBE_RDB_FDIR_IP6(1), + TXGBE_NTOHL(input->formatted.src_ip[1])); + wr32(hw, TXGBE_RDB_FDIR_IP6(0), + TXGBE_NTOHL(input->formatted.src_ip[2])); + + /* record the source address (little-endian) */ + wr32(hw, TXGBE_RDB_FDIR_SA, + TXGBE_NTOHL(input->formatted.src_ip[0])); + + /* record the first 32 bits of the destination address + * (little-endian) */ + wr32(hw, TXGBE_RDB_FDIR_DA, + TXGBE_NTOHL(input->formatted.dst_ip[0])); + + /* record source and destination port (little-endian) */ + fdirport = TXGBE_NTOHS(input->formatted.dst_port); + fdirport <<= TXGBE_RDB_FDIR_PORT_DESTINATION_SHIFT; + fdirport |= TXGBE_NTOHS(input->formatted.src_port); + wr32(hw, TXGBE_RDB_FDIR_PORT, fdirport); + } + + /* record packet type and flex_bytes (little-endian) */ + fdirvlan = TXGBE_NTOHS(input->formatted.flex_bytes); + fdirvlan <<= TXGBE_RDB_FDIR_FLEX_FLEX_SHIFT; + + fdirvlan |= TXGBE_NTOHS(input->formatted.vlan_id); + wr32(hw, TXGBE_RDB_FDIR_FLEX, fdirvlan); + + /* configure FDIRHASH register */ + fdirhash = input->formatted.bkt_hash | + 0x1 << TXGBE_RDB_FDIR_HASH_BUCKET_VALID_SHIFT; + fdirhash |= soft_id << TXGBE_RDB_FDIR_HASH_SIG_SW_INDEX_SHIFT; + wr32(hw, TXGBE_RDB_FDIR_HASH, fdirhash); + + /* + * flush all previous writes to make certain registers are + * programmed prior to issuing the command + */ + TXGBE_WRITE_FLUSH(hw); + + /* configure FDIRCMD register */ +
fdircmd = TXGBE_RDB_FDIR_CMD_CMD_ADD_FLOW | + TXGBE_RDB_FDIR_CMD_FILTER_UPDATE | + TXGBE_RDB_FDIR_CMD_LAST | TXGBE_RDB_FDIR_CMD_QUEUE_EN; + if (queue == TXGBE_RDB_FDIR_DROP_QUEUE) + fdircmd |= TXGBE_RDB_FDIR_CMD_DROP; + fdircmd |= input->formatted.flow_type << + TXGBE_RDB_FDIR_CMD_FLOW_TYPE_SHIFT; + fdircmd |= (u32)queue << TXGBE_RDB_FDIR_CMD_RX_QUEUE_SHIFT; + fdircmd |= (u32)input->formatted.vm_pool << + TXGBE_RDB_FDIR_CMD_VT_POOL_SHIFT; + + wr32(hw, TXGBE_RDB_FDIR_CMD, fdircmd); + err = txgbe_fdir_check_cmd_complete(hw, &fdircmd); + if (err) { + DEBUGOUT("Flow Director command did not complete!\n"); + return err; + } + + return 0; +} + +s32 txgbe_fdir_erase_perfect_filter(struct txgbe_hw *hw, + union txgbe_atr_input *input, + u16 soft_id) +{ + u32 fdirhash; + u32 fdircmd; + s32 err; + + /* configure FDIRHASH register */ + fdirhash = input->formatted.bkt_hash; + fdirhash |= soft_id << TXGBE_RDB_FDIR_HASH_SIG_SW_INDEX_SHIFT; + wr32(hw, TXGBE_RDB_FDIR_HASH, fdirhash); + + /* flush hash to HW */ + TXGBE_WRITE_FLUSH(hw); + + /* Query if filter is present */ + wr32(hw, TXGBE_RDB_FDIR_CMD, + TXGBE_RDB_FDIR_CMD_CMD_QUERY_REM_FILT); + + err = txgbe_fdir_check_cmd_complete(hw, &fdircmd); + if (err) { + DEBUGOUT("Flow Director command did not complete!\n"); + return err; + } + + /* if filter exists in hardware then remove it */ + if (fdircmd & TXGBE_RDB_FDIR_CMD_FILTER_VALID) { + wr32(hw, TXGBE_RDB_FDIR_HASH, fdirhash); + TXGBE_WRITE_FLUSH(hw); + wr32(hw, TXGBE_RDB_FDIR_CMD, + TXGBE_RDB_FDIR_CMD_CMD_REMOVE_FLOW); + } + + return 0; +} + + +/** + * txgbe_start_hw - Prepare hardware for Tx/Rx + * @hw: pointer to hardware structure + * + * Starts the hardware using the generic start_hw function + * and the generation start_hw function. + * Then performs revision-specific operations, if any. 
+ **/ +s32 txgbe_start_hw(struct txgbe_hw *hw) +{ + int ret_val = 0; + u32 i; + + DEBUGFUNC("\n"); + + /* Set the media type */ + hw->phy.media_type = TCALL(hw, mac.ops.get_media_type); + + /* PHY ops initialization must be done in reset_hw() */ + + /* Clear the VLAN filter table */ + TCALL(hw, mac.ops.clear_vfta); + + /* Clear statistics registers */ + TCALL(hw, mac.ops.clear_hw_cntrs); + + TXGBE_WRITE_FLUSH(hw); + + /* Setup flow control */ + ret_val = TCALL(hw, mac.ops.setup_fc); + + /* Clear the rate limiters */ + for (i = 0; i < hw->mac.max_tx_queues; i++) { + wr32(hw, TXGBE_TDM_RP_IDX, i); + wr32(hw, TXGBE_TDM_RP_RATE, 0); + } + TXGBE_WRITE_FLUSH(hw); + + /* Clear adapter stopped flag */ + hw->adapter_stopped = false; + + /* We need to run link autotry after the driver loads */ + hw->mac.autotry_restart = true; + + return ret_val; +} + +/** + * txgbe_identify_phy - Get physical layer module + * @hw: pointer to hardware structure + * + * Determines the physical layer module found on the current adapter. + * If PHY already detected, maintains current PHY type in hw struct, + * otherwise executes the PHY detection routine. + **/ +s32 txgbe_identify_phy(struct txgbe_hw *hw) +{ + /* Detect PHY if not unknown - returns success if already detected. 
*/ + s32 status = TXGBE_ERR_PHY_ADDR_INVALID; + enum txgbe_media_type media_type; + + DEBUGFUNC("\n"); + + if (!hw->phy.phy_semaphore_mask) { + hw->phy.phy_semaphore_mask = TXGBE_MNG_SWFW_SYNC_SW_PHY; + } + + media_type = TCALL(hw, mac.ops.get_media_type); + if (media_type == txgbe_media_type_copper) { + status = txgbe_init_external_phy(hw); + if (status != 0) { + return status; + } + txgbe_get_phy_id(hw); + hw->phy.type = txgbe_get_phy_type_from_id(hw); + status = 0; + } else if (media_type == txgbe_media_type_fiber) { + status = txgbe_identify_module(hw); + } else { + hw->phy.type = txgbe_phy_none; + status = 0; + } + + /* Return error if SFP module has been detected but is not supported */ + if (hw->phy.type == txgbe_phy_sfp_unsupported) + return TXGBE_ERR_SFP_NOT_SUPPORTED; + + return status; +} + + +/** + * txgbe_enable_rx_dma - Enable the Rx DMA unit on sapphire + * @hw: pointer to hardware structure + * @regval: register value to write to RXCTRL + * + * Enables the Rx DMA unit for sapphire + **/ +s32 txgbe_enable_rx_dma(struct txgbe_hw *hw, u32 regval) +{ + + DEBUGFUNC("\n"); + + /* + * Workaround for sapphire silicon errata when enabling the Rx datapath. + * If traffic is incoming before we enable the Rx unit, it could hang + * the Rx DMA unit. Therefore, make sure the security engine is + * completely disabled prior to enabling the Rx unit. + */ + + TCALL(hw, mac.ops.disable_sec_rx_path); + + if (regval & TXGBE_RDB_PB_CTL_RXEN) + TCALL(hw, mac.ops.enable_rx); + else + TCALL(hw, mac.ops.disable_rx); + + TCALL(hw, mac.ops.enable_sec_rx_path); + + return 0; +} + +/** + * txgbe_init_flash_params - Initialize flash params + * @hw: pointer to hardware structure + * + * Initializes the EEPROM parameters txgbe_eeprom_info within the + * txgbe_hw struct in order to set up EEPROM access. 
+ **/ +s32 txgbe_init_flash_params(struct txgbe_hw *hw) +{ + struct txgbe_flash_info *flash = &hw->flash; + u32 eec; + + DEBUGFUNC("\n"); + + eec = 0x1000000; + flash->semaphore_delay = 10; + flash->dword_size = (eec >> 2); + flash->address_bits = 24; + DEBUGOUT3("FLASH params: size = %d, address bits: %d\n", + flash->dword_size, + flash->address_bits); + + return 0; +} + +/** + * txgbe_read_flash_buffer - Read FLASH dword(s) using + * fastest available method + * + * @hw: pointer to hardware structure + * @offset: offset of dword in EEPROM to read + * @dwords: number of dwords + * @data: dword(s) read from the EEPROM + * + * Retrieves 32 bit dword(s) read from EEPROM + **/ +s32 txgbe_read_flash_buffer(struct txgbe_hw *hw, u32 offset, + u32 dwords, u32 *data) +{ + s32 status = 0; + u32 i; + + DEBUGFUNC("\n"); + + TCALL(hw, eeprom.ops.init_params); + + if (!dwords || offset + dwords > hw->flash.dword_size) { + status = TXGBE_ERR_INVALID_ARGUMENT; + ERROR_REPORT1(TXGBE_ERROR_ARGUMENT, "Invalid FLASH arguments"); + return status; + } + + for (i = 0; i < dwords; i++) { + wr32(hw, TXGBE_SPI_CMD, + TXGBE_SPI_CMD_ADDR(offset + i) | + TXGBE_SPI_CMD_CMD(0x0)); + + status = po32m(hw, TXGBE_SPI_STATUS, + TXGBE_SPI_STATUS_OPDONE, TXGBE_SPI_STATUS_OPDONE, + TXGBE_SPI_TIMEOUT, 0); + if (status) { + DEBUGOUT("FLASH read timed out\n"); + break; + } + data[i] = rd32(hw, TXGBE_SPI_DATA); + } + + return status; +} + +/** + * txgbe_write_flash_buffer - Write FLASH dword(s) using + * fastest available method + * + * @hw: pointer to hardware structure + * @offset: offset of dword in EEPROM to write + * @dwords: number of dwords + * @data: dword(s) to write to the EEPROM + * + **/ +s32 txgbe_write_flash_buffer(struct txgbe_hw *hw, u32 offset, + u32 dwords, u32 *data) +{ + s32 status = 0; + u32 i; + + DEBUGFUNC("\n"); + + TCALL(hw, eeprom.ops.init_params); + + if (!dwords || offset + dwords > hw->flash.dword_size) { + status = TXGBE_ERR_INVALID_ARGUMENT; +
ERROR_REPORT1(TXGBE_ERROR_ARGUMENT, "Invalid FLASH arguments"); + return status; + } + + for (i = 0; i < dwords; i++) { + wr32(hw, TXGBE_SPI_DATA, data[i]); + wr32(hw, TXGBE_SPI_CMD, + TXGBE_SPI_CMD_ADDR(offset + i) | + TXGBE_SPI_CMD_CMD(0x1)); + + status = po32m(hw, TXGBE_SPI_STATUS, + TXGBE_SPI_STATUS_OPDONE, TXGBE_SPI_STATUS_OPDONE, + TXGBE_SPI_TIMEOUT, 0); + if (status != 0) { + DEBUGOUT("FLASH write timed out\n"); + break; + } + } + + return status; +} + +/** + * txgbe_init_eeprom_params - Initialize EEPROM params + * @hw: pointer to hardware structure + * + * Initializes the EEPROM parameters txgbe_eeprom_info within the + * txgbe_hw struct in order to set up EEPROM access. + **/ +s32 txgbe_init_eeprom_params(struct txgbe_hw *hw) +{ + struct txgbe_eeprom_info *eeprom = &hw->eeprom; + u16 eeprom_size; + s32 status = 0; + u16 data; + + DEBUGFUNC("\n"); + + if (eeprom->type == txgbe_eeprom_uninitialized) { + eeprom->semaphore_delay = 10; + eeprom->type = txgbe_eeprom_none; + + if (!(rd32(hw, TXGBE_SPI_STATUS) & + TXGBE_SPI_STATUS_FLASH_BYPASS)) { + eeprom->type = txgbe_flash; + + eeprom_size = 4096; + eeprom->word_size = eeprom_size >> 1; + + DEBUGOUT2("Eeprom params: type = %d, size = %d\n", + eeprom->type, eeprom->word_size); + } + } + + status = TCALL(hw, eeprom.ops.read, TXGBE_SW_REGION_PTR, + &data); + if (status) { + DEBUGOUT("NVM Read Error\n"); + return status; + } + eeprom->sw_region_offset = data >> 1; + + return status; +} + +/** + * txgbe_read_ee_hostif_data - Read EEPROM word using a host interface cmd + * assuming that the semaphore is already obtained. + * @hw: pointer to hardware structure + * @offset: offset of word in the EEPROM to read + * @data: word read from the EEPROM + * + * Reads a 16 bit word from the EEPROM using the hostif.
+ **/ +s32 txgbe_read_ee_hostif_data(struct txgbe_hw *hw, u16 offset, + u16 *data) +{ + s32 status; + struct txgbe_hic_read_shadow_ram buffer; + + DEBUGFUNC("\n"); + buffer.hdr.req.cmd = FW_READ_SHADOW_RAM_CMD; + buffer.hdr.req.buf_lenh = 0; + buffer.hdr.req.buf_lenl = FW_READ_SHADOW_RAM_LEN; + buffer.hdr.req.checksum = FW_DEFAULT_CHECKSUM; + + /* convert offset from words to bytes */ + buffer.address = TXGBE_CPU_TO_BE32(offset * 2); + /* one word */ + buffer.length = TXGBE_CPU_TO_BE16(sizeof(u16)); + + status = txgbe_host_interface_command(hw, (u32 *)&buffer, + sizeof(buffer), + TXGBE_HI_COMMAND_TIMEOUT, false); + + if (status) + return status; + if (txgbe_check_mng_access(hw)) { + *data = (u16)rd32a(hw, TXGBE_MNG_MBOX, + FW_NVM_DATA_OFFSET); + } else { + status = TXGBE_ERR_MNG_ACCESS_FAILED; + return status; + } + + return 0; +} + +/** + * txgbe_read_ee_hostif - Read EEPROM word using a host interface cmd + * @hw: pointer to hardware structure + * @offset: offset of word in the EEPROM to read + * @data: word read from the EEPROM + * + * Reads a 16 bit word from the EEPROM using the hostif. + **/ +s32 txgbe_read_ee_hostif(struct txgbe_hw *hw, u16 offset, + u16 *data) +{ + s32 status = 0; + + DEBUGFUNC("\n"); + + if (TCALL(hw, mac.ops.acquire_swfw_sync, + TXGBE_MNG_SWFW_SYNC_SW_FLASH) == 0) { + status = txgbe_read_ee_hostif_data(hw, offset, data); + TCALL(hw, mac.ops.release_swfw_sync, + TXGBE_MNG_SWFW_SYNC_SW_FLASH); + } else { + status = TXGBE_ERR_SWFW_SYNC; + } + + return status; +} + +/** + * txgbe_read_ee_hostif_buffer - Read EEPROM word(s) using hostif + * @hw: pointer to hardware structure + * @offset: offset of word in the EEPROM to read + * @words: number of words + * @data: word(s) read from the EEPROM + * + * Reads a 16 bit word(s) from the EEPROM using the hostif.
+ **/ +s32 txgbe_read_ee_hostif_buffer(struct txgbe_hw *hw, + u16 offset, u16 words, u16 *data) +{ + struct txgbe_hic_read_shadow_ram buffer; + u32 current_word = 0; + u16 words_to_read; + s32 status; + u32 i; + u32 value = 0; + + DEBUGFUNC("\n"); + + /* Take semaphore for the entire operation. */ + status = TCALL(hw, mac.ops.acquire_swfw_sync, + TXGBE_MNG_SWFW_SYNC_SW_FLASH); + if (status) { + DEBUGOUT("EEPROM read buffer - semaphore failed\n"); + return status; + } + while (words) { + if (words > FW_MAX_READ_BUFFER_SIZE / 2) + words_to_read = FW_MAX_READ_BUFFER_SIZE / 2; + else + words_to_read = words; + + buffer.hdr.req.cmd = FW_READ_SHADOW_RAM_CMD; + buffer.hdr.req.buf_lenh = 0; + buffer.hdr.req.buf_lenl = FW_READ_SHADOW_RAM_LEN; + buffer.hdr.req.checksum = FW_DEFAULT_CHECKSUM; + + /* convert offset from words to bytes */ + buffer.address = TXGBE_CPU_TO_BE32((offset + current_word) * 2); + buffer.length = TXGBE_CPU_TO_BE16(words_to_read * 2); + + status = txgbe_host_interface_command(hw, (u32 *)&buffer, + sizeof(buffer), + TXGBE_HI_COMMAND_TIMEOUT, + false); + + if (status) { + DEBUGOUT("Host interface command failed\n"); + goto out; + } + + for (i = 0; i < words_to_read; i++) { + u32 reg = TXGBE_MNG_MBOX + (FW_NVM_DATA_OFFSET << 2) + + 2 * i; + if (txgbe_check_mng_access(hw)) { + value = rd32(hw, reg); + } else { + status = TXGBE_ERR_MNG_ACCESS_FAILED; + goto out; + } + data[current_word] = (u16)(value & 0xffff); + current_word++; + i++; + if (i < words_to_read) { + value >>= 16; + data[current_word] = (u16)(value & 0xffff); + current_word++; + } + } + words -= words_to_read; + } + +out: + TCALL(hw, mac.ops.release_swfw_sync, + TXGBE_MNG_SWFW_SYNC_SW_FLASH); + return status; +} + +/** + * txgbe_write_ee_hostif_data - Write EEPROM word using hostif + * @hw: pointer to hardware structure + * @offset: offset of word in the EEPROM to write + * @data: word to write to the EEPROM + * + * Write a 16 bit word to the EEPROM using the hostif.
+ **/ +s32 txgbe_write_ee_hostif_data(struct txgbe_hw *hw, u16 offset, + u16 data) +{ + s32 status; + struct txgbe_hic_write_shadow_ram buffer; + + DEBUGFUNC("\n"); + + buffer.hdr.req.cmd = FW_WRITE_SHADOW_RAM_CMD; + buffer.hdr.req.buf_lenh = 0; + buffer.hdr.req.buf_lenl = FW_WRITE_SHADOW_RAM_LEN; + buffer.hdr.req.checksum = FW_DEFAULT_CHECKSUM; + + /* one word */ + buffer.length = TXGBE_CPU_TO_BE16(sizeof(u16)); + buffer.data = data; + buffer.address = TXGBE_CPU_TO_BE32(offset * 2); + + status = txgbe_host_interface_command(hw, (u32 *)&buffer, + sizeof(buffer), + TXGBE_HI_COMMAND_TIMEOUT, false); + + return status; +} + +/** + * txgbe_write_ee_hostif - Write EEPROM word using hostif + * @hw: pointer to hardware structure + * @offset: offset of word in the EEPROM to write + * @data: word to write to the EEPROM + * + * Write a 16 bit word to the EEPROM using the hostif. + **/ +s32 txgbe_write_ee_hostif(struct txgbe_hw *hw, u16 offset, + u16 data) +{ + s32 status = 0; + + DEBUGFUNC("\n"); + + if (TCALL(hw, mac.ops.acquire_swfw_sync, + TXGBE_MNG_SWFW_SYNC_SW_FLASH) == 0) { + status = txgbe_write_ee_hostif_data(hw, offset, data); + TCALL(hw, mac.ops.release_swfw_sync, + TXGBE_MNG_SWFW_SYNC_SW_FLASH); + } else { + DEBUGOUT("write ee hostif failed to get semaphore\n"); + status = TXGBE_ERR_SWFW_SYNC; + } + + return status; +} + +/** + * txgbe_write_ee_hostif_buffer - Write EEPROM word(s) using hostif + * @hw: pointer to hardware structure + * @offset: offset of word in the EEPROM to write + * @words: number of words + * @data: word(s) to write to the EEPROM + * + * Write a 16 bit word(s) to the EEPROM using the hostif. + **/ +s32 txgbe_write_ee_hostif_buffer(struct txgbe_hw *hw, + u16 offset, u16 words, u16 *data) +{ + s32 status = 0; + u16 i = 0; + + DEBUGFUNC("\n"); + + /* Take semaphore for the entire operation.
*/ + status = TCALL(hw, mac.ops.acquire_swfw_sync, + TXGBE_MNG_SWFW_SYNC_SW_FLASH); + if (status != 0) { + DEBUGOUT("EEPROM write buffer - semaphore failed\n"); + goto out; + } + + for (i = 0; i < words; i++) { + status = txgbe_write_ee_hostif_data(hw, offset + i, + data[i]); + + if (status != 0) { + DEBUGOUT("Eeprom buffered write failed\n"); + break; + } + } + + TCALL(hw, mac.ops.release_swfw_sync, TXGBE_MNG_SWFW_SYNC_SW_FLASH); +out: + + return status; +} + +/** + * txgbe_calc_eeprom_checksum - Calculates and returns the checksum + * @hw: pointer to hardware structure + * + * Returns a negative error code on error, or the 16-bit checksum + **/ +s32 txgbe_calc_eeprom_checksum(struct txgbe_hw *hw) +{ + u16 *buffer = NULL; + u32 buffer_size = 0; + + u16 *eeprom_ptrs = NULL; + u16 *local_buffer; + s32 status; + u16 checksum = 0; + u16 i; + + DEBUGFUNC("\n"); + + TCALL(hw, eeprom.ops.init_params); + + if (!buffer) { + eeprom_ptrs = (u16 *)vmalloc(TXGBE_EEPROM_LAST_WORD * + sizeof(u16)); + if (!eeprom_ptrs) + return TXGBE_ERR_NO_SPACE; + /* Read pointer area */ + status = txgbe_read_ee_hostif_buffer(hw, 0, + TXGBE_EEPROM_LAST_WORD, + eeprom_ptrs); + if (status) { + DEBUGOUT("Failed to read EEPROM image\n"); + vfree(eeprom_ptrs); + return status; + } + local_buffer = eeprom_ptrs; + } else { + if (buffer_size < TXGBE_EEPROM_LAST_WORD) + return TXGBE_ERR_PARAM; + local_buffer = buffer; + } + + for (i = 0; i < TXGBE_EEPROM_LAST_WORD; i++) + if (i != hw->eeprom.sw_region_offset + TXGBE_EEPROM_CHECKSUM) + checksum += local_buffer[i]; + + checksum = (u16)TXGBE_EEPROM_SUM - checksum; + if (eeprom_ptrs) + vfree(eeprom_ptrs); + + return (s32)checksum; +} + +/** + * txgbe_update_eeprom_checksum - Updates the EEPROM checksum and flash + * @hw: pointer to hardware structure + * + * After writing EEPROM to shadow RAM using EEWR register, software calculates + * checksum and updates the EEPROM and instructs the hardware to update + * the flash.
+ **/ +s32 txgbe_update_eeprom_checksum(struct txgbe_hw *hw) +{ + s32 status; + u16 checksum = 0; + + DEBUGFUNC("\n"); + + /* Read the first word from the EEPROM. If this times out or fails, do + * not continue or we could be in for a very long wait while every + * EEPROM read fails + */ + status = txgbe_read_ee_hostif(hw, 0, &checksum); + if (status) { + DEBUGOUT("EEPROM read failed\n"); + return status; + } + + status = txgbe_calc_eeprom_checksum(hw); + if (status < 0) + return status; + + checksum = (u16)(status & 0xffff); + + status = txgbe_write_ee_hostif(hw, hw->eeprom.sw_region_offset + + TXGBE_EEPROM_CHECKSUM, checksum); + if (status) + return status; + + /* instruct HW to copy the shadow RAM to the flash */ + status = txgbe_update_flash(hw); + + return status; +} + +/** + * txgbe_validate_eeprom_checksum - Validate EEPROM checksum + * @hw: pointer to hardware structure + * @checksum_val: calculated checksum + * + * Performs checksum calculation and validates the EEPROM checksum. If the + * caller does not need checksum_val, the value can be NULL. + **/ +s32 txgbe_validate_eeprom_checksum(struct txgbe_hw *hw, + u16 *checksum_val) +{ + s32 status; + u16 checksum; + u16 read_checksum = 0; + + DEBUGFUNC("\n"); + + /* Read the first word from the EEPROM.
If this times out or fails, do + * not continue or we could be in for a very long wait while every + * EEPROM read fails + */ + status = TCALL(hw, eeprom.ops.read, 0, &checksum); + if (status) { + DEBUGOUT("EEPROM read failed\n"); + return status; + } + + status = TCALL(hw, eeprom.ops.calc_checksum); + if (status < 0) + return status; + + checksum = (u16)(status & 0xffff); + + status = txgbe_read_ee_hostif(hw, hw->eeprom.sw_region_offset + + TXGBE_EEPROM_CHECKSUM, + &read_checksum); + if (status) + return status; + + /* Verify read checksum from EEPROM is the same as + * calculated checksum + */ + if (read_checksum != checksum) { + status = TXGBE_ERR_EEPROM_CHECKSUM; + ERROR_REPORT1(TXGBE_ERROR_INVALID_STATE, + "Invalid EEPROM checksum\n"); + } + + /* If the user cares, return the calculated checksum */ + if (checksum_val) + *checksum_val = checksum; + + return status; +} + +/** + * txgbe_update_flash - Instruct HW to copy EEPROM to Flash device + * @hw: pointer to hardware structure + * + * Issue a shadow RAM dump to FW to copy EEPROM from shadow RAM to the flash. 
+ **/ +s32 txgbe_update_flash(struct txgbe_hw *hw) +{ + s32 status = 0; + union txgbe_hic_hdr2 buffer; + + DEBUGFUNC("\n"); + + buffer.req.cmd = FW_SHADOW_RAM_DUMP_CMD; + buffer.req.buf_lenh = 0; + buffer.req.buf_lenl = FW_SHADOW_RAM_DUMP_LEN; + buffer.req.checksum = FW_DEFAULT_CHECKSUM; + + status = txgbe_host_interface_command(hw, (u32 *)&buffer, + sizeof(buffer), + TXGBE_HI_COMMAND_TIMEOUT, false); + + return status; +} + + +/** + * txgbe_check_mac_link - Determine link and speed status + * @hw: pointer to hardware structure + * @speed: pointer to link speed + * @link_up: true when link is up + * @link_up_wait_to_complete: bool used to wait for link up or not + * + * Reads the links register to determine if link is up and the current speed + **/ +s32 txgbe_check_mac_link(struct txgbe_hw *hw, u32 *speed, + bool *link_up, bool link_up_wait_to_complete) +{ + u32 links_reg = 0; + u32 i; + u16 value; + + DEBUGFUNC("\n"); + + if (link_up_wait_to_complete) { + for (i = 0; i < TXGBE_LINK_UP_TIME; i++) { + if (TCALL(hw, mac.ops.get_media_type) == txgbe_media_type_copper && + ((hw->subsystem_id & 0xF0) != TXGBE_ID_SFI_XAUI)) { + /* read ext phy link status */ + txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 0x03, 0x8008, &value); + if (value & 0x400) { + *link_up = true; + } else { + *link_up = false; + } + } else { + *link_up = true; + } + if (*link_up) { + links_reg = rd32(hw, + TXGBE_CFG_PORT_ST); + if (links_reg & TXGBE_CFG_PORT_ST_LINK_UP) { + *link_up = true; + break; + } else { + *link_up = false; + } + } + msleep(100); + } + } else { + if (TCALL(hw, mac.ops.get_media_type) == txgbe_media_type_copper && + ((hw->subsystem_id & 0xF0) != TXGBE_ID_SFI_XAUI)) { + /* read ext phy link status */ + txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 0x03, 0x8008, &value); + if (value & 0x400) { + *link_up = true; + } else { + *link_up = false; + } + } else { + *link_up = true; + } + if (*link_up) { + links_reg = rd32(hw, TXGBE_CFG_PORT_ST); + if (links_reg & 
TXGBE_CFG_PORT_ST_LINK_UP) { + *link_up = true; + } else { + *link_up = false; + } + } + } + + if (*link_up) { + if (TCALL(hw, mac.ops.get_media_type) == txgbe_media_type_copper && + ((hw->subsystem_id & 0xF0) != TXGBE_ID_SFI_XAUI)) { + if ((value & 0xc000) == 0xc000) { + *speed = TXGBE_LINK_SPEED_10GB_FULL; + } else if ((value & 0xc000) == 0x8000) { + *speed = TXGBE_LINK_SPEED_1GB_FULL; + } else if ((value & 0xc000) == 0x4000) { + *speed = TXGBE_LINK_SPEED_100_FULL; + } else if ((value & 0xc000) == 0x0000) { + *speed = TXGBE_LINK_SPEED_10_FULL; + } + } else { + if ((links_reg & TXGBE_CFG_PORT_ST_LINK_10G) == + TXGBE_CFG_PORT_ST_LINK_10G) { + *speed = TXGBE_LINK_SPEED_10GB_FULL; + } else if ((links_reg & TXGBE_CFG_PORT_ST_LINK_1G) == + TXGBE_CFG_PORT_ST_LINK_1G){ + *speed = TXGBE_LINK_SPEED_1GB_FULL; + } else if ((links_reg & TXGBE_CFG_PORT_ST_LINK_100M) == + TXGBE_CFG_PORT_ST_LINK_100M){ + *speed = TXGBE_LINK_SPEED_100_FULL; + } else + *speed = TXGBE_LINK_SPEED_10_FULL; + } + } else + *speed = TXGBE_LINK_SPEED_UNKNOWN; + + return 0; +} + +/** + * txgbe_setup_eee - Enable/disable EEE support + * @hw: pointer to the HW structure + * @enable_eee: boolean flag to enable EEE + * + * Enable/disable EEE based on enable_eee flag. + * Auto-negotiation must be started after BASE-T EEE bits in PHY register 7.3C + * are modified. + * + **/ +s32 txgbe_setup_eee(struct txgbe_hw *hw, bool enable_eee) +{ + /* fix eee */ + UNREFERENCED_PARAMETER(hw); + UNREFERENCED_PARAMETER(enable_eee); + DEBUGFUNC("\n"); + + return 0; +} diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe_hw.h b/drivers/net/ethernet/netswift/txgbe/txgbe_hw.h new file mode 100644 index 000000000000..97ce62a2cd26 --- /dev/null +++ b/drivers/net/ethernet/netswift/txgbe/txgbe_hw.h @@ -0,0 +1,264 @@ +/* + * WangXun 10 Gigabit PCI Express Linux driver + * Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd. 
+ * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * The full GNU General Public License is included in this distribution in + * the file called "COPYING". + */ + +#ifndef _TXGBE_HW_H_ +#define _TXGBE_HW_H_ + +#define TXGBE_EMC_INTERNAL_DATA 0x00 +#define TXGBE_EMC_INTERNAL_THERM_LIMIT 0x20 +#define TXGBE_EMC_DIODE1_DATA 0x01 +#define TXGBE_EMC_DIODE1_THERM_LIMIT 0x19 +#define TXGBE_EMC_DIODE2_DATA 0x23 +#define TXGBE_EMC_DIODE2_THERM_LIMIT 0x1A +#define TXGBE_EMC_DIODE3_DATA 0x2A +#define TXGBE_EMC_DIODE3_THERM_LIMIT 0x30 + +/** + * Packet Type decoding + **/ +/* txgbe_dec_ptype.mac: outer mac */ +enum txgbe_dec_ptype_mac { + TXGBE_DEC_PTYPE_MAC_IP = 0, + TXGBE_DEC_PTYPE_MAC_L2 = 2, + TXGBE_DEC_PTYPE_MAC_FCOE = 3, +}; + +/* txgbe_dec_ptype.[e]ip: outer&encaped ip */ +#define TXGBE_DEC_PTYPE_IP_FRAG (0x4) +enum txgbe_dec_ptype_ip { + TXGBE_DEC_PTYPE_IP_NONE = 0, + TXGBE_DEC_PTYPE_IP_IPV4 = 1, + TXGBE_DEC_PTYPE_IP_IPV6 = 2, + TXGBE_DEC_PTYPE_IP_FGV4 = + (TXGBE_DEC_PTYPE_IP_FRAG | TXGBE_DEC_PTYPE_IP_IPV4), + TXGBE_DEC_PTYPE_IP_FGV6 = + (TXGBE_DEC_PTYPE_IP_FRAG | TXGBE_DEC_PTYPE_IP_IPV6), +}; + +/* txgbe_dec_ptype.etype: encaped type */ +enum txgbe_dec_ptype_etype { + TXGBE_DEC_PTYPE_ETYPE_NONE = 0, + TXGBE_DEC_PTYPE_ETYPE_IPIP = 1, /* IP+IP */ + TXGBE_DEC_PTYPE_ETYPE_IG = 2, /* IP+GRE */ + TXGBE_DEC_PTYPE_ETYPE_IGM = 3, /* IP+GRE+MAC */ + TXGBE_DEC_PTYPE_ETYPE_IGMV = 4, /* IP+GRE+MAC+VLAN */ +}; + +/* txgbe_dec_ptype.proto: payload proto */ +enum txgbe_dec_ptype_prot { + TXGBE_DEC_PTYPE_PROT_NONE = 0, + TXGBE_DEC_PTYPE_PROT_UDP = 1, + TXGBE_DEC_PTYPE_PROT_TCP = 
2, + TXGBE_DEC_PTYPE_PROT_SCTP = 3, + TXGBE_DEC_PTYPE_PROT_ICMP = 4, + TXGBE_DEC_PTYPE_PROT_TS = 5, /* time sync */ +}; + +/* txgbe_dec_ptype.layer: payload layer */ +enum txgbe_dec_ptype_layer { + TXGBE_DEC_PTYPE_LAYER_NONE = 0, + TXGBE_DEC_PTYPE_LAYER_PAY2 = 1, + TXGBE_DEC_PTYPE_LAYER_PAY3 = 2, + TXGBE_DEC_PTYPE_LAYER_PAY4 = 3, +}; + +struct txgbe_dec_ptype { + u32 ptype:8; + u32 known:1; + u32 mac:2; /* outer mac */ + u32 ip:3; /* outer ip*/ + u32 etype:3; /* encaped type */ + u32 eip:3; /* encaped ip */ + u32 prot:4; /* payload proto */ + u32 layer:3; /* payload layer */ +}; +typedef struct txgbe_dec_ptype txgbe_dptype; + + +void txgbe_dcb_get_rtrup2tc(struct txgbe_hw *hw, u8 *map); +u16 txgbe_get_pcie_msix_count(struct txgbe_hw *hw); +s32 txgbe_init_hw(struct txgbe_hw *hw); +s32 txgbe_start_hw(struct txgbe_hw *hw); +s32 txgbe_clear_hw_cntrs(struct txgbe_hw *hw); +s32 txgbe_read_pba_string(struct txgbe_hw *hw, u8 *pba_num, + u32 pba_num_size); +s32 txgbe_get_mac_addr(struct txgbe_hw *hw, u8 *mac_addr); +s32 txgbe_get_bus_info(struct txgbe_hw *hw); +void txgbe_set_pci_config_data(struct txgbe_hw *hw, u16 link_status); +void txgbe_set_lan_id_multi_port_pcie(struct txgbe_hw *hw); +s32 txgbe_stop_adapter(struct txgbe_hw *hw); + +s32 txgbe_led_on(struct txgbe_hw *hw, u32 index); +s32 txgbe_led_off(struct txgbe_hw *hw, u32 index); + +s32 txgbe_set_rar(struct txgbe_hw *hw, u32 index, u8 *addr, u64 pools, + u32 enable_addr); +s32 txgbe_clear_rar(struct txgbe_hw *hw, u32 index); +s32 txgbe_init_rx_addrs(struct txgbe_hw *hw); +s32 txgbe_update_mc_addr_list(struct txgbe_hw *hw, u8 *mc_addr_list, + u32 mc_addr_count, + txgbe_mc_addr_itr func, bool clear); +s32 txgbe_update_uc_addr_list(struct txgbe_hw *hw, u8 *addr_list, + u32 addr_count, txgbe_mc_addr_itr func); +s32 txgbe_enable_mc(struct txgbe_hw *hw); +s32 txgbe_disable_mc(struct txgbe_hw *hw); +s32 txgbe_disable_sec_rx_path(struct txgbe_hw *hw); +s32 txgbe_enable_sec_rx_path(struct txgbe_hw *hw); + +s32 
txgbe_fc_enable(struct txgbe_hw *hw); +bool txgbe_device_supports_autoneg_fc(struct txgbe_hw *hw); +void txgbe_fc_autoneg(struct txgbe_hw *hw); +s32 txgbe_setup_fc(struct txgbe_hw *hw); + +s32 txgbe_validate_mac_addr(u8 *mac_addr); +s32 txgbe_acquire_swfw_sync(struct txgbe_hw *hw, u32 mask); +void txgbe_release_swfw_sync(struct txgbe_hw *hw, u32 mask); +s32 txgbe_disable_pcie_master(struct txgbe_hw *hw); + + +s32 txgbe_get_san_mac_addr(struct txgbe_hw *hw, u8 *san_mac_addr); +s32 txgbe_set_san_mac_addr(struct txgbe_hw *hw, u8 *san_mac_addr); + +s32 txgbe_set_vmdq(struct txgbe_hw *hw, u32 rar, u32 vmdq); +s32 txgbe_set_vmdq_san_mac(struct txgbe_hw *hw, u32 vmdq); +s32 txgbe_clear_vmdq(struct txgbe_hw *hw, u32 rar, u32 vmdq); +s32 txgbe_insert_mac_addr(struct txgbe_hw *hw, u8 *addr, u32 vmdq); +s32 txgbe_init_uta_tables(struct txgbe_hw *hw); +s32 txgbe_set_vfta(struct txgbe_hw *hw, u32 vlan, + u32 vind, bool vlan_on); +s32 txgbe_set_vlvf(struct txgbe_hw *hw, u32 vlan, u32 vind, + bool vlan_on, bool *vfta_changed); +s32 txgbe_clear_vfta(struct txgbe_hw *hw); +s32 txgbe_find_vlvf_slot(struct txgbe_hw *hw, u32 vlan); + +s32 txgbe_get_wwn_prefix(struct txgbe_hw *hw, u16 *wwnn_prefix, + u16 *wwpn_prefix); + +void txgbe_set_mac_anti_spoofing(struct txgbe_hw *hw, bool enable, int pf); +void txgbe_set_vlan_anti_spoofing(struct txgbe_hw *hw, bool enable, int vf); +void txgbe_set_ethertype_anti_spoofing(struct txgbe_hw *hw, + bool enable, int vf); +s32 txgbe_get_device_caps(struct txgbe_hw *hw, u16 *device_caps); +void txgbe_set_rxpba(struct txgbe_hw *hw, int num_pb, u32 headroom, + int strategy); +s32 txgbe_set_fw_drv_ver(struct txgbe_hw *hw, u8 maj, u8 min, + u8 build, u8 ver); +s32 txgbe_reset_hostif(struct txgbe_hw *hw); +u8 txgbe_calculate_checksum(u8 *buffer, u32 length); +s32 txgbe_host_interface_command(struct txgbe_hw *hw, u32 *buffer, + u32 length, u32 timeout, bool return_data); + +void txgbe_clear_tx_pending(struct txgbe_hw *hw); +void 
txgbe_stop_mac_link_on_d3(struct txgbe_hw *hw); +bool txgbe_mng_present(struct txgbe_hw *hw); +bool txgbe_check_mng_access(struct txgbe_hw *hw); + +s32 txgbe_get_thermal_sensor_data(struct txgbe_hw *hw); +s32 txgbe_init_thermal_sensor_thresh(struct txgbe_hw *hw); +void txgbe_enable_rx(struct txgbe_hw *hw); +void txgbe_disable_rx(struct txgbe_hw *hw); +s32 txgbe_setup_mac_link_multispeed_fiber(struct txgbe_hw *hw, + u32 speed, + bool autoneg_wait_to_complete); +int txgbe_check_flash_load(struct txgbe_hw *hw, u32 check_bit); + +/* @txgbe_api.h */ +s32 txgbe_reinit_fdir_tables(struct txgbe_hw *hw); +s32 txgbe_init_fdir_signature(struct txgbe_hw *hw, u32 fdirctrl); +s32 txgbe_init_fdir_perfect(struct txgbe_hw *hw, u32 fdirctrl, + bool cloud_mode); +s32 txgbe_fdir_add_signature_filter(struct txgbe_hw *hw, + union txgbe_atr_hash_dword input, + union txgbe_atr_hash_dword common, + u8 queue); +s32 txgbe_fdir_set_input_mask(struct txgbe_hw *hw, + union txgbe_atr_input *input_mask, bool cloud_mode); +s32 txgbe_fdir_write_perfect_filter(struct txgbe_hw *hw, + union txgbe_atr_input *input, + u16 soft_id, u8 queue, bool cloud_mode); +s32 txgbe_fdir_erase_perfect_filter(struct txgbe_hw *hw, + union txgbe_atr_input *input, + u16 soft_id); +s32 txgbe_fdir_add_perfect_filter(struct txgbe_hw *hw, + union txgbe_atr_input *input, + union txgbe_atr_input *mask, + u16 soft_id, + u8 queue, + bool cloud_mode); +void txgbe_atr_compute_perfect_hash(union txgbe_atr_input *input, + union txgbe_atr_input *mask); +u32 txgbe_atr_compute_sig_hash(union txgbe_atr_hash_dword input, + union txgbe_atr_hash_dword common); + +s32 txgbe_get_link_capabilities(struct txgbe_hw *hw, + u32 *speed, bool *autoneg); +enum txgbe_media_type txgbe_get_media_type(struct txgbe_hw *hw); +void txgbe_disable_tx_laser_multispeed_fiber(struct txgbe_hw *hw); +void txgbe_enable_tx_laser_multispeed_fiber(struct txgbe_hw *hw); +void txgbe_flap_tx_laser_multispeed_fiber(struct txgbe_hw *hw); +void 
txgbe_set_hard_rate_select_speed(struct txgbe_hw *hw, + u32 speed); +s32 txgbe_setup_mac_link(struct txgbe_hw *hw, u32 speed, + bool autoneg_wait_to_complete); +void txgbe_init_mac_link_ops(struct txgbe_hw *hw); +s32 txgbe_reset_hw(struct txgbe_hw *hw); +s32 txgbe_identify_phy(struct txgbe_hw *hw); +s32 txgbe_init_phy_ops(struct txgbe_hw *hw); +s32 txgbe_enable_rx_dma(struct txgbe_hw *hw, u32 regval); +s32 txgbe_init_ops(struct txgbe_hw *hw); +s32 txgbe_setup_eee(struct txgbe_hw *hw, bool enable_eee); + +s32 txgbe_init_flash_params(struct txgbe_hw *hw); +s32 txgbe_read_flash_buffer(struct txgbe_hw *hw, u32 offset, + u32 dwords, u32 *data); +s32 txgbe_write_flash_buffer(struct txgbe_hw *hw, u32 offset, + u32 dwords, u32 *data); + +s32 txgbe_read_eeprom(struct txgbe_hw *hw, + u16 offset, u16 *data); +s32 txgbe_read_eeprom_buffer(struct txgbe_hw *hw, u16 offset, + u16 words, u16 *data); +s32 txgbe_init_eeprom_params(struct txgbe_hw *hw); +s32 txgbe_update_eeprom_checksum(struct txgbe_hw *hw); +s32 txgbe_calc_eeprom_checksum(struct txgbe_hw *hw); +s32 txgbe_validate_eeprom_checksum(struct txgbe_hw *hw, + u16 *checksum_val); +s32 txgbe_update_flash(struct txgbe_hw *hw); +s32 txgbe_write_ee_hostif_buffer(struct txgbe_hw *hw, + u16 offset, u16 words, u16 *data); +s32 txgbe_write_ee_hostif(struct txgbe_hw *hw, u16 offset, + u16 data); +s32 txgbe_read_ee_hostif_buffer(struct txgbe_hw *hw, + u16 offset, u16 words, u16 *data); +s32 txgbe_read_ee_hostif(struct txgbe_hw *hw, u16 offset, u16 *data); +u32 txgbe_rd32_epcs(struct txgbe_hw *hw, u32 addr); +void txgbe_wr32_epcs(struct txgbe_hw *hw, u32 addr, u32 data); +void txgbe_wr32_ephy(struct txgbe_hw *hw, u32 addr, u32 data); +u32 rd32_ephy(struct txgbe_hw *hw, u32 addr); + +s32 txgbe_upgrade_flash_hostif(struct txgbe_hw *hw, u32 region, + const u8 *data, u32 size); + +s32 txgbe_set_link_to_kr(struct txgbe_hw *hw, bool autoneg); +s32 txgbe_set_link_to_kx4(struct txgbe_hw *hw, bool autoneg); + +s32 txgbe_set_link_to_kx(struct 
txgbe_hw *hw, + u32 speed, + bool autoneg); + + +#endif /* _TXGBE_HW_H_ */ diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe_lib.c b/drivers/net/ethernet/netswift/txgbe/txgbe_lib.c new file mode 100644 index 000000000000..bb402e45557e --- /dev/null +++ b/drivers/net/ethernet/netswift/txgbe/txgbe_lib.c @@ -0,0 +1,959 @@ +/* + * WangXun 10 Gigabit PCI Express Linux driver + * Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * The full GNU General Public License is included in this distribution in + * the file called "COPYING". + * + * based on ixgbe_lib.c, Copyright(c) 1999 - 2017 Intel Corporation. + * Contact Information: + * Linux NICS linux.nics@intel.com + * e1000-devel Mailing List e1000-devel@lists.sourceforge.net + * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497 + */ + + +#include "txgbe.h" + +/** + * txgbe_cache_ring_dcb_vmdq - Descriptor ring to register mapping for VMDq + * @adapter: board private structure to initialize + * + * Cache the descriptor ring offsets for VMDq to the assigned rings. It + * will also try to cache the proper offsets if RSS/FCoE are enabled along + * with VMDq. 
+ * + **/ +static bool txgbe_cache_ring_dcb_vmdq(struct txgbe_adapter *adapter) +{ + struct txgbe_ring_feature *vmdq = &adapter->ring_feature[RING_F_VMDQ]; + int i; + u16 reg_idx; + u8 tcs = netdev_get_num_tc(adapter->netdev); + + /* verify we have DCB enabled before proceeding */ + if (tcs <= 1) + return false; + + /* verify we have VMDq enabled before proceeding */ + if (!(adapter->flags & TXGBE_FLAG_VMDQ_ENABLED)) + return false; + + /* start at VMDq register offset for SR-IOV enabled setups */ + reg_idx = vmdq->offset * __ALIGN_MASK(1, ~vmdq->mask); + for (i = 0; i < adapter->num_rx_queues; i++, reg_idx++) { + /* If we are greater than indices move to next pool */ + if ((reg_idx & ~vmdq->mask) >= tcs) + reg_idx = __ALIGN_MASK(reg_idx, ~vmdq->mask); + adapter->rx_ring[i]->reg_idx = reg_idx; + } + + reg_idx = vmdq->offset * __ALIGN_MASK(1, ~vmdq->mask); + for (i = 0; i < adapter->num_tx_queues; i++, reg_idx++) { + /* If we are greater than indices move to next pool */ + if ((reg_idx & ~vmdq->mask) >= tcs) + reg_idx = __ALIGN_MASK(reg_idx, ~vmdq->mask); + adapter->tx_ring[i]->reg_idx = reg_idx; + } + + return true; +} + +/* txgbe_get_first_reg_idx - Return first register index associated with ring */ +static void txgbe_get_first_reg_idx(struct txgbe_adapter *adapter, u8 tc, + u16 *tx, u16 *rx) +{ + struct net_device *dev = adapter->netdev; + u8 num_tcs = netdev_get_num_tc(dev); + + *tx = 0; + *rx = 0; + + + if (num_tcs > 4) { + /* + * TCs : TC0/1 TC2/3 TC4-7 + * TxQs/TC: 32 16 8 + * RxQs/TC: 16 16 16 + */ + *rx = tc << 4; + if (tc < 3) + *tx = tc << 5; /* 0, 32, 64 */ + else if (tc < 5) + *tx = (tc + 2) << 4; /* 80, 96 */ + else + *tx = (tc + 8) << 3; /* 104, 112, 120 */ + } else { + /* + * TCs : TC0 TC1 TC2/3 + * TxQs/TC: 64 32 16 + * RxQs/TC: 32 32 32 + */ + *rx = tc << 5; + if (tc < 2) + *tx = tc << 6; /* 0, 64 */ + else + *tx = (tc + 4) << 4; /* 96, 112 */ + } + +} + +/** + * txgbe_cache_ring_dcb - Descriptor ring to register mapping for DCB + * @adapter: 
board private structure to initialize + * + * Cache the descriptor ring offsets for DCB to the assigned rings. + * + **/ +static bool txgbe_cache_ring_dcb(struct txgbe_adapter *adapter) +{ + int tc, offset, rss_i, i; + u16 tx_idx, rx_idx; + struct net_device *dev = adapter->netdev; + u8 num_tcs = netdev_get_num_tc(dev); + + if (num_tcs <= 1) + return false; + + rss_i = adapter->ring_feature[RING_F_RSS].indices; + + for (tc = 0, offset = 0; tc < num_tcs; tc++, offset += rss_i) { + txgbe_get_first_reg_idx(adapter, (u8)tc, &tx_idx, &rx_idx); + for (i = 0; i < rss_i; i++, tx_idx++, rx_idx++) { + adapter->tx_ring[offset + i]->reg_idx = tx_idx; + adapter->rx_ring[offset + i]->reg_idx = rx_idx; + adapter->tx_ring[offset + i]->dcb_tc = (u8)tc; + adapter->rx_ring[offset + i]->dcb_tc = (u8)tc; + } + } + + return true; +} + +/** + * txgbe_cache_ring_vmdq - Descriptor ring to register mapping for VMDq + * @adapter: board private structure to initialize + * + * Cache the descriptor ring offsets for VMDq to the assigned rings. It + * will also try to cache the proper offsets if RSS/FCoE/SRIOV are enabled along + * with VMDq. 
+ * + **/ +static bool txgbe_cache_ring_vmdq(struct txgbe_adapter *adapter) +{ + struct txgbe_ring_feature *vmdq = &adapter->ring_feature[RING_F_VMDQ]; + struct txgbe_ring_feature *rss = &adapter->ring_feature[RING_F_RSS]; + int i; + u16 reg_idx; + + /* only proceed if VMDq is enabled */ + if (!(adapter->flags & TXGBE_FLAG_VMDQ_ENABLED)) + return false; + + /* start at VMDq register offset for SR-IOV enabled setups */ + reg_idx = vmdq->offset * __ALIGN_MASK(1, ~vmdq->mask); + for (i = 0; i < adapter->num_rx_queues; i++, reg_idx++) { + /* If we are greater than indices move to next pool */ + if ((reg_idx & ~vmdq->mask) >= rss->indices) + reg_idx = __ALIGN_MASK(reg_idx, ~vmdq->mask); + adapter->rx_ring[i]->reg_idx = reg_idx; + } + + reg_idx = vmdq->offset * __ALIGN_MASK(1, ~vmdq->mask); + for (i = 0; i < adapter->num_tx_queues; i++, reg_idx++) { + /* If we are greater than indices move to next pool */ + if ((reg_idx & rss->mask) >= rss->indices) + reg_idx = __ALIGN_MASK(reg_idx, ~vmdq->mask); + adapter->tx_ring[i]->reg_idx = reg_idx; + } + + return true; +} + +/** + * txgbe_cache_ring_rss - Descriptor ring to register mapping for RSS + * @adapter: board private structure to initialize + * + * Cache the descriptor ring offsets for RSS, ATR, FCoE, and SR-IOV. + * + **/ +static bool txgbe_cache_ring_rss(struct txgbe_adapter *adapter) +{ + u16 i; + + for (i = 0; i < adapter->num_rx_queues; i++) + adapter->rx_ring[i]->reg_idx = i; + + for (i = 0; i < adapter->num_tx_queues; i++) + adapter->tx_ring[i]->reg_idx = i; + + return true; +} + +/** + * txgbe_cache_ring_register - Descriptor ring to register mapping + * @adapter: board private structure to initialize + * + * Once we know the feature-set enabled for the device, we'll cache + * the register offset the descriptor ring is assigned to. + * + * Note, the order the various feature calls is important. 
It must start with + * the "most" features enabled at the same time, then trickle down to the + * least amount of features turned on at once. + **/ +static void txgbe_cache_ring_register(struct txgbe_adapter *adapter) +{ + if (txgbe_cache_ring_dcb_vmdq(adapter)) + return; + + if (txgbe_cache_ring_dcb(adapter)) + return; + + if (txgbe_cache_ring_vmdq(adapter)) + return; + + txgbe_cache_ring_rss(adapter); +} + +#define TXGBE_RSS_64Q_MASK 0x3F +#define TXGBE_RSS_16Q_MASK 0xF +#define TXGBE_RSS_8Q_MASK 0x7 +#define TXGBE_RSS_4Q_MASK 0x3 +#define TXGBE_RSS_2Q_MASK 0x1 +#define TXGBE_RSS_DISABLED_MASK 0x0 + +/** + * txgbe_set_dcb_vmdq_queues: Allocate queues for VMDq devices w/ DCB + * @adapter: board private structure to initialize + * + * When VMDq (Virtual Machine Devices queue) is enabled, allocate queues + * and VM pools where appropriate. Also assign queues based on DCB + * priorities and map accordingly. + * + **/ +static bool txgbe_set_dcb_vmdq_queues(struct txgbe_adapter *adapter) +{ + u16 i; + u16 vmdq_i = adapter->ring_feature[RING_F_VMDQ].limit; + u16 vmdq_m = 0; + u8 tcs = netdev_get_num_tc(adapter->netdev); + + /* verify we have DCB enabled before proceeding */ + if (tcs <= 1) + return false; + + /* verify we have VMDq enabled before proceeding */ + if (!(adapter->flags & TXGBE_FLAG_VMDQ_ENABLED)) + return false; + + /* Add starting offset to total pool count */ + vmdq_i += adapter->ring_feature[RING_F_VMDQ].offset; + + /* 16 pools w/ 8 TC per pool */ + if (tcs > 4) { + vmdq_i = min_t(u16, vmdq_i, 16); + vmdq_m = TXGBE_VMDQ_8Q_MASK; + /* 32 pools w/ 4 TC per pool */ + } else { + vmdq_i = min_t(u16, vmdq_i, 32); + vmdq_m = TXGBE_VMDQ_4Q_MASK; + } + + /* remove the starting offset from the pool count */ + vmdq_i -= adapter->ring_feature[RING_F_VMDQ].offset; + + /* save features for later use */ + adapter->ring_feature[RING_F_VMDQ].indices = vmdq_i; + adapter->ring_feature[RING_F_VMDQ].mask = vmdq_m; + + /* + * We do not support DCB, VMDq, and RSS all
simultaneously + * so we will disable RSS since it is the lowest priority + */ + adapter->ring_feature[RING_F_RSS].indices = 1; + adapter->ring_feature[RING_F_RSS].mask = TXGBE_RSS_DISABLED_MASK; + + adapter->queues_per_pool = tcs; + + adapter->num_tx_queues = vmdq_i * tcs; + adapter->num_rx_queues = vmdq_i * tcs; + + /* disable ATR as it is not supported when VMDq is enabled */ + adapter->flags &= ~TXGBE_FLAG_FDIR_HASH_CAPABLE; + + /* configure TC to queue mapping */ + for (i = 0; i < tcs; i++) + netdev_set_tc_queue(adapter->netdev, (u8)i, 1, i); + + return true; +} + +/** + * txgbe_set_dcb_queues: Allocate queues for a DCB-enabled device + * @adapter: board private structure to initialize + * + * When DCB (Data Center Bridging) is enabled, allocate queues for + * each traffic class. If multiqueue isn't available, then abort DCB + * initialization. + * + * This function handles all combinations of DCB, RSS, and FCoE. + * + **/ +static bool txgbe_set_dcb_queues(struct txgbe_adapter *adapter) +{ + struct net_device *dev = adapter->netdev; + struct txgbe_ring_feature *f; + u16 rss_i, rss_m, i; + u16 tcs; + + /* Map queue offset and counts onto allocated tx queues */ + tcs = netdev_get_num_tc(dev); + + if (tcs <= 1) + return false; + + /* determine the upper limit for our current DCB mode */ + rss_i = dev->num_tx_queues / tcs; + + if (tcs > 4) { + /* 8 TC w/ 8 queues per TC */ + rss_i = min_t(u16, rss_i, 8); + rss_m = TXGBE_RSS_8Q_MASK; + } else { + /* 4 TC w/ 16 queues per TC */ + rss_i = min_t(u16, rss_i, 16); + rss_m = TXGBE_RSS_16Q_MASK; + } + + /* set RSS mask and indices */ + f = &adapter->ring_feature[RING_F_RSS]; + rss_i = min_t(u16, rss_i, f->limit); + f->indices = rss_i; + f->mask = rss_m; + + /* disable ATR as it is not supported when DCB is enabled */ + adapter->flags &= ~TXGBE_FLAG_FDIR_HASH_CAPABLE; + + for (i = 0; i < tcs; i++) + netdev_set_tc_queue(dev, (u8)i, rss_i, rss_i * i); + + adapter->num_tx_queues = rss_i * tcs; + adapter->num_rx_queues = rss_i
* tcs; + + return true; +} + +/** + * txgbe_set_vmdq_queues: Allocate queues for VMDq devices + * @adapter: board private structure to initialize + * + * When VMDq (Virtual Machine Devices queue) is enabled, allocate queues + * and VM pools where appropriate. If RSS is available, then also try and + * enable RSS and map accordingly. + * + **/ +static bool txgbe_set_vmdq_queues(struct txgbe_adapter *adapter) +{ + u16 vmdq_i = adapter->ring_feature[RING_F_VMDQ].limit; + u16 vmdq_m = 0; + u16 rss_i = adapter->ring_feature[RING_F_RSS].limit; + u16 rss_m = TXGBE_RSS_DISABLED_MASK; + + /* only proceed if VMDq is enabled */ + if (!(adapter->flags & TXGBE_FLAG_VMDQ_ENABLED)) + return false; + /* Add starting offset to total pool count */ + vmdq_i += adapter->ring_feature[RING_F_VMDQ].offset; + + /* double check we are limited to maximum pools */ + vmdq_i = min_t(u16, TXGBE_MAX_VMDQ_INDICES, vmdq_i); + + /* 64 pool mode with 2 queues per pool, or + * 16/32/64 pool mode with 1 queue per pool */ + if ((vmdq_i > 32) || (rss_i < 4)) { + vmdq_m = TXGBE_VMDQ_2Q_MASK; + rss_m = TXGBE_RSS_2Q_MASK; + rss_i = min_t(u16, rss_i, 2); + /* 32 pool mode with 4 queues per pool */ + } else { + vmdq_m = TXGBE_VMDQ_4Q_MASK; + rss_m = TXGBE_RSS_4Q_MASK; + rss_i = 4; + } + + /* remove the starting offset from the pool count */ + vmdq_i -= adapter->ring_feature[RING_F_VMDQ].offset; + + /* save features for later use */ + adapter->ring_feature[RING_F_VMDQ].indices = vmdq_i; + adapter->ring_feature[RING_F_VMDQ].mask = vmdq_m; + + /* limit RSS based on user input and save for later use */ + adapter->ring_feature[RING_F_RSS].indices = rss_i; + adapter->ring_feature[RING_F_RSS].mask = rss_m; + + adapter->queues_per_pool = rss_i; + + adapter->num_rx_queues = vmdq_i * rss_i; + adapter->num_tx_queues = vmdq_i * rss_i; + + /* disable ATR as it is not supported when VMDq is enabled */ + adapter->flags &= ~TXGBE_FLAG_FDIR_HASH_CAPABLE; + + return true; +} + +/** + * txgbe_set_rss_queues: Allocate queues 
for RSS + * @adapter: board private structure to initialize + * + * This is our "base" multiqueue mode. RSS (Receive Side Scaling) will try + * to allocate one Rx queue per CPU, and if available, one Tx queue per CPU. + * + **/ +static bool txgbe_set_rss_queues(struct txgbe_adapter *adapter) +{ + struct txgbe_ring_feature *f; + u16 rss_i; + + /* set mask for 64 queue limit of RSS */ + f = &adapter->ring_feature[RING_F_RSS]; + rss_i = f->limit; + + f->indices = rss_i; + f->mask = TXGBE_RSS_64Q_MASK; + + /* disable ATR by default, it will be configured below */ + adapter->flags &= ~TXGBE_FLAG_FDIR_HASH_CAPABLE; + + /* + * Use Flow Director in addition to RSS to ensure the best + * distribution of flows across cores, even when an FDIR flow + * isn't matched. + */ + if (rss_i > 1 && adapter->atr_sample_rate) { + f = &adapter->ring_feature[RING_F_FDIR]; + + rss_i = f->indices = f->limit; + + if (!(adapter->flags & TXGBE_FLAG_FDIR_PERFECT_CAPABLE)) + adapter->flags |= TXGBE_FLAG_FDIR_HASH_CAPABLE; + } + + adapter->num_rx_queues = rss_i; + adapter->num_tx_queues = rss_i; + + return true; +} + +/* + * txgbe_set_num_queues: Allocate queues for device, feature dependent + * @adapter: board private structure to initialize + * + * This is the top level queue allocation routine. The order here is very + * important, starting with the "most" number of features turned on at once, + * and ending with the smallest set of features. This way large combinations + * can be allocated if they're turned on, and smaller combinations are the + * fallthrough conditions.
+ * + **/ +static void txgbe_set_num_queues(struct txgbe_adapter *adapter) +{ + /* Start with base case */ + adapter->num_rx_queues = 1; + adapter->num_tx_queues = 1; + adapter->queues_per_pool = 1; + + if (txgbe_set_dcb_vmdq_queues(adapter)) + return; + + if (txgbe_set_dcb_queues(adapter)) + return; + + if (txgbe_set_vmdq_queues(adapter)) + return; + + txgbe_set_rss_queues(adapter); +} + +/** + * txgbe_acquire_msix_vectors - acquire MSI-X vectors + * @adapter: board private structure + * + * Attempts to acquire a suitable range of MSI-X vector interrupts. Will + * return a negative error code if unable to acquire MSI-X vectors for any + * reason. + */ +static int txgbe_acquire_msix_vectors(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + int i, vectors, vector_threshold; + + if (!(adapter->flags & TXGBE_FLAG_MSIX_CAPABLE)) + return -EOPNOTSUPP; + + /* We start by asking for one vector per queue pair */ + vectors = max(adapter->num_rx_queues, adapter->num_tx_queues); + + /* It is easy to be greedy for MSI-X vectors. However, it really + * doesn't do much good if we have a lot more vectors than CPUs. We'll + * be somewhat conservative and only ask for (roughly) the same number + * of vectors as there are CPUs. + */ + vectors = min_t(int, vectors, num_online_cpus()); + + /* Some vectors are necessary for non-queue interrupts */ + vectors += NON_Q_VECTORS; + + /* Hardware can only support a maximum of hw.mac->max_msix_vectors. + * With features such as RSS and VMDq, we can easily surpass the + * number of Rx and Tx descriptor queues supported by our device. + * Thus, we cap the maximum in the rare cases where the CPU count also + * exceeds our vector limit + */ + vectors = min_t(int, vectors, hw->mac.max_msix_vectors); + + /* We want a minimum of two MSI-X vectors for (1) a TxQ[0] + RxQ[0] + * handler, and (2) an Other (Link Status Change, etc.) handler. 
+ */ + vector_threshold = MIN_MSIX_COUNT; + + adapter->msix_entries = kcalloc(vectors, + sizeof(struct msix_entry), + GFP_KERNEL); + if (!adapter->msix_entries) + return -ENOMEM; + + for (i = 0; i < vectors; i++) + adapter->msix_entries[i].entry = i; + + vectors = pci_enable_msix_range(adapter->pdev, adapter->msix_entries, + vector_threshold, vectors); + if (vectors < 0) { + /* A negative count of allocated vectors indicates an error in + * acquiring within the specified range of MSI-X vectors */ + e_dev_warn("Failed to allocate MSI-X interrupts. Err: %d\n", + vectors); + + adapter->flags &= ~TXGBE_FLAG_MSIX_ENABLED; + kfree(adapter->msix_entries); + adapter->msix_entries = NULL; + + return vectors; + } + + /* we successfully allocated some number of vectors within our + * requested range. + */ + adapter->flags |= TXGBE_FLAG_MSIX_ENABLED; + + /* Adjust for only the vectors we'll use, which is minimum + * of max_q_vectors, or the number of vectors we were allocated. + */ + vectors -= NON_Q_VECTORS; + adapter->num_q_vectors = min_t(int, vectors, adapter->max_q_vectors); + + return 0; +} + +static void txgbe_add_ring(struct txgbe_ring *ring, + struct txgbe_ring_container *head) +{ + ring->next = head->ring; + head->ring = ring; + head->count++; +} + +/** + * txgbe_alloc_q_vector - Allocate memory for a single interrupt vector + * @adapter: board private structure to initialize + * @v_count: q_vectors allocated on adapter, used for ring interleaving + * @v_idx: index of vector in adapter struct + * @txr_count: total number of Tx rings to allocate + * @txr_idx: index of first Tx ring to allocate + * @rxr_count: total number of Rx rings to allocate + * @rxr_idx: index of first Rx ring to allocate + * + * We allocate one q_vector. If allocation fails we return -ENOMEM. 
+ **/ +static int txgbe_alloc_q_vector(struct txgbe_adapter *adapter, + unsigned int v_count, unsigned int v_idx, + unsigned int txr_count, unsigned int txr_idx, + unsigned int rxr_count, unsigned int rxr_idx) +{ + struct txgbe_q_vector *q_vector; + struct txgbe_ring *ring; + int node = -1; + int cpu = -1; + u8 tcs = netdev_get_num_tc(adapter->netdev); + int ring_count, size; + + /* note this will allocate space for the ring structure as well! */ + ring_count = txr_count + rxr_count; + size = sizeof(struct txgbe_q_vector) + + (sizeof(struct txgbe_ring) * ring_count); + + /* customize cpu for Flow Director mapping */ + if ((tcs <= 1) && !(adapter->flags & TXGBE_FLAG_VMDQ_ENABLED)) { + u16 rss_i = adapter->ring_feature[RING_F_RSS].indices; + if (rss_i > 1 && adapter->atr_sample_rate) { + if (cpu_online(v_idx)) { + cpu = v_idx; + node = cpu_to_node(cpu); + } + } + } + + /* allocate q_vector and rings */ + q_vector = kzalloc_node(size, GFP_KERNEL, node); + if (!q_vector) + q_vector = kzalloc(size, GFP_KERNEL); + if (!q_vector) + return -ENOMEM; + + /* setup affinity mask and node */ + if (cpu != -1) + cpumask_set_cpu(cpu, &q_vector->affinity_mask); + q_vector->numa_node = node; + + /* initialize CPU for DCA */ + q_vector->cpu = -1; + + /* initialize NAPI */ + netif_napi_add(adapter->netdev, &q_vector->napi, + txgbe_poll, 64); + + /* tie q_vector and adapter together */ + adapter->q_vector[v_idx] = q_vector; + q_vector->adapter = adapter; + q_vector->v_idx = v_idx; + + /* initialize work limits */ + q_vector->tx.work_limit = adapter->tx_work_limit; + q_vector->rx.work_limit = adapter->rx_work_limit; + + /* initialize pointer to rings */ + ring = q_vector->ring; + + /* initialize ITR */ + if (txr_count && !rxr_count) { + /* tx only vector */ + if (adapter->tx_itr_setting == 1) + q_vector->itr = TXGBE_12K_ITR; + else + q_vector->itr = adapter->tx_itr_setting; + } else { + /* rx or rx/tx vector */ + if (adapter->rx_itr_setting == 1) + q_vector->itr = TXGBE_20K_ITR; + else + 
q_vector->itr = adapter->rx_itr_setting; + } + + while (txr_count) { + /* assign generic ring traits */ + ring->dev = pci_dev_to_dev(adapter->pdev); + ring->netdev = adapter->netdev; + + /* configure backlink on ring */ + ring->q_vector = q_vector; + + /* update q_vector Tx values */ + txgbe_add_ring(ring, &q_vector->tx); + + /* apply Tx specific ring traits */ + ring->count = adapter->tx_ring_count; + if (adapter->num_vmdqs > 1) + ring->queue_index = + txr_idx % adapter->queues_per_pool; + else + ring->queue_index = txr_idx; + + /* assign ring to adapter */ + adapter->tx_ring[txr_idx] = ring; + + /* update count and index */ + txr_count--; + txr_idx += v_count; + + /* push pointer to next ring */ + ring++; + } + + while (rxr_count) { + /* assign generic ring traits */ + ring->dev = pci_dev_to_dev(adapter->pdev); + ring->netdev = adapter->netdev; + + /* configure backlink on ring */ + ring->q_vector = q_vector; + + /* update q_vector Rx values */ + txgbe_add_ring(ring, &q_vector->rx); + + /* apply Rx specific ring traits */ + ring->count = adapter->rx_ring_count; + if (adapter->num_vmdqs > 1) + ring->queue_index = + rxr_idx % adapter->queues_per_pool; + else + ring->queue_index = rxr_idx; + + /* assign ring to adapter */ + adapter->rx_ring[rxr_idx] = ring; + + /* update count and index */ + rxr_count--; + rxr_idx += v_count; + + /* push pointer to next ring */ + ring++; + } + + return 0; +} + +/** + * txgbe_free_q_vector - Free memory allocated for specific interrupt vector + * @adapter: board private structure to initialize + * @v_idx: Index of vector to be freed + * + * This function frees the memory allocated to the q_vector. In addition if + * NAPI is enabled it will delete any references to the NAPI struct prior + * to freeing the q_vector. 
+ **/ +static void txgbe_free_q_vector(struct txgbe_adapter *adapter, int v_idx) +{ + struct txgbe_q_vector *q_vector = adapter->q_vector[v_idx]; + struct txgbe_ring *ring; + + txgbe_for_each_ring(ring, q_vector->tx) + adapter->tx_ring[ring->queue_index] = NULL; + + txgbe_for_each_ring(ring, q_vector->rx) + adapter->rx_ring[ring->queue_index] = NULL; + + adapter->q_vector[v_idx] = NULL; + netif_napi_del(&q_vector->napi); + kfree_rcu(q_vector, rcu); +} + +/** + * txgbe_alloc_q_vectors - Allocate memory for interrupt vectors + * @adapter: board private structure to initialize + * + * We allocate one q_vector per queue interrupt. If allocation fails we + * return -ENOMEM. + **/ +static int txgbe_alloc_q_vectors(struct txgbe_adapter *adapter) +{ + unsigned int q_vectors = adapter->num_q_vectors; + unsigned int rxr_remaining = adapter->num_rx_queues; + unsigned int txr_remaining = adapter->num_tx_queues; + unsigned int rxr_idx = 0, txr_idx = 0, v_idx = 0; + int err; + + if (q_vectors >= (rxr_remaining + txr_remaining)) { + for (; rxr_remaining; v_idx++) { + err = txgbe_alloc_q_vector(adapter, q_vectors, v_idx, + 0, 0, 1, rxr_idx); + if (err) + goto err_out; + + /* update counts and index */ + rxr_remaining--; + rxr_idx++; + } + } + + for (; v_idx < q_vectors; v_idx++) { + int rqpv = DIV_ROUND_UP(rxr_remaining, q_vectors - v_idx); + int tqpv = DIV_ROUND_UP(txr_remaining, q_vectors - v_idx); + err = txgbe_alloc_q_vector(adapter, q_vectors, v_idx, + tqpv, txr_idx, + rqpv, rxr_idx); + + if (err) + goto err_out; + + /* update counts and index */ + rxr_remaining -= rqpv; + txr_remaining -= tqpv; + rxr_idx++; + txr_idx++; + } + + return 0; + +err_out: + adapter->num_tx_queues = 0; + adapter->num_rx_queues = 0; + adapter->num_q_vectors = 0; + + while (v_idx--) + txgbe_free_q_vector(adapter, v_idx); + + return -ENOMEM; +} + +/** + * txgbe_free_q_vectors - Free memory allocated for interrupt vectors + * @adapter: board private structure to initialize + * + * This function frees 
the memory allocated to the q_vectors. In addition if + * NAPI is enabled it will delete any references to the NAPI struct prior + * to freeing the q_vector. + **/ +static void txgbe_free_q_vectors(struct txgbe_adapter *adapter) +{ + int v_idx = adapter->num_q_vectors; + + adapter->num_tx_queues = 0; + adapter->num_rx_queues = 0; + adapter->num_q_vectors = 0; + + while (v_idx--) + txgbe_free_q_vector(adapter, v_idx); +} + +void txgbe_reset_interrupt_capability(struct txgbe_adapter *adapter) +{ + if (adapter->flags & TXGBE_FLAG_MSIX_ENABLED) { + adapter->flags &= ~TXGBE_FLAG_MSIX_ENABLED; + pci_disable_msix(adapter->pdev); + kfree(adapter->msix_entries); + adapter->msix_entries = NULL; + } else if (adapter->flags & TXGBE_FLAG_MSI_ENABLED) { + adapter->flags &= ~TXGBE_FLAG_MSI_ENABLED; + pci_disable_msi(adapter->pdev); + } +} + +/** + * txgbe_set_interrupt_capability - set MSI-X or MSI if supported + * @adapter: board private structure to initialize + * + * Attempt to configure the interrupts using the best available + * capabilities of the hardware and the kernel. + **/ +void txgbe_set_interrupt_capability(struct txgbe_adapter *adapter) +{ + int err; + + /* We will try to get MSI-X interrupts first */ + if (!txgbe_acquire_msix_vectors(adapter)) + return; + + /* At this point, we do not have MSI-X capabilities. We need to + * reconfigure or disable various features which require MSI-X + * capability. + */ + + /* Disable DCB unless we only have a single traffic class */ + if (netdev_get_num_tc(adapter->netdev) > 1) { + e_dev_warn("Number of DCB TCs exceeds number of available " + "queues. 
Disabling DCB support.\n"); + netdev_reset_tc(adapter->netdev); + } + + /* Disable VMDq support */ + e_dev_warn("Disabling VMDq support\n"); + adapter->flags &= ~TXGBE_FLAG_VMDQ_ENABLED; + + /* Disable RSS */ + e_dev_warn("Disabling RSS support\n"); + adapter->ring_feature[RING_F_RSS].limit = 1; + + /* recalculate number of queues now that many features have been + * changed or disabled. + */ + txgbe_set_num_queues(adapter); + adapter->num_q_vectors = 1; + + if (!(adapter->flags & TXGBE_FLAG_MSI_CAPABLE)) + return; + + err = pci_enable_msi(adapter->pdev); + if (err) + e_dev_warn("Failed to allocate MSI interrupt, falling back to " + "legacy. Error: %d\n", + err); + else + adapter->flags |= TXGBE_FLAG_MSI_ENABLED; +} + +/** + * txgbe_init_interrupt_scheme - Determine proper interrupt scheme + * @adapter: board private structure to initialize + * + * We determine which interrupt scheme to use based on... + * - Kernel support (MSI, MSI-X) + * - which can be user-defined (via MODULE_PARAM) + * - Hardware queue count (num_*_queues) + * - defined by miscellaneous hardware support/features (RSS, etc.) 
+ **/ +int txgbe_init_interrupt_scheme(struct txgbe_adapter *adapter) +{ + int err; + + /* Number of supported queues */ + txgbe_set_num_queues(adapter); + + /* Set interrupt mode */ + txgbe_set_interrupt_capability(adapter); + + /* Allocate memory for queues */ + err = txgbe_alloc_q_vectors(adapter); + if (err) { + e_err(probe, "Unable to allocate memory for queue vectors\n"); + txgbe_reset_interrupt_capability(adapter); + return err; + } + + txgbe_cache_ring_register(adapter); + + set_bit(__TXGBE_DOWN, &adapter->state); + + return 0; +} + +/** + * txgbe_clear_interrupt_scheme - Clear the current interrupt scheme settings + * @adapter: board private structure to clear interrupt scheme on + * + * We go through and clear interrupt specific resources and reset the structure + * to pre-load conditions + **/ +void txgbe_clear_interrupt_scheme(struct txgbe_adapter *adapter) +{ + txgbe_free_q_vectors(adapter); + txgbe_reset_interrupt_capability(adapter); +} + +void txgbe_tx_ctxtdesc(struct txgbe_ring *tx_ring, u32 vlan_macip_lens, + u32 fcoe_sof_eof, u32 type_tucmd, u32 mss_l4len_idx) +{ + struct txgbe_tx_context_desc *context_desc; + u16 i = tx_ring->next_to_use; + + context_desc = TXGBE_TX_CTXTDESC(tx_ring, i); + + i++; + tx_ring->next_to_use = (i < tx_ring->count) ? 
i : 0; + + /* set bits to identify this as an advanced context descriptor */ + type_tucmd |= TXGBE_TXD_DTYP_CTXT; + context_desc->vlan_macip_lens = cpu_to_le32(vlan_macip_lens); + context_desc->seqnum_seed = cpu_to_le32(fcoe_sof_eof); + context_desc->type_tucmd_mlhl = cpu_to_le32(type_tucmd); + context_desc->mss_l4len_idx = cpu_to_le32(mss_l4len_idx); +} diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe_main.c b/drivers/net/ethernet/netswift/txgbe/txgbe_main.c new file mode 100644 index 000000000000..a4d8cc260134 --- /dev/null +++ b/drivers/net/ethernet/netswift/txgbe/txgbe_main.c @@ -0,0 +1,8045 @@ +/* + * WangXun 10 Gigabit PCI Express Linux driver + * Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * The full GNU General Public License is included in this distribution in + * the file called "COPYING". + * + * based on ixgbe_main.c, Copyright(c) 1999 - 2017 Intel Corporation. + * Contact Information: + * Linux NICS linux.nics@intel.com + * e1000-devel Mailing List e1000-devel@lists.sourceforge.net + * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497 + * + * Copyright (c)2006 - 2007 Myricom, Inc. 
for some LRO specific code + */ + +#include <linux/types.h> +#include <linux/module.h> +#include <linux/pci.h> +#include <linux/netdevice.h> +#include <linux/vmalloc.h> +#include <linux/highmem.h> +#include <linux/string.h> +#include <linux/in.h> +#include <linux/ip.h> +#include <linux/tcp.h> +#include <linux/pkt_sched.h> +#include <linux/ipv6.h> +#include <net/checksum.h> +#include <net/ip6_checksum.h> +#include <linux/if_macvlan.h> +#include <linux/ethtool.h> +#include <linux/if_bridge.h> +#include <net/vxlan.h> + +#include "txgbe.h" +#include "txgbe_hw.h" +#include "txgbe_phy.h" +#include "txgbe_bp.h" + +char txgbe_driver_name[32] = TXGBE_NAME; +static const char txgbe_driver_string[] = + "WangXun 10 Gigabit PCI Express Network Driver"; + +#define DRV_HW_PERF + +#define FPGA + +#define DRIVERIOV + +#define BYPASS_TAG + +#define RELEASE_TAG + +#define DRV_VERSION __stringify(1.1.17oe) + +const char txgbe_driver_version[32] = DRV_VERSION; +static const char txgbe_copyright[] = + "Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd"; +static const char txgbe_overheat_msg[] = + "Network adapter has been stopped because it has overheated. 
" + "If the problem persists, restart the computer, or " + "power off the system and replace the adapter"; +static const char txgbe_underheat_msg[] = + "Network adapter has been started again since the temperature " + "has been back to normal state"; + +/* txgbe_pci_tbl - PCI Device ID Table + * + * Wildcard entries (PCI_ANY_ID) should come last + * Last entry must be all 0s + * + * { Vendor ID, Device ID, SubVendor ID, SubDevice ID, + * Class, Class Mask, private data (not used) } + */ +static const struct pci_device_id txgbe_pci_tbl[] = { + { PCI_VDEVICE(TRUSTNETIC, TXGBE_DEV_ID_SP1000), 0}, + { PCI_VDEVICE(TRUSTNETIC, TXGBE_DEV_ID_WX1820), 0}, + /* required last entry */ + { .device = 0 } +}; +MODULE_DEVICE_TABLE(pci, txgbe_pci_tbl); + +MODULE_AUTHOR("Beijing WangXun Technology Co., Ltd, linux.nic@trustnetic.com"); +MODULE_DESCRIPTION("WangXun(R) 10 Gigabit PCI Express Network Driver"); +MODULE_LICENSE("GPL"); +MODULE_VERSION(DRV_VERSION); + +#define DEFAULT_DEBUG_LEVEL_SHIFT 3 + +static struct workqueue_struct *txgbe_wq; + +static bool txgbe_is_sfp(struct txgbe_hw *hw); +static bool txgbe_check_cfg_remove(struct txgbe_hw *hw, struct pci_dev *pdev); +static void txgbe_clean_rx_ring(struct txgbe_ring *rx_ring); +static void txgbe_clean_tx_ring(struct txgbe_ring *tx_ring); +static void txgbe_napi_enable_all(struct txgbe_adapter *adapter); +static void txgbe_napi_disable_all(struct txgbe_adapter *adapter); + +extern txgbe_dptype txgbe_ptype_lookup[256]; + +static inline txgbe_dptype txgbe_decode_ptype(const u8 ptype) +{ + return txgbe_ptype_lookup[ptype]; +} + +static inline txgbe_dptype +decode_rx_desc_ptype(const union txgbe_rx_desc *rx_desc) +{ + return txgbe_decode_ptype(TXGBE_RXD_PKTTYPE(rx_desc)); +} + +static void txgbe_check_minimum_link(struct txgbe_adapter *adapter, + int expected_gts) +{ + struct txgbe_hw *hw = &adapter->hw; + struct pci_dev *pdev; + + /* Some devices are not connected over PCIe and thus do not negotiate + * speed. 
These devices do not have valid bus info, and thus any report + * we generate may not be correct. + */ + if (hw->bus.type == txgbe_bus_type_internal) + return; + + pdev = adapter->pdev; + pcie_print_link_status(pdev); +} + +/** + * txgbe_enumerate_functions - Get the number of ports this device has + * @adapter: adapter structure + * + * This function enumerates the physical functions co-located on a single slot, + * in order to determine how many ports a device has. This is most useful in + * determining the required GT/s of PCIe bandwidth necessary for optimal + * performance. + **/ +static inline int txgbe_enumerate_functions(struct txgbe_adapter *adapter) +{ + struct pci_dev *entry, *pdev = adapter->pdev; + int physfns = 0; + + list_for_each_entry(entry, &pdev->bus->devices, bus_list) { + /* When the devices on the bus don't all match our device ID, + * we can't reliably determine the correct number of + * functions. This can occur if a function has been direct + * attached to a virtual machine using VT-d, for example. In + * this case, simply return -1 to indicate this. 
+ */ + if ((entry->vendor != pdev->vendor) || + (entry->device != pdev->device)) + return -1; + + physfns++; + } + + return physfns; +} + +void txgbe_service_event_schedule(struct txgbe_adapter *adapter) +{ + if (!test_bit(__TXGBE_DOWN, &adapter->state) && + !test_bit(__TXGBE_REMOVING, &adapter->state) && + !test_and_set_bit(__TXGBE_SERVICE_SCHED, &adapter->state)) + queue_work(txgbe_wq, &adapter->service_task); +} + +static void txgbe_service_event_complete(struct txgbe_adapter *adapter) +{ + BUG_ON(!test_bit(__TXGBE_SERVICE_SCHED, &adapter->state)); + + /* flush memory to make sure state is correct before next watchdog */ + smp_mb__before_atomic(); + clear_bit(__TXGBE_SERVICE_SCHED, &adapter->state); +} + +static void txgbe_remove_adapter(struct txgbe_hw *hw) +{ + struct txgbe_adapter *adapter = hw->back; + + if (!hw->hw_addr) + return; + hw->hw_addr = NULL; + e_dev_err("Adapter removed\n"); + if (test_bit(__TXGBE_SERVICE_INITED, &adapter->state)) + txgbe_service_event_schedule(adapter); +} + +static void txgbe_check_remove(struct txgbe_hw *hw, u32 reg) +{ + u32 value; + + /* The following check not only optimizes a bit by not + * performing a read on the status register when the + * register just read was a status register read that + * returned TXGBE_FAILED_READ_REG. It also blocks any + * potential recursion. 
+ */ + if (reg == TXGBE_CFG_PORT_ST) { + txgbe_remove_adapter(hw); + return; + } + value = rd32(hw, TXGBE_CFG_PORT_ST); + if (value == TXGBE_FAILED_READ_REG) + txgbe_remove_adapter(hw); +} + +static u32 txgbe_validate_register_read(struct txgbe_hw *hw, u32 reg, bool quiet) +{ + int i; + u32 value; + u8 __iomem *reg_addr; + struct txgbe_adapter *adapter = hw->back; + + reg_addr = READ_ONCE(hw->hw_addr); + if (TXGBE_REMOVED(reg_addr)) + return TXGBE_FAILED_READ_REG; + for (i = 0; i < TXGBE_DEAD_READ_RETRIES; ++i) { + value = txgbe_rd32(reg_addr + reg); + if (value != TXGBE_DEAD_READ_REG) + break; + } + if (quiet) + return value; + if (value == TXGBE_DEAD_READ_REG) + e_err(drv, "%s: register %x read unchanged\n", __func__, reg); + else + e_warn(hw, "%s: register %x read recovered after %d retries\n", + __func__, reg, i + 1); + return value; +} + +/** + * txgbe_read_reg - Read from device register + * @hw: hw specific details + * @reg: offset of register to read + * + * Returns : value read or TXGBE_FAILED_READ_REG if removed + * + * This function is used to read device registers. It checks for device + * removal by confirming any read that returns all ones by checking the + * status register value for all ones. This function avoids reading from + * the hardware if a removal was previously detected in which case it + * returns TXGBE_FAILED_READ_REG (all ones). 
+ */ +u32 txgbe_read_reg(struct txgbe_hw *hw, u32 reg, bool quiet) +{ + u32 value; + u8 __iomem *reg_addr; + + reg_addr = READ_ONCE(hw->hw_addr); + if (TXGBE_REMOVED(reg_addr)) + return TXGBE_FAILED_READ_REG; + value = txgbe_rd32(reg_addr + reg); + if (unlikely(value == TXGBE_FAILED_READ_REG)) + txgbe_check_remove(hw, reg); + if (unlikely(value == TXGBE_DEAD_READ_REG)) + value = txgbe_validate_register_read(hw, reg, quiet); + return value; +} + +static void txgbe_release_hw_control(struct txgbe_adapter *adapter) +{ + /* Let firmware take over control of hw */ + wr32m(&adapter->hw, TXGBE_CFG_PORT_CTL, + TXGBE_CFG_PORT_CTL_DRV_LOAD, 0); +} + +static void txgbe_get_hw_control(struct txgbe_adapter *adapter) +{ + /* Let firmware know the driver has taken over */ + wr32m(&adapter->hw, TXGBE_CFG_PORT_CTL, + TXGBE_CFG_PORT_CTL_DRV_LOAD, TXGBE_CFG_PORT_CTL_DRV_LOAD); +} + +/** + * txgbe_set_ivar - set the IVAR registers, mapping interrupt causes to vectors + * @adapter: pointer to adapter struct + * @direction: 0 for Rx, 1 for Tx, -1 for other causes + * @queue: queue to map the corresponding interrupt to + * @msix_vector: the vector to map to the corresponding queue + * + **/ +static void txgbe_set_ivar(struct txgbe_adapter *adapter, s8 direction, + u16 queue, u16 msix_vector) +{ + u32 ivar, index; + struct txgbe_hw *hw = &adapter->hw; + + if (direction == -1) { + /* other causes */ + msix_vector |= TXGBE_PX_IVAR_ALLOC_VAL; + index = 0; + ivar = rd32(&adapter->hw, TXGBE_PX_MISC_IVAR); + ivar &= ~(0xFF << index); + ivar |= (msix_vector << index); + wr32(&adapter->hw, TXGBE_PX_MISC_IVAR, ivar); + } else { + /* tx or rx causes */ + msix_vector |= TXGBE_PX_IVAR_ALLOC_VAL; + index = ((16 * (queue & 1)) + (8 * direction)); + ivar = rd32(hw, TXGBE_PX_IVAR(queue >> 1)); + ivar &= ~(0xFF << index); + ivar |= (msix_vector << index); + wr32(hw, TXGBE_PX_IVAR(queue >> 1), ivar); + } +} + +void txgbe_unmap_and_free_tx_resource(struct txgbe_ring *ring, + struct txgbe_tx_buffer 
*tx_buffer) +{ + if (tx_buffer->skb) { + dev_kfree_skb_any(tx_buffer->skb); + if (dma_unmap_len(tx_buffer, len)) + dma_unmap_single(ring->dev, + dma_unmap_addr(tx_buffer, dma), + dma_unmap_len(tx_buffer, len), + DMA_TO_DEVICE); + } else if (dma_unmap_len(tx_buffer, len)) { + dma_unmap_page(ring->dev, + dma_unmap_addr(tx_buffer, dma), + dma_unmap_len(tx_buffer, len), + DMA_TO_DEVICE); + } + tx_buffer->next_to_watch = NULL; + tx_buffer->skb = NULL; + dma_unmap_len_set(tx_buffer, len, 0); + /* tx_buffer must be completely set up in the transmit path */ +} + +static void txgbe_update_xoff_received(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + struct txgbe_hw_stats *hwstats = &adapter->stats; + u32 xoff[8] = {0}; + int tc; + int i; + + /* update stats for each tc, only valid with PFC enabled */ + for (i = 0; i < MAX_TX_PACKET_BUFFERS; i++) { + u32 pxoffrxc; + wr32m(hw, TXGBE_MMC_CONTROL, TXGBE_MMC_CONTROL_UP, i<<16); + pxoffrxc = rd32(hw, TXGBE_MAC_PXOFFRXC); + hwstats->pxoffrxc[i] += pxoffrxc; + /* Get the TC for given UP */ + tc = netdev_get_prio_tc_map(adapter->netdev, i); + xoff[tc] += pxoffrxc; + } + + /* disarm tx queues that have received xoff frames */ + for (i = 0; i < adapter->num_tx_queues; i++) { + struct txgbe_ring *tx_ring = adapter->tx_ring[i]; + + tc = tx_ring->dcb_tc; + if ((tc <= 7) && (xoff[tc])) + clear_bit(__TXGBE_HANG_CHECK_ARMED, &tx_ring->state); + } +} + +static u64 txgbe_get_tx_completed(struct txgbe_ring *ring) +{ + return ring->stats.packets; +} + +static u64 txgbe_get_tx_pending(struct txgbe_ring *ring) +{ + struct txgbe_adapter *adapter; + struct txgbe_hw *hw; + u32 head, tail; + + if (ring->accel) + adapter = ring->accel->adapter; + else + adapter = ring->q_vector->adapter; + + hw = &adapter->hw; + head = rd32(hw, TXGBE_PX_TR_RP(ring->reg_idx)); + tail = rd32(hw, TXGBE_PX_TR_WP(ring->reg_idx)); + + return ((head <= tail) ? 
tail : tail + ring->count) - head; +} + +static inline bool txgbe_check_tx_hang(struct txgbe_ring *tx_ring) +{ + u64 tx_done = txgbe_get_tx_completed(tx_ring); + u64 tx_done_old = tx_ring->tx_stats.tx_done_old; + u64 tx_pending = txgbe_get_tx_pending(tx_ring); + + clear_check_for_tx_hang(tx_ring); + + /* + * Check for a hung queue, but be thorough. This verifies + * that a transmit has been completed since the previous + * check AND there is at least one packet pending. The + * ARMED bit is set to indicate a potential hang. The + * bit is cleared if a pause frame is received to remove + * false hang detection due to PFC or 802.3x frames. By + * requiring this to fail twice we avoid races with + * pfc clearing the ARMED bit and conditions where we + * run the check_tx_hang logic with a transmit completion + * pending but without time to complete it yet. + */ + if (tx_done_old == tx_done && tx_pending) + /* make sure it is true for two checks in a row */ + return test_and_set_bit(__TXGBE_HANG_CHECK_ARMED, + &tx_ring->state); + /* update completed stats and continue */ + tx_ring->tx_stats.tx_done_old = tx_done; + /* reset the countdown */ + clear_bit(__TXGBE_HANG_CHECK_ARMED, &tx_ring->state); + + return false; +} + +/** + * txgbe_tx_timeout - Respond to a Tx Hang + * @netdev: network interface device structure + **/ +static void txgbe_tx_timeout(struct net_device *netdev) +{ + struct txgbe_adapter *adapter = netdev_priv(netdev); + struct txgbe_hw *hw = &adapter->hw; + bool real_tx_hang = false; + int i; + u16 value = 0; + u32 value2 = 0, value3 = 0; + u32 head, tail; + + for (i = 0; i < adapter->num_tx_queues; i++) { + struct txgbe_ring *tx_ring = adapter->tx_ring[i]; + if (check_for_tx_hang(tx_ring) && txgbe_check_tx_hang(tx_ring)) + real_tx_hang = true; + } + + pci_read_config_word(adapter->pdev, PCI_VENDOR_ID, &value); + ERROR_REPORT1(TXGBE_ERROR_POLLING, "pci vendor id is 0x%x\n", value); + + pci_read_config_word(adapter->pdev, PCI_COMMAND, &value); + 
ERROR_REPORT1(TXGBE_ERROR_POLLING, "pci command reg is 0x%x.\n", value); + + for (i = 0; i < adapter->num_tx_queues; i++) { + head = rd32(&adapter->hw, TXGBE_PX_TR_RP(adapter->tx_ring[i]->reg_idx)); + tail = rd32(&adapter->hw, TXGBE_PX_TR_WP(adapter->tx_ring[i]->reg_idx)); + + ERROR_REPORT1(TXGBE_ERROR_POLLING, + "tx ring %d next_to_use is %d, next_to_clean is %d\n", + i, adapter->tx_ring[i]->next_to_use, adapter->tx_ring[i]->next_to_clean); + ERROR_REPORT1(TXGBE_ERROR_POLLING, + "tx ring %d hw rp is 0x%x, wp is 0x%x\n", i, head, tail); + } + + value2 = rd32(&adapter->hw, TXGBE_PX_IMS(0)); + value3 = rd32(&adapter->hw, TXGBE_PX_IMS(1)); + ERROR_REPORT1(TXGBE_ERROR_POLLING, + "PX_IMS0 value is 0x%08x, PX_IMS1 value is 0x%08x\n", value2, value3); + + if (value2 || value3) { + ERROR_REPORT1(TXGBE_ERROR_POLLING, "clear interrupt mask.\n"); + wr32(&adapter->hw, TXGBE_PX_ICS(0), value2); + wr32(&adapter->hw, TXGBE_PX_IMC(0), value2); + wr32(&adapter->hw, TXGBE_PX_ICS(1), value3); + wr32(&adapter->hw, TXGBE_PX_IMC(1), value3); + } + + if (adapter->hw.bus.lan_id == 0) { + ERROR_REPORT1(TXGBE_ERROR_POLLING, "tx timeout. 
do pcie recovery.\n"); + adapter->flags2 |= TXGBE_FLAG2_PCIE_NEED_RECOVER; + txgbe_service_event_schedule(adapter); + } else + wr32(&adapter->hw, TXGBE_MIS_PF_SM, 1); +} + +#define TX_WAKE_THRESHOLD (DESC_NEEDED * 2) + +/** + * txgbe_clean_tx_irq - Reclaim resources after transmit completes + * @q_vector: structure containing interrupt and ring information + * @tx_ring: tx ring to clean + **/ +static bool txgbe_clean_tx_irq(struct txgbe_q_vector *q_vector, + struct txgbe_ring *tx_ring) +{ + struct txgbe_adapter *adapter = q_vector->adapter; + struct txgbe_tx_buffer *tx_buffer; + union txgbe_tx_desc *tx_desc; + unsigned int total_bytes = 0, total_packets = 0; + unsigned int budget = q_vector->tx.work_limit; + unsigned int i = tx_ring->next_to_clean; + + if (test_bit(__TXGBE_DOWN, &adapter->state)) + return true; + + tx_buffer = &tx_ring->tx_buffer_info[i]; + tx_desc = TXGBE_TX_DESC(tx_ring, i); + i -= tx_ring->count; + + do { + union txgbe_tx_desc *eop_desc = tx_buffer->next_to_watch; + + /* if next_to_watch is not set then there is no work pending */ + if (!eop_desc) + break; + + /* prevent any other reads prior to eop_desc */ + read_barrier_depends(); + + /* if DD is not set pending work has not been completed */ + if (!(eop_desc->wb.status & cpu_to_le32(TXGBE_TXD_STAT_DD))) + break; + + /* clear next_to_watch to prevent false hangs */ + tx_buffer->next_to_watch = NULL; + + /* update the statistics for this packet */ + total_bytes += tx_buffer->bytecount; + total_packets += tx_buffer->gso_segs; + + /* free the skb */ + dev_consume_skb_any(tx_buffer->skb); + + /* unmap skb header data */ + dma_unmap_single(tx_ring->dev, + dma_unmap_addr(tx_buffer, dma), + dma_unmap_len(tx_buffer, len), + DMA_TO_DEVICE); + + /* clear tx_buffer data */ + tx_buffer->skb = NULL; + dma_unmap_len_set(tx_buffer, len, 0); + + /* unmap remaining buffers */ + while (tx_desc != eop_desc) { + tx_buffer++; + tx_desc++; + i++; + if (unlikely(!i)) { + i -= tx_ring->count; + tx_buffer = 
tx_ring->tx_buffer_info; + tx_desc = TXGBE_TX_DESC(tx_ring, 0); + } + + /* unmap any remaining paged data */ + if (dma_unmap_len(tx_buffer, len)) { + dma_unmap_page(tx_ring->dev, + dma_unmap_addr(tx_buffer, dma), + dma_unmap_len(tx_buffer, len), + DMA_TO_DEVICE); + dma_unmap_len_set(tx_buffer, len, 0); + } + } + + /* move us one more past the eop_desc for start of next pkt */ + tx_buffer++; + tx_desc++; + i++; + if (unlikely(!i)) { + i -= tx_ring->count; + tx_buffer = tx_ring->tx_buffer_info; + tx_desc = TXGBE_TX_DESC(tx_ring, 0); + } + + /* issue prefetch for next Tx descriptor */ + prefetch(tx_desc); + + /* update budget accounting */ + budget--; + } while (likely(budget)); + + i += tx_ring->count; + tx_ring->next_to_clean = i; + u64_stats_update_begin(&tx_ring->syncp); + tx_ring->stats.bytes += total_bytes; + tx_ring->stats.packets += total_packets; + u64_stats_update_end(&tx_ring->syncp); + q_vector->tx.total_bytes += total_bytes; + q_vector->tx.total_packets += total_packets; + + if (check_for_tx_hang(tx_ring) && txgbe_check_tx_hang(tx_ring)) { + /* schedule immediate reset if we believe we hung */ + struct txgbe_hw *hw = &adapter->hw; + u16 value = 0; + + e_err(drv, "Detected Tx Unit Hang\n" + " Tx Queue <%d>\n" + " TDH, TDT <%x>, <%x>\n" + " next_to_use <%x>\n" + " next_to_clean <%x>\n" + "tx_buffer_info[next_to_clean]\n" + " time_stamp <%lx>\n" + " jiffies <%lx>\n", + tx_ring->queue_index, + rd32(hw, TXGBE_PX_TR_RP(tx_ring->reg_idx)), + rd32(hw, TXGBE_PX_TR_WP(tx_ring->reg_idx)), + tx_ring->next_to_use, i, + tx_ring->tx_buffer_info[i].time_stamp, jiffies); + + pci_read_config_word(adapter->pdev, PCI_VENDOR_ID, &value); + if (value == TXGBE_FAILED_READ_CFG_WORD) { + e_info(hw, "pcie link has been lost.\n"); + } + + netif_stop_subqueue(tx_ring->netdev, tx_ring->queue_index); + + e_info(probe, + "tx hang %d detected on queue %d, resetting adapter\n", + adapter->tx_timeout_count + 1, tx_ring->queue_index); + + /* schedule immediate reset if we believe we hung 
*/ + e_info(hw, "real tx hang. do pcie recovery.\n"); + adapter->flags2 |= TXGBE_FLAG2_PCIE_NEED_RECOVER; + txgbe_service_event_schedule(adapter); + + /* the adapter is about to reset, no point in enabling stuff */ + return true; + } + + netdev_tx_completed_queue(txring_txq(tx_ring), + total_packets, total_bytes); + + if (unlikely(total_packets && netif_carrier_ok(tx_ring->netdev) && + (txgbe_desc_unused(tx_ring) >= TX_WAKE_THRESHOLD))) { + /* Make sure that anybody stopping the queue after this + * sees the new next_to_clean. + */ + smp_mb(); + + if (__netif_subqueue_stopped(tx_ring->netdev, + tx_ring->queue_index) + && !test_bit(__TXGBE_DOWN, &adapter->state)) { + netif_wake_subqueue(tx_ring->netdev, + tx_ring->queue_index); + ++tx_ring->tx_stats.restart_queue; + } + } + + return !!budget; +} + +#define TXGBE_RSS_L4_TYPES_MASK \ + ((1ul << TXGBE_RXD_RSSTYPE_IPV4_TCP) | \ + (1ul << TXGBE_RXD_RSSTYPE_IPV4_UDP) | \ + (1ul << TXGBE_RXD_RSSTYPE_IPV4_SCTP) | \ + (1ul << TXGBE_RXD_RSSTYPE_IPV6_TCP) | \ + (1ul << TXGBE_RXD_RSSTYPE_IPV6_UDP) | \ + (1ul << TXGBE_RXD_RSSTYPE_IPV6_SCTP)) + +static inline void txgbe_rx_hash(struct txgbe_ring *ring, + union txgbe_rx_desc *rx_desc, + struct sk_buff *skb) +{ + u16 rss_type; + + if (!(ring->netdev->features & NETIF_F_RXHASH)) + return; + + rss_type = le16_to_cpu(rx_desc->wb.lower.lo_dword.hs_rss.pkt_info) & + TXGBE_RXD_RSSTYPE_MASK; + + if (!rss_type) + return; + + skb_set_hash(skb, le32_to_cpu(rx_desc->wb.lower.hi_dword.rss), + (TXGBE_RSS_L4_TYPES_MASK & (1ul << rss_type)) ? 
+ PKT_HASH_TYPE_L4 : PKT_HASH_TYPE_L3); +} + +/** + * txgbe_rx_checksum - indicate in skb if hw indicated a good cksum + * @ring: structure containing ring specific data + * @rx_desc: current Rx descriptor being processed + * @skb: skb currently being received and modified + **/ +static inline void txgbe_rx_checksum(struct txgbe_ring *ring, + union txgbe_rx_desc *rx_desc, + struct sk_buff *skb) +{ + txgbe_dptype dptype = decode_rx_desc_ptype(rx_desc); + + skb->ip_summed = CHECKSUM_NONE; + + skb_checksum_none_assert(skb); + + /* Rx csum disabled */ + if (!(ring->netdev->features & NETIF_F_RXCSUM)) + return; + + /* if IPv4 header checksum error */ + if ((txgbe_test_staterr(rx_desc, TXGBE_RXD_STAT_IPCS) && + txgbe_test_staterr(rx_desc, TXGBE_RXD_ERR_IPE)) || + (txgbe_test_staterr(rx_desc, TXGBE_RXD_STAT_OUTERIPCS) && + txgbe_test_staterr(rx_desc, TXGBE_RXD_ERR_OUTERIPER))) { + ring->rx_stats.csum_err++; + return; + } + + /* L4 checksum offload flag must be set for the below code to work */ + if (!txgbe_test_staterr(rx_desc, TXGBE_RXD_STAT_L4CS)) + return; + + /* likely incorrect csum if IPv6 Dest Header found */ + if (dptype.prot != TXGBE_DEC_PTYPE_PROT_SCTP && TXGBE_RXD_IPV6EX(rx_desc)) + return; + + /* if L4 checksum error */ + if (txgbe_test_staterr(rx_desc, TXGBE_RXD_ERR_TCPE)) { + ring->rx_stats.csum_err++; + return; + } + /* If there is an outer header present that might contain a checksum + * we need to bump the checksum level by 1 to reflect the fact that + * we are indicating we validated the inner checksum. + */ + if (dptype.etype >= TXGBE_DEC_PTYPE_ETYPE_IG) { + skb->csum_level = 1; + /* FIXME: can skb->csum_level and skb->encapsulation both be set? 
*/ + skb->encapsulation = 1; + } + + /* It must be a TCP or UDP or SCTP packet with a valid checksum */ + skb->ip_summed = CHECKSUM_UNNECESSARY; + ring->rx_stats.csum_good_cnt++; +} + +static bool txgbe_alloc_mapped_skb(struct txgbe_ring *rx_ring, + struct txgbe_rx_buffer *bi) +{ + struct sk_buff *skb = bi->skb; + dma_addr_t dma = bi->dma; + + if (unlikely(dma)) + return true; + + if (likely(!skb)) { + skb = netdev_alloc_skb_ip_align(rx_ring->netdev, + rx_ring->rx_buf_len); + if (unlikely(!skb)) { + rx_ring->rx_stats.alloc_rx_buff_failed++; + return false; + } + + bi->skb = skb; + + } + + dma = dma_map_single(rx_ring->dev, skb->data, + rx_ring->rx_buf_len, DMA_FROM_DEVICE); + + /* + * if mapping failed free memory back to system since + * there isn't much point in holding memory we can't use + */ + if (dma_mapping_error(rx_ring->dev, dma)) { + dev_kfree_skb_any(skb); + bi->skb = NULL; + + rx_ring->rx_stats.alloc_rx_buff_failed++; + return false; + } + + bi->dma = dma; + return true; +} + +static bool txgbe_alloc_mapped_page(struct txgbe_ring *rx_ring, + struct txgbe_rx_buffer *bi) +{ + struct page *page = bi->page; + dma_addr_t dma; + + /* since we are recycling buffers we should seldom need to alloc */ + if (likely(page)) + return true; + + /* alloc new page for storage */ + page = dev_alloc_pages(txgbe_rx_pg_order(rx_ring)); + if (unlikely(!page)) { + rx_ring->rx_stats.alloc_rx_page_failed++; + return false; + } + + /* map page for use */ + dma = dma_map_page(rx_ring->dev, page, 0, + txgbe_rx_pg_size(rx_ring), DMA_FROM_DEVICE); + + /* + * if mapping failed free memory back to system since + * there isn't much point in holding memory we can't use + */ + if (dma_mapping_error(rx_ring->dev, dma)) { + __free_pages(page, txgbe_rx_pg_order(rx_ring)); + + rx_ring->rx_stats.alloc_rx_page_failed++; + return false; + } + + bi->page_dma = dma; + bi->page = page; + bi->page_offset = 0; + + return true; +} + +/** + * txgbe_alloc_rx_buffers - Replace used receive buffers + * 
@rx_ring: ring to place buffers on + * @cleaned_count: number of buffers to replace + **/ +void txgbe_alloc_rx_buffers(struct txgbe_ring *rx_ring, u16 cleaned_count) +{ + union txgbe_rx_desc *rx_desc; + struct txgbe_rx_buffer *bi; + u16 i = rx_ring->next_to_use; + + /* nothing to do */ + if (!cleaned_count) + return; + + rx_desc = TXGBE_RX_DESC(rx_ring, i); + bi = &rx_ring->rx_buffer_info[i]; + i -= rx_ring->count; + + do { + if (ring_is_hs_enabled(rx_ring)) { + if (!txgbe_alloc_mapped_skb(rx_ring, bi)) + break; + rx_desc->read.hdr_addr = cpu_to_le64(bi->dma); + } + + if (!txgbe_alloc_mapped_page(rx_ring, bi)) + break; + rx_desc->read.pkt_addr = + cpu_to_le64(bi->page_dma + bi->page_offset); + + rx_desc++; + bi++; + i++; + if (unlikely(!i)) { + rx_desc = TXGBE_RX_DESC(rx_ring, 0); + bi = rx_ring->rx_buffer_info; + i -= rx_ring->count; + } + + /* clear the status bits for the next_to_use descriptor */ + rx_desc->wb.upper.status_error = 0; + + cleaned_count--; + } while (cleaned_count); + + i += rx_ring->count; + + if (rx_ring->next_to_use != i) { + rx_ring->next_to_use = i; + /* update next to alloc since we have filled the ring */ + rx_ring->next_to_alloc = i; + + /* Force memory writes to complete before letting h/w + * know there are new descriptors to fetch. (Only + * applicable for weak-ordered memory model archs, + * such as IA-64). 
+ */ + wmb(); + writel(i, rx_ring->tail); + } +} + +static inline u16 txgbe_get_hlen(struct txgbe_ring *rx_ring, + union txgbe_rx_desc *rx_desc) +{ + __le16 hdr_info = rx_desc->wb.lower.lo_dword.hs_rss.hdr_info; + u16 hlen = le16_to_cpu(hdr_info) & TXGBE_RXD_HDRBUFLEN_MASK; + + UNREFERENCED_PARAMETER(rx_ring); + + if (hlen > (TXGBE_RX_HDR_SIZE << TXGBE_RXD_HDRBUFLEN_SHIFT)) + hlen = 0; + else + hlen >>= TXGBE_RXD_HDRBUFLEN_SHIFT; + + return hlen; +} + +static void txgbe_set_rsc_gso_size(struct txgbe_ring __maybe_unused *ring, + struct sk_buff *skb) +{ + u16 hdr_len = eth_get_headlen(skb->data, skb_headlen(skb)); + + /* set gso_size to avoid messing up TCP MSS */ + skb_shinfo(skb)->gso_size = DIV_ROUND_UP((skb->len - hdr_len), + TXGBE_CB(skb)->append_cnt); + skb_shinfo(skb)->gso_type = SKB_GSO_TCPV4; +} + +static void txgbe_update_rsc_stats(struct txgbe_ring *rx_ring, + struct sk_buff *skb) +{ + /* if append_cnt is 0 then frame is not RSC */ + if (!TXGBE_CB(skb)->append_cnt) + return; + + rx_ring->rx_stats.rsc_count += TXGBE_CB(skb)->append_cnt; + rx_ring->rx_stats.rsc_flush++; + + txgbe_set_rsc_gso_size(rx_ring, skb); + + /* gso_size is computed using append_cnt so always clear it last */ + TXGBE_CB(skb)->append_cnt = 0; +} + +static void txgbe_rx_vlan(struct txgbe_ring *ring, + union txgbe_rx_desc *rx_desc, + struct sk_buff *skb) +{ + u8 idx = 0; + u16 ethertype; + + if ((ring->netdev->features & NETIF_F_HW_VLAN_CTAG_RX) && + txgbe_test_staterr(rx_desc, TXGBE_RXD_STAT_VP)) { + idx = (le16_to_cpu(rx_desc->wb.lower.lo_dword.hs_rss.pkt_info) & + TXGBE_RXD_TPID_MASK) >> TXGBE_RXD_TPID_SHIFT; + ethertype = ring->q_vector->adapter->hw.tpid[idx]; + __vlan_hwaccel_put_tag(skb, + htons(ethertype), + le16_to_cpu(rx_desc->wb.upper.vlan)); + } +} + +/** + * txgbe_process_skb_fields - Populate skb header fields from Rx descriptor + * @rx_ring: rx descriptor ring packet is being transacted on + * @rx_desc: pointer to the EOP Rx descriptor + * @skb: pointer to current skb being 
populated + * + * This function checks the ring, descriptor, and packet information in + * order to populate the hash, checksum, VLAN, timestamp, protocol, and + * other fields within the skb. + **/ +static void txgbe_process_skb_fields(struct txgbe_ring *rx_ring, + union txgbe_rx_desc *rx_desc, + struct sk_buff *skb) +{ + u32 flags = rx_ring->q_vector->adapter->flags; + + txgbe_update_rsc_stats(rx_ring, skb); + txgbe_rx_hash(rx_ring, rx_desc, skb); + txgbe_rx_checksum(rx_ring, rx_desc, skb); + + if (unlikely(flags & TXGBE_FLAG_RX_HWTSTAMP_ENABLED) && + unlikely(txgbe_test_staterr(rx_desc, TXGBE_RXD_STAT_TS))) { + txgbe_ptp_rx_hwtstamp(rx_ring->q_vector->adapter, skb); + rx_ring->last_rx_timestamp = jiffies; + } + + txgbe_rx_vlan(rx_ring, rx_desc, skb); + + skb_record_rx_queue(skb, rx_ring->queue_index); + + skb->protocol = eth_type_trans(skb, rx_ring->netdev); +} + +static void txgbe_rx_skb(struct txgbe_q_vector *q_vector, + struct sk_buff *skb) +{ + napi_gro_receive(&q_vector->napi, skb); +} + +/** + * txgbe_is_non_eop - process handling of non-EOP buffers + * @rx_ring: Rx ring being processed + * @rx_desc: Rx descriptor for current buffer + * @skb: Current socket buffer containing buffer in progress + * + * This function updates next to clean. If the buffer is an EOP buffer + * this function exits returning false, otherwise it will place the + * sk_buff in the next buffer to be chained and return true indicating + * that this is in fact a non-EOP buffer. + **/ +static bool txgbe_is_non_eop(struct txgbe_ring *rx_ring, + union txgbe_rx_desc *rx_desc, + struct sk_buff *skb) +{ + struct txgbe_rx_buffer *rx_buffer = + &rx_ring->rx_buffer_info[rx_ring->next_to_clean]; + u32 ntc = rx_ring->next_to_clean + 1; + + /* fetch, update, and store next to clean */ + ntc = (ntc < rx_ring->count) ? 
ntc : 0; + rx_ring->next_to_clean = ntc; + + prefetch(TXGBE_RX_DESC(rx_ring, ntc)); + + /* update RSC append count if present */ + if (ring_is_rsc_enabled(rx_ring)) { + __le32 rsc_enabled = rx_desc->wb.lower.lo_dword.data & + cpu_to_le32(TXGBE_RXD_RSCCNT_MASK); + + if (unlikely(rsc_enabled)) { + u32 rsc_cnt = le32_to_cpu(rsc_enabled); + + rsc_cnt >>= TXGBE_RXD_RSCCNT_SHIFT; + TXGBE_CB(skb)->append_cnt += rsc_cnt - 1; + + /* update ntc based on RSC value */ + ntc = le32_to_cpu(rx_desc->wb.upper.status_error); + ntc &= TXGBE_RXD_NEXTP_MASK; + ntc >>= TXGBE_RXD_NEXTP_SHIFT; + } + } + + /* if we are the last buffer then there is nothing else to do */ + if (likely(txgbe_test_staterr(rx_desc, TXGBE_RXD_STAT_EOP))) + return false; + + /* place skb in next buffer to be received */ + if (ring_is_hs_enabled(rx_ring)) { + rx_buffer->skb = rx_ring->rx_buffer_info[ntc].skb; + rx_buffer->dma = rx_ring->rx_buffer_info[ntc].dma; + rx_ring->rx_buffer_info[ntc].dma = 0; + } + rx_ring->rx_buffer_info[ntc].skb = skb; + rx_ring->rx_stats.non_eop_descs++; + + return true; +} + +/** + * txgbe_pull_tail - txgbe specific version of skb_pull_tail + * @skb: pointer to current skb being adjusted + * + * This function is an txgbe specific version of __pskb_pull_tail. The + * main difference between this version and the original function is that + * this function can make several assumptions about the state of things + * that allow for significant optimizations versus the standard function. + * As a result we can do things like drop a frag and maintain an accurate + * truesize for the skb. 
+ */ +static void txgbe_pull_tail(struct sk_buff *skb) +{ + skb_frag_t *frag = &skb_shinfo(skb)->frags[0]; + unsigned char *va; + unsigned int pull_len; + + /* + * it is valid to use page_address instead of kmap since we are + * working with pages allocated out of the lomem pool per + * alloc_page(GFP_ATOMIC) + */ + va = skb_frag_address(frag); + + /* + * we need the header to contain the greater of either ETH_HLEN or + * 60 bytes if the skb->len is less than 60 for skb_pad. + */ + pull_len = eth_get_headlen(va, TXGBE_RX_HDR_SIZE); + + /* align pull length to size of long to optimize memcpy performance */ + skb_copy_to_linear_data(skb, va, ALIGN(pull_len, sizeof(long))); + + /* update all of the pointers */ + skb_frag_size_sub(frag, pull_len); + frag->page_offset += pull_len; + skb->data_len -= pull_len; + skb->tail += pull_len; +} + +/** + * txgbe_dma_sync_frag - perform DMA sync for first frag of SKB + * @rx_ring: rx descriptor ring packet is being transacted on + * @skb: pointer to current skb being updated + * + * This function provides a basic DMA sync up for the first fragment of an + * skb. The reason for doing this is that the first fragment cannot be + * unmapped until we have reached the end of packet descriptor for a buffer + * chain. + */ +static void txgbe_dma_sync_frag(struct txgbe_ring *rx_ring, + struct sk_buff *skb) +{ + if (ring_uses_build_skb(rx_ring)) { + unsigned long offset = (unsigned long)(skb->data) & ~PAGE_MASK; + dma_sync_single_range_for_cpu(rx_ring->dev, + TXGBE_CB(skb)->dma, + offset, + skb_headlen(skb), + DMA_FROM_DEVICE); + } else { + skb_frag_t *frag = &skb_shinfo(skb)->frags[0]; + dma_sync_single_range_for_cpu(rx_ring->dev, + TXGBE_CB(skb)->dma, + frag->page_offset, + skb_frag_size(frag), + DMA_FROM_DEVICE); + } + + /* If the page was released, just unmap it. 
*/ + if (unlikely(TXGBE_CB(skb)->page_released)) { + dma_unmap_page_attrs(rx_ring->dev, TXGBE_CB(skb)->dma, + txgbe_rx_pg_size(rx_ring), + DMA_FROM_DEVICE, + TXGBE_RX_DMA_ATTR); + } +} + +/** + * txgbe_cleanup_headers - Correct corrupted or empty headers + * @rx_ring: rx descriptor ring packet is being transacted on + * @rx_desc: pointer to the EOP Rx descriptor + * @skb: pointer to current skb being fixed + * + * Check for corrupted packet headers caused by senders on the local L2 + * embedded NIC switch not setting up their Tx Descriptors right. These + * should be very rare. + * + * Also address the case where we are pulling data in on pages only + * and as such no data is present in the skb header. + * + * In addition if skb is not at least 60 bytes we need to pad it so that + * it is large enough to qualify as a valid Ethernet frame. + * + * Returns true if an error was encountered and skb was freed. + **/ +static bool txgbe_cleanup_headers(struct txgbe_ring *rx_ring, + union txgbe_rx_desc *rx_desc, + struct sk_buff *skb) +{ + struct net_device *netdev = rx_ring->netdev; + + /* verify that the packet does not have any known errors */ + if (unlikely(txgbe_test_staterr(rx_desc, + TXGBE_RXD_ERR_FRAME_ERR_MASK) && + !(netdev->features & NETIF_F_RXALL))) { + dev_kfree_skb_any(skb); + return true; + } + + /* place header in linear portion of buffer */ + if (skb_is_nonlinear(skb) && !skb_headlen(skb)) + txgbe_pull_tail(skb); + + /* if eth_skb_pad returns an error the skb was freed */ + if (eth_skb_pad(skb)) + return true; + + return false; +} + +/** + * txgbe_reuse_rx_page - page flip buffer and store it back on the ring + * @rx_ring: rx descriptor ring to store buffers on + * @old_buff: donor buffer to have page reused + * + * Synchronizes page for reuse by the adapter + **/ +static void txgbe_reuse_rx_page(struct txgbe_ring *rx_ring, + struct txgbe_rx_buffer *old_buff) +{ + struct txgbe_rx_buffer *new_buff; + u16 nta = rx_ring->next_to_alloc; + + new_buff = 
&rx_ring->rx_buffer_info[nta]; + + /* update, and store next to alloc */ + nta++; + rx_ring->next_to_alloc = (nta < rx_ring->count) ? nta : 0; + + /* transfer page from old buffer to new buffer */ + new_buff->page_dma = old_buff->page_dma; + new_buff->page = old_buff->page; + new_buff->page_offset = old_buff->page_offset; + + /* sync the buffer for use by the device */ + dma_sync_single_range_for_device(rx_ring->dev, new_buff->page_dma, + new_buff->page_offset, + txgbe_rx_bufsz(rx_ring), + DMA_FROM_DEVICE); +} + +static inline bool txgbe_page_is_reserved(struct page *page) +{ + return (page_to_nid(page) != numa_mem_id()) || page_is_pfmemalloc(page); +} + +/** + * txgbe_add_rx_frag - Add contents of Rx buffer to sk_buff + * @rx_ring: rx descriptor ring to transact packets on + * @rx_buffer: buffer containing page to add + * @rx_desc: descriptor containing length of buffer written by hardware + * @skb: sk_buff to place the data into + * + * This function will add the data contained in rx_buffer->page to the skb. + * This is done either through a direct copy if the data in the buffer is + * less than the skb header size, otherwise it will just attach the page as + * a frag to the skb. + * + * The function will then update the page offset if necessary and return + * true if the buffer can be reused by the adapter. 
+ **/ +static bool txgbe_add_rx_frag(struct txgbe_ring *rx_ring, + struct txgbe_rx_buffer *rx_buffer, + union txgbe_rx_desc *rx_desc, + struct sk_buff *skb) +{ + struct page *page = rx_buffer->page; + unsigned int size = le16_to_cpu(rx_desc->wb.upper.length); +#if (PAGE_SIZE < 8192) + unsigned int truesize = txgbe_rx_bufsz(rx_ring); +#else + unsigned int truesize = ALIGN(size, L1_CACHE_BYTES); + unsigned int last_offset = txgbe_rx_pg_size(rx_ring) - + txgbe_rx_bufsz(rx_ring); +#endif + + if ((size <= TXGBE_RX_HDR_SIZE) && !skb_is_nonlinear(skb) && + !ring_is_hs_enabled(rx_ring)) { + unsigned char *va = page_address(page) + rx_buffer->page_offset; + + memcpy(__skb_put(skb, size), va, ALIGN(size, sizeof(long))); + + /* page is not reserved, we can reuse buffer as-is */ + if (likely(!txgbe_page_is_reserved(page))) + return true; + + /* this page cannot be reused so discard it */ + __free_pages(page, txgbe_rx_pg_order(rx_ring)); + return false; + } + + skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page, + rx_buffer->page_offset, size, truesize); + + /* avoid re-using remote pages */ + if (unlikely(txgbe_page_is_reserved(page))) + return false; + +#if (PAGE_SIZE < 8192) + /* if we are only owner of page we can reuse it */ + if (unlikely(page_count(page) != 1)) + return false; + + /* flip page offset to other buffer */ + rx_buffer->page_offset ^= truesize; +#else + /* move offset up to the next cache line */ + rx_buffer->page_offset += truesize; + + if (rx_buffer->page_offset > last_offset) + return false; +#endif + + /* Even if we own the page, we are not allowed to use atomic_set() + * This would break get_page_unless_zero() users. 
+ */ + page_ref_inc(page); + + return true; +} + +static struct sk_buff *txgbe_fetch_rx_buffer(struct txgbe_ring *rx_ring, + union txgbe_rx_desc *rx_desc) +{ + struct txgbe_rx_buffer *rx_buffer; + struct sk_buff *skb; + struct page *page; + + rx_buffer = &rx_ring->rx_buffer_info[rx_ring->next_to_clean]; + page = rx_buffer->page; + prefetchw(page); + + skb = rx_buffer->skb; + + if (likely(!skb)) { + void *page_addr = page_address(page) + + rx_buffer->page_offset; + + /* prefetch first cache line of first page */ + prefetch(page_addr); +#if L1_CACHE_BYTES < 128 + prefetch(page_addr + L1_CACHE_BYTES); +#endif + + /* allocate a skb to store the frags */ + skb = netdev_alloc_skb_ip_align(rx_ring->netdev, + TXGBE_RX_HDR_SIZE); + if (unlikely(!skb)) { + rx_ring->rx_stats.alloc_rx_buff_failed++; + return NULL; + } + + /* + * we will be copying header into skb->data in + * pskb_may_pull so it is in our interest to prefetch + * it now to avoid a possible cache miss + */ + prefetchw(skb->data); + + /* + * Delay unmapping of the first packet. It carries the + * header information, HW may still access the header + * after the writeback. 
Only unmap it when EOP is + * reached + */ + if (likely(txgbe_test_staterr(rx_desc, TXGBE_RXD_STAT_EOP))) + goto dma_sync; + + TXGBE_CB(skb)->dma = rx_buffer->page_dma; + } else { + if (txgbe_test_staterr(rx_desc, TXGBE_RXD_STAT_EOP)) + txgbe_dma_sync_frag(rx_ring, skb); + +dma_sync: + /* we are reusing so sync this buffer for CPU use */ + dma_sync_single_range_for_cpu(rx_ring->dev, + rx_buffer->page_dma, + rx_buffer->page_offset, + txgbe_rx_bufsz(rx_ring), + DMA_FROM_DEVICE); + + rx_buffer->skb = NULL; + } + + /* pull page into skb */ + if (txgbe_add_rx_frag(rx_ring, rx_buffer, rx_desc, skb)) { + /* hand second half of page back to the ring */ + txgbe_reuse_rx_page(rx_ring, rx_buffer); + } else if (TXGBE_CB(skb)->dma == rx_buffer->page_dma) { + /* the page has been released from the ring */ + TXGBE_CB(skb)->page_released = true; + } else { + /* we are not reusing the buffer so unmap it */ + dma_unmap_page(rx_ring->dev, rx_buffer->page_dma, + txgbe_rx_pg_size(rx_ring), + DMA_FROM_DEVICE); + } + + /* clear contents of buffer_info */ + rx_buffer->page = NULL; + + return skb; +} + +static struct sk_buff *txgbe_fetch_rx_buffer_hs(struct txgbe_ring *rx_ring, + union txgbe_rx_desc *rx_desc) +{ + struct txgbe_rx_buffer *rx_buffer; + struct sk_buff *skb; + struct page *page; + int hdr_len = 0; + + rx_buffer = &rx_ring->rx_buffer_info[rx_ring->next_to_clean]; + page = rx_buffer->page; + prefetchw(page); + + skb = rx_buffer->skb; + rx_buffer->skb = NULL; + prefetchw(skb->data); + + if (!skb_is_nonlinear(skb)) { + hdr_len = txgbe_get_hlen(rx_ring, rx_desc); + if (hdr_len > 0) { + __skb_put(skb, hdr_len); + TXGBE_CB(skb)->dma_released = true; + TXGBE_CB(skb)->dma = rx_buffer->dma; + rx_buffer->dma = 0; + } else { + dma_unmap_single(rx_ring->dev, + rx_buffer->dma, + rx_ring->rx_buf_len, + DMA_FROM_DEVICE); + rx_buffer->dma = 0; + if (likely(txgbe_test_staterr(rx_desc, + TXGBE_RXD_STAT_EOP))) + goto dma_sync; + TXGBE_CB(skb)->dma = rx_buffer->page_dma; + goto add_frag; + } + } + 
+ if (txgbe_test_staterr(rx_desc, TXGBE_RXD_STAT_EOP)) { + if (skb_headlen(skb)) { + if (TXGBE_CB(skb)->dma_released == true) { + dma_unmap_single(rx_ring->dev, + TXGBE_CB(skb)->dma, + rx_ring->rx_buf_len, + DMA_FROM_DEVICE); + TXGBE_CB(skb)->dma = 0; + TXGBE_CB(skb)->dma_released = false; + } + } else + txgbe_dma_sync_frag(rx_ring, skb); + } + +dma_sync: + /* we are reusing so sync this buffer for CPU use */ + dma_sync_single_range_for_cpu(rx_ring->dev, + rx_buffer->page_dma, + rx_buffer->page_offset, + txgbe_rx_bufsz(rx_ring), + DMA_FROM_DEVICE); +add_frag: + /* pull page into skb */ + if (txgbe_add_rx_frag(rx_ring, rx_buffer, rx_desc, skb)) { + /* hand second half of page back to the ring */ + txgbe_reuse_rx_page(rx_ring, rx_buffer); + } else if (TXGBE_CB(skb)->dma == rx_buffer->page_dma) { + /* the page has been released from the ring */ + TXGBE_CB(skb)->page_released = true; + } else { + /* we are not reusing the buffer so unmap it */ + dma_unmap_page(rx_ring->dev, rx_buffer->page_dma, + txgbe_rx_pg_size(rx_ring), + DMA_FROM_DEVICE); + } + + /* clear contents of buffer_info */ + rx_buffer->page = NULL; + + return skb; +} + +/** + * txgbe_clean_rx_irq - Clean completed descriptors from Rx ring - bounce buf + * @q_vector: structure containing interrupt and ring information + * @rx_ring: rx descriptor ring to transact packets on + * @budget: Total limit on number of packets to process + * + * This function provides a "bounce buffer" approach to Rx interrupt + * processing. The advantage to this is that on systems that have + * expensive overhead for IOMMU access this provides a means of avoiding + * it by maintaining the mapping of the page to the system. + * + * Returns amount of work completed.
+ **/ +static int txgbe_clean_rx_irq(struct txgbe_q_vector *q_vector, + struct txgbe_ring *rx_ring, + int budget) +{ + unsigned int total_rx_bytes = 0, total_rx_packets = 0; + u16 cleaned_count = txgbe_desc_unused(rx_ring); + + do { + union txgbe_rx_desc *rx_desc; + struct sk_buff *skb; + + /* return some buffers to hardware, one at a time is too slow */ + if (cleaned_count >= TXGBE_RX_BUFFER_WRITE) { + txgbe_alloc_rx_buffers(rx_ring, cleaned_count); + cleaned_count = 0; + } + + rx_desc = TXGBE_RX_DESC(rx_ring, rx_ring->next_to_clean); + + if (!txgbe_test_staterr(rx_desc, TXGBE_RXD_STAT_DD)) + break; + + /* This memory barrier is needed to keep us from reading + * any other fields out of the rx_desc until we know the + * descriptor has been written back + */ + dma_rmb(); + + /* retrieve a buffer from the ring */ + if (ring_is_hs_enabled(rx_ring)) + skb = txgbe_fetch_rx_buffer_hs(rx_ring, rx_desc); + else + skb = txgbe_fetch_rx_buffer(rx_ring, rx_desc); + + /* exit if we failed to retrieve a buffer */ + if (!skb) + break; + + cleaned_count++; + + /* place incomplete frames back on ring for completion */ + if (txgbe_is_non_eop(rx_ring, rx_desc, skb)) + continue; + + /* verify the packet layout is correct */ + if (txgbe_cleanup_headers(rx_ring, rx_desc, skb)) + continue; + + /* probably a little skewed due to removing CRC */ + total_rx_bytes += skb->len; + + /* populate checksum, timestamp, VLAN, and protocol */ + txgbe_process_skb_fields(rx_ring, rx_desc, skb); + + txgbe_rx_skb(q_vector, skb); + + /* update budget accounting */ + total_rx_packets++; + } while (likely(total_rx_packets < budget)); + + u64_stats_update_begin(&rx_ring->syncp); + rx_ring->stats.packets += total_rx_packets; + rx_ring->stats.bytes += total_rx_bytes; + u64_stats_update_end(&rx_ring->syncp); + q_vector->rx.total_packets += total_rx_packets; + q_vector->rx.total_bytes += total_rx_bytes; + + return total_rx_packets; +} + +/** + * txgbe_configure_msix - Configure MSI-X hardware + * @adapter: 
board private structure + * + * txgbe_configure_msix sets up the hardware to properly generate MSI-X + * interrupts. + **/ +static void txgbe_configure_msix(struct txgbe_adapter *adapter) +{ + u16 v_idx; + + /* Populate MSIX to EITR Select */ + if (adapter->num_vfs >= 32) { + u32 eitrsel = (1 << (adapter->num_vfs - 32)) - 1; + wr32(&adapter->hw, TXGBE_PX_ITRSEL, eitrsel); + } else { + wr32(&adapter->hw, TXGBE_PX_ITRSEL, 0); + } + + /* + * Populate the IVAR table and set the ITR values to the + * corresponding register. + */ + for (v_idx = 0; v_idx < adapter->num_q_vectors; v_idx++) { + struct txgbe_q_vector *q_vector = adapter->q_vector[v_idx]; + struct txgbe_ring *ring; + + txgbe_for_each_ring(ring, q_vector->rx) + txgbe_set_ivar(adapter, 0, ring->reg_idx, v_idx); + + txgbe_for_each_ring(ring, q_vector->tx) + txgbe_set_ivar(adapter, 1, ring->reg_idx, v_idx); + + txgbe_write_eitr(q_vector); + } + + txgbe_set_ivar(adapter, -1, 0, v_idx); + + wr32(&adapter->hw, TXGBE_PX_ITR(v_idx), 1950); +} + +enum latency_range { + lowest_latency = 0, + low_latency = 1, + bulk_latency = 2, + latency_invalid = 255 +}; + +/** + * txgbe_update_itr - update the dynamic ITR value based on statistics + * @q_vector: structure containing interrupt and ring information + * @ring_container: structure containing ring performance data + * + * Stores a new ITR value based on packets and byte + * counts during the last interrupt. The advantage of per interrupt + * computation is faster updates and more accurate ITR for the current + * traffic pattern. Constants in this function were computed + * based on theoretical maximum wire speed and thresholds were set based + * on testing data as well as attempting to minimize response time + * while increasing bulk throughput. 
+ * this functionality is controlled by the InterruptThrottleRate module + * parameter (see txgbe_param.c) + **/ +static void txgbe_update_itr(struct txgbe_q_vector *q_vector, + struct txgbe_ring_container *ring_container) +{ + int bytes = ring_container->total_bytes; + int packets = ring_container->total_packets; + u32 timepassed_us; + u64 bytes_perint; + u8 itr_setting = ring_container->itr; + + if (packets == 0) + return; + + /* simple throttlerate management + * 0-10MB/s lowest (100000 ints/s) + * 10-20MB/s low (20000 ints/s) + * 20-1249MB/s bulk (12000 ints/s) + */ + /* what was last interrupt timeslice? */ + timepassed_us = q_vector->itr >> 2; + if (timepassed_us == 0) + return; + bytes_perint = bytes / timepassed_us; /* bytes/usec */ + + switch (itr_setting) { + case lowest_latency: + if (bytes_perint > 10) { + itr_setting = low_latency; + } + break; + case low_latency: + if (bytes_perint > 20) { + itr_setting = bulk_latency; + } else if (bytes_perint <= 10) { + itr_setting = lowest_latency; + } + break; + case bulk_latency: + if (bytes_perint <= 20) { + itr_setting = low_latency; + } + break; + } + + /* clear work counters since we have the values we need */ + ring_container->total_bytes = 0; + ring_container->total_packets = 0; + + /* write updated itr to ring container */ + ring_container->itr = itr_setting; +} + +/** + * txgbe_write_eitr - write EITR register in hardware specific way + * @q_vector: structure containing interrupt and ring information + * + * This function is made to be called by ethtool and by the driver + * when it needs to update EITR registers at runtime. Hardware + * specific quirks/differences are taken care of here. 
+ */ +void txgbe_write_eitr(struct txgbe_q_vector *q_vector) +{ + struct txgbe_adapter *adapter = q_vector->adapter; + struct txgbe_hw *hw = &adapter->hw; + int v_idx = q_vector->v_idx; + u32 itr_reg = q_vector->itr & TXGBE_MAX_EITR; + + itr_reg |= TXGBE_PX_ITR_CNT_WDIS; + + wr32(hw, TXGBE_PX_ITR(v_idx), itr_reg); +} + +static void txgbe_set_itr(struct txgbe_q_vector *q_vector) +{ + u16 new_itr = q_vector->itr; + u8 current_itr; + + txgbe_update_itr(q_vector, &q_vector->tx); + txgbe_update_itr(q_vector, &q_vector->rx); + + current_itr = max(q_vector->rx.itr, q_vector->tx.itr); + + switch (current_itr) { + /* counts and packets in update_itr are dependent on these numbers */ + case lowest_latency: + new_itr = TXGBE_100K_ITR; + break; + case low_latency: + new_itr = TXGBE_20K_ITR; + break; + case bulk_latency: + new_itr = TXGBE_12K_ITR; + break; + default: + break; + } + + if (new_itr != q_vector->itr) { + /* do an exponential smoothing */ + new_itr = (10 * new_itr * q_vector->itr) / + ((9 * new_itr) + q_vector->itr); + + /* save the algorithm value here */ + q_vector->itr = new_itr; + + txgbe_write_eitr(q_vector); + } +} + +/** + * txgbe_check_overtemp_subtask - check for over temperature + * @adapter: pointer to adapter + **/ +static void txgbe_check_overtemp_subtask(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + u32 eicr = adapter->interrupt_event; + s32 temp_state; + + if (test_bit(__TXGBE_DOWN, &adapter->state)) + return; + if (!(adapter->flags2 & TXGBE_FLAG2_TEMP_SENSOR_CAPABLE)) + return; + if (!(adapter->flags2 & TXGBE_FLAG2_TEMP_SENSOR_EVENT)) + return; + + adapter->flags2 &= ~TXGBE_FLAG2_TEMP_SENSOR_EVENT; + + /* + * Since the warning interrupt is for both ports + * we don't have to check if: + * - This interrupt wasn't for our port. 
+ * - We may have missed the interrupt so always have to + * check if we got a LSC + */ + if (!(eicr & TXGBE_PX_MISC_IC_OVER_HEAT)) + return; + + temp_state = TCALL(hw, phy.ops.check_overtemp); + if (!temp_state || temp_state == TXGBE_NOT_IMPLEMENTED) + return; + + if (temp_state == TXGBE_ERR_UNDERTEMP && + test_bit(__TXGBE_HANGING, &adapter->state)) { + e_crit(drv, "%s\n", txgbe_underheat_msg); + wr32m(&adapter->hw, TXGBE_RDB_PB_CTL, + TXGBE_RDB_PB_CTL_RXEN, TXGBE_RDB_PB_CTL_RXEN); + netif_carrier_on(adapter->netdev); + + clear_bit(__TXGBE_HANGING, &adapter->state); + } else if (temp_state == TXGBE_ERR_OVERTEMP && + !test_and_set_bit(__TXGBE_HANGING, &adapter->state)) { + e_crit(drv, "%s\n", txgbe_overheat_msg); + netif_carrier_off(adapter->netdev); + + wr32m(&adapter->hw, TXGBE_RDB_PB_CTL, + TXGBE_RDB_PB_CTL_RXEN, 0); + } + + adapter->interrupt_event = 0; +} + +static void txgbe_check_overtemp_event(struct txgbe_adapter *adapter, u32 eicr) +{ + if (!(adapter->flags2 & TXGBE_FLAG2_TEMP_SENSOR_CAPABLE)) + return; + + if (!(eicr & TXGBE_PX_MISC_IC_OVER_HEAT)) + return; + + if (!test_bit(__TXGBE_DOWN, &adapter->state)) { + adapter->interrupt_event = eicr; + adapter->flags2 |= TXGBE_FLAG2_TEMP_SENSOR_EVENT; + txgbe_service_event_schedule(adapter); + } +} + +static void txgbe_check_sfp_event(struct txgbe_adapter *adapter, u32 eicr) +{ + struct txgbe_hw *hw = &adapter->hw; + u32 eicr_mask = TXGBE_PX_MISC_IC_GPIO; + u32 reg; + + if (eicr & eicr_mask) { + if (!test_bit(__TXGBE_DOWN, &adapter->state)) { + wr32(hw, TXGBE_GPIO_INTMASK, 0xFF); + reg = rd32(hw, TXGBE_GPIO_INTSTATUS); + if (reg & TXGBE_GPIO_INTSTATUS_2) { + adapter->flags2 |= TXGBE_FLAG2_SFP_NEEDS_RESET; + wr32(hw, TXGBE_GPIO_EOI, + TXGBE_GPIO_EOI_2); + adapter->sfp_poll_time = 0; + txgbe_service_event_schedule(adapter); + } + if (reg & TXGBE_GPIO_INTSTATUS_3) { + adapter->flags |= TXGBE_FLAG_NEED_LINK_CONFIG; + wr32(hw, TXGBE_GPIO_EOI, + TXGBE_GPIO_EOI_3); + txgbe_service_event_schedule(adapter); + } + + if 
(reg & TXGBE_GPIO_INTSTATUS_6) { + wr32(hw, TXGBE_GPIO_EOI, + TXGBE_GPIO_EOI_6); + adapter->flags |= + TXGBE_FLAG_NEED_LINK_CONFIG; + txgbe_service_event_schedule(adapter); + } + wr32(hw, TXGBE_GPIO_INTMASK, 0x0); + } + } +} + +static void txgbe_check_lsc(struct txgbe_adapter *adapter) +{ + adapter->lsc_int++; + adapter->flags |= TXGBE_FLAG_NEED_LINK_UPDATE; + adapter->link_check_timeout = jiffies; + if (!test_bit(__TXGBE_DOWN, &adapter->state)) { + txgbe_service_event_schedule(adapter); + } +} + +/** + * txgbe_irq_enable - Enable default interrupt generation settings + * @adapter: board private structure + **/ +void txgbe_irq_enable(struct txgbe_adapter *adapter, bool queues, bool flush) +{ + u32 mask = 0; + struct txgbe_hw *hw = &adapter->hw; + u8 device_type = hw->subsystem_id & 0xF0; + + /* enable gpio interrupt */ + if (device_type != TXGBE_ID_MAC_XAUI && + device_type != TXGBE_ID_MAC_SGMII) { + mask |= TXGBE_GPIO_INTEN_2; + mask |= TXGBE_GPIO_INTEN_3; + mask |= TXGBE_GPIO_INTEN_6; + } + wr32(&adapter->hw, TXGBE_GPIO_INTEN, mask); + + if (device_type != TXGBE_ID_MAC_XAUI && + device_type != TXGBE_ID_MAC_SGMII) { + mask = TXGBE_GPIO_INTTYPE_LEVEL_2 | TXGBE_GPIO_INTTYPE_LEVEL_3 | + TXGBE_GPIO_INTTYPE_LEVEL_6; + } + wr32(&adapter->hw, TXGBE_GPIO_INTTYPE_LEVEL, mask); + + /* enable misc interrupt */ + mask = TXGBE_PX_MISC_IEN_MASK; + + if (adapter->flags2 & TXGBE_FLAG2_TEMP_SENSOR_CAPABLE) + mask |= TXGBE_PX_MISC_IEN_OVER_HEAT; + + if ((adapter->flags & TXGBE_FLAG_FDIR_HASH_CAPABLE) && + !(adapter->flags2 & TXGBE_FLAG2_FDIR_REQUIRES_REINIT)) + mask |= TXGBE_PX_MISC_IEN_FLOW_DIR; + + mask |= TXGBE_PX_MISC_IEN_TIMESYNC; + + wr32(&adapter->hw, TXGBE_PX_MISC_IEN, mask); + + /* unmask interrupt */ + txgbe_intr_enable(&adapter->hw, TXGBE_INTR_MISC(adapter)); + if (queues) + txgbe_intr_enable(&adapter->hw, TXGBE_INTR_QALL(adapter)); + + /* flush configuration */ + if (flush) + TXGBE_WRITE_FLUSH(&adapter->hw); +} + +static irqreturn_t txgbe_msix_other(int __always_unused 
irq, void *data) +{ + struct txgbe_adapter *adapter = data; + struct txgbe_hw *hw = &adapter->hw; + u32 eicr; + u32 ecc; + u32 value = 0; + u16 pci_val = 0; + + eicr = txgbe_misc_isb(adapter, TXGBE_ISB_MISC); + + if (BOND_CHECK_LINK_MODE == 1) { + if (eicr & (TXGBE_PX_MISC_IC_ETH_LKDN)) { + value = rd32(hw, 0x14404); + value = value & 0x1; + if (value == 0) { + adapter->link_up = false; + adapter->flags2 |= TXGBE_FLAG2_LINK_DOWN; + txgbe_service_event_schedule(adapter); + } + } + } else { + if (eicr & (TXGBE_PX_MISC_IC_ETH_LK | TXGBE_PX_MISC_IC_ETH_LKDN)) + txgbe_check_lsc(adapter); + } + if (eicr & TXGBE_PX_MISC_IC_ETH_AN) { + if (adapter->backplane_an == 1 && (KR_POLLING == 0)) { + value = txgbe_rd32_epcs(hw, 0x78002); + value = value & 0x4; + if (value == 0x4) { + txgbe_kr_intr_handle(adapter); + adapter->flags2 |= TXGBE_FLAG2_KR_TRAINING; + txgbe_service_event_schedule(adapter); + } + } + } + + if (eicr & TXGBE_PX_MISC_IC_PCIE_REQ_ERR) { + ERROR_REPORT1(TXGBE_ERROR_POLLING, + "lan id %d, PCIe request error found.\n", hw->bus.lan_id); + + pci_read_config_word(adapter->pdev, PCI_VENDOR_ID, &pci_val); + ERROR_REPORT1(TXGBE_ERROR_POLLING, "pci vendor id is 0x%x\n", pci_val); + + pci_read_config_word(adapter->pdev, PCI_COMMAND, &pci_val); + ERROR_REPORT1(TXGBE_ERROR_POLLING, "pci command reg is 0x%x.\n", pci_val); + + if (hw->bus.lan_id == 0) { + adapter->flags2 |= TXGBE_FLAG2_PCIE_NEED_RECOVER; + txgbe_service_event_schedule(adapter); + } else + wr32(&adapter->hw, TXGBE_MIS_PF_SM, 1); + } + + if (eicr & TXGBE_PX_MISC_IC_INT_ERR) { + e_info(link, "Received unrecoverable ECC Err, " + "initiating reset.\n"); + ecc = rd32(hw, TXGBE_MIS_ST); + if (((ecc & TXGBE_MIS_ST_LAN0_ECC) && (hw->bus.lan_id == 0)) || + ((ecc & TXGBE_MIS_ST_LAN1_ECC) && (hw->bus.lan_id == 1))) + adapter->flags2 |= TXGBE_FLAG2_PF_RESET_REQUESTED; + + txgbe_service_event_schedule(adapter); + } + if (eicr & TXGBE_PX_MISC_IC_DEV_RST) { + adapter->flags2 |= TXGBE_FLAG2_RESET_INTR_RECEIVED; +
txgbe_service_event_schedule(adapter); + } + if ((eicr & TXGBE_PX_MISC_IC_STALL) || + (eicr & TXGBE_PX_MISC_IC_ETH_EVENT)) { + adapter->flags2 |= TXGBE_FLAG2_PF_RESET_REQUESTED; + txgbe_service_event_schedule(adapter); + } + + /* Handle Flow Director Full threshold interrupt */ + if (eicr & TXGBE_PX_MISC_IC_FLOW_DIR) { + int reinit_count = 0; + int i; + for (i = 0; i < adapter->num_tx_queues; i++) { + struct txgbe_ring *ring = adapter->tx_ring[i]; + if (test_and_clear_bit(__TXGBE_TX_FDIR_INIT_DONE, + &ring->state)) + reinit_count++; + } + if (reinit_count) { + /* no more flow director interrupts until after init */ + wr32m(hw, TXGBE_PX_MISC_IEN, + TXGBE_PX_MISC_IEN_FLOW_DIR, 0); + adapter->flags2 |= + TXGBE_FLAG2_FDIR_REQUIRES_REINIT; + txgbe_service_event_schedule(adapter); + } + } + + txgbe_check_sfp_event(adapter, eicr); + txgbe_check_overtemp_event(adapter, eicr); + + if (unlikely(eicr & TXGBE_PX_MISC_IC_TIMESYNC)) + txgbe_ptp_check_pps_event(adapter); + + /* re-enable the original interrupt state, no lsc, no queues */ + if (!test_bit(__TXGBE_DOWN, &adapter->state)) + txgbe_irq_enable(adapter, false, false); + + return IRQ_HANDLED; +} + +static irqreturn_t txgbe_msix_clean_rings(int __always_unused irq, void *data) +{ + struct txgbe_q_vector *q_vector = data; + + /* EIAM disabled interrupts (on this vector) for us */ + + if (q_vector->rx.ring || q_vector->tx.ring) + napi_schedule_irqoff(&q_vector->napi); + + return IRQ_HANDLED; +} + +/** + * txgbe_poll - NAPI polling RX/TX cleanup routine + * @napi: napi struct with our devices info in it + * @budget: amount of work driver is allowed to do this pass, in packets + * + * This function will clean all queues associated with a q_vector. 
+ **/ +int txgbe_poll(struct napi_struct *napi, int budget) +{ + struct txgbe_q_vector *q_vector = + container_of(napi, struct txgbe_q_vector, napi); + struct txgbe_adapter *adapter = q_vector->adapter; + struct txgbe_ring *ring; + int per_ring_budget; + bool clean_complete = true; + + txgbe_for_each_ring(ring, q_vector->tx) { + if (!txgbe_clean_tx_irq(q_vector, ring)) + clean_complete = false; + } + + /* Exit if we are called by netpoll */ + if (budget <= 0) + return budget; + + /* attempt to distribute budget to each queue fairly, but don't allow + * the budget to go below 1 because we'll exit polling */ + if (q_vector->rx.count > 1) + per_ring_budget = max(budget/q_vector->rx.count, 1); + else + per_ring_budget = budget; + + txgbe_for_each_ring(ring, q_vector->rx) { + int cleaned = txgbe_clean_rx_irq(q_vector, ring, + per_ring_budget); + + if (cleaned >= per_ring_budget) + clean_complete = false; + } + + /* If all work not completed, return budget and keep polling */ + if (!clean_complete) + return budget; + + /* all work done, exit the polling mode */ + napi_complete(napi); + if (adapter->rx_itr_setting == 1) + txgbe_set_itr(q_vector); + if (!test_bit(__TXGBE_DOWN, &adapter->state)) + txgbe_intr_enable(&adapter->hw, + TXGBE_INTR_Q(q_vector->v_idx)); + + return 0; +} + +/** + * txgbe_request_msix_irqs - Initialize MSI-X interrupts + * @adapter: board private structure + * + * txgbe_request_msix_irqs allocates MSI-X vectors and requests + * interrupts from the kernel. 
+ **/ +static int txgbe_request_msix_irqs(struct txgbe_adapter *adapter) +{ + struct net_device *netdev = adapter->netdev; + int vector, err; + int ri = 0, ti = 0; + + for (vector = 0; vector < adapter->num_q_vectors; vector++) { + struct txgbe_q_vector *q_vector = adapter->q_vector[vector]; + struct msix_entry *entry = &adapter->msix_entries[vector]; + + if (q_vector->tx.ring && q_vector->rx.ring) { + snprintf(q_vector->name, sizeof(q_vector->name) - 1, + "%s-TxRx-%d", netdev->name, ri++); + ti++; + } else if (q_vector->rx.ring) { + snprintf(q_vector->name, sizeof(q_vector->name) - 1, + "%s-rx-%d", netdev->name, ri++); + } else if (q_vector->tx.ring) { + snprintf(q_vector->name, sizeof(q_vector->name) - 1, + "%s-tx-%d", netdev->name, ti++); + } else { + /* skip this unused q_vector */ + continue; + } + err = request_irq(entry->vector, &txgbe_msix_clean_rings, 0, + q_vector->name, q_vector); + if (err) { + e_err(probe, "request_irq failed for MSIX interrupt" + " '%s' Error: %d\n", q_vector->name, err); + goto free_queue_irqs; + } + + /* If Flow Director is enabled, set interrupt affinity */ + if (adapter->flags & TXGBE_FLAG_FDIR_HASH_CAPABLE) { + /* assign the mask for this irq */ + irq_set_affinity_hint(entry->vector, + &q_vector->affinity_mask); + } + } + + err = request_irq(adapter->msix_entries[vector].vector, + txgbe_msix_other, 0, netdev->name, adapter); + if (err) { + e_err(probe, "request_irq for msix_other failed: %d\n", err); + goto free_queue_irqs; + } + + return 0; + +free_queue_irqs: + while (vector) { + vector--; + irq_set_affinity_hint(adapter->msix_entries[vector].vector, + NULL); + free_irq(adapter->msix_entries[vector].vector, + adapter->q_vector[vector]); + } + adapter->flags &= ~TXGBE_FLAG_MSIX_ENABLED; + pci_disable_msix(adapter->pdev); + kfree(adapter->msix_entries); + adapter->msix_entries = NULL; + return err; +} + +/** + * txgbe_intr - legacy mode Interrupt Handler + * @irq: interrupt number + * @data: pointer to a network interface device 
structure + **/ +static irqreturn_t txgbe_intr(int __always_unused irq, void *data) +{ + struct txgbe_adapter *adapter = data; + struct txgbe_q_vector *q_vector = adapter->q_vector[0]; + struct txgbe_hw *hw = &adapter->hw; + u32 eicr; + u32 eicr_misc; + u32 value; + + eicr = txgbe_misc_isb(adapter, TXGBE_ISB_VEC0); + if (!eicr) { + /* + * shared interrupt alert! + * re-enable the interrupt that we masked before the EICR read. + */ + if (!test_bit(__TXGBE_DOWN, &adapter->state)) + txgbe_irq_enable(adapter, true, true); + return IRQ_NONE; /* Not our interrupt */ + } + adapter->isb_mem[TXGBE_ISB_VEC0] = 0; + if (!(adapter->flags & TXGBE_FLAG_MSI_ENABLED)) + wr32(&(adapter->hw), TXGBE_PX_INTA, 1); + + eicr_misc = txgbe_misc_isb(adapter, TXGBE_ISB_MISC); + if (eicr_misc & (TXGBE_PX_MISC_IC_ETH_LK | TXGBE_PX_MISC_IC_ETH_LKDN)) + txgbe_check_lsc(adapter); + + if (eicr_misc & TXGBE_PX_MISC_IC_ETH_AN) { + if (adapter->backplane_an == 1 && (KR_POLLING == 0)) { + value = txgbe_rd32_epcs(hw, 0x78002); + value = value & 0x4; + if (value == 0x4) { + txgbe_kr_intr_handle(adapter); + adapter->flags2 |= TXGBE_FLAG2_KR_TRAINING; + txgbe_service_event_schedule(adapter); + } + } + } + + if (eicr_misc & TXGBE_PX_MISC_IC_INT_ERR) { + e_info(link, "Received unrecoverable ECC Err, " + "initiating reset.\n"); + adapter->flags2 |= TXGBE_FLAG2_GLOBAL_RESET_REQUESTED; + txgbe_service_event_schedule(adapter); + } + + if (eicr_misc & TXGBE_PX_MISC_IC_DEV_RST) { + adapter->flags2 |= TXGBE_FLAG2_RESET_INTR_RECEIVED; + txgbe_service_event_schedule(adapter); + } + txgbe_check_sfp_event(adapter, eicr_misc); + txgbe_check_overtemp_event(adapter, eicr_misc); + + if (unlikely(eicr_misc & TXGBE_PX_MISC_IC_TIMESYNC)) + txgbe_ptp_check_pps_event(adapter); + + adapter->isb_mem[TXGBE_ISB_MISC] = 0; + /* would disable interrupts here but it is auto disabled */ + napi_schedule_irqoff(&q_vector->napi); + + /* + * re-enable link (maybe) and non-queue interrupts, no flush.
+ * txgbe_poll will re-enable the queue interrupts + */ + if (!test_bit(__TXGBE_DOWN, &adapter->state)) + txgbe_irq_enable(adapter, false, false); + + return IRQ_HANDLED; +} + +/** + * txgbe_request_irq - initialize interrupts + * @adapter: board private structure + * + * Attempts to configure interrupts using the best available + * capabilities of the hardware and kernel. + **/ +static int txgbe_request_irq(struct txgbe_adapter *adapter) +{ + struct net_device *netdev = adapter->netdev; + int err; + + if (adapter->flags & TXGBE_FLAG_MSIX_ENABLED) + err = txgbe_request_msix_irqs(adapter); + else if (adapter->flags & TXGBE_FLAG_MSI_ENABLED) + err = request_irq(adapter->pdev->irq, &txgbe_intr, 0, + netdev->name, adapter); + else + err = request_irq(adapter->pdev->irq, &txgbe_intr, IRQF_SHARED, + netdev->name, adapter); + + if (err) + e_err(probe, "request_irq failed, Error %d\n", err); + + return err; +} + +static void txgbe_free_irq(struct txgbe_adapter *adapter) +{ + int vector; + + if (!(adapter->flags & TXGBE_FLAG_MSIX_ENABLED)) { + free_irq(adapter->pdev->irq, adapter); + return; + } + + for (vector = 0; vector < adapter->num_q_vectors; vector++) { + struct txgbe_q_vector *q_vector = adapter->q_vector[vector]; + struct msix_entry *entry = &adapter->msix_entries[vector]; + + /* free only the irqs that were actually requested */ + if (!q_vector->rx.ring && !q_vector->tx.ring) + continue; + + /* clear the affinity_mask in the IRQ descriptor */ + irq_set_affinity_hint(entry->vector, NULL); + free_irq(entry->vector, q_vector); + } + + free_irq(adapter->msix_entries[vector++].vector, adapter); +} + +/** + * txgbe_irq_disable - Mask off interrupt generation on the NIC + * @adapter: board private structure + **/ +void txgbe_irq_disable(struct txgbe_adapter *adapter) +{ + wr32(&adapter->hw, TXGBE_PX_MISC_IEN, 0); + txgbe_intr_disable(&adapter->hw, TXGBE_INTR_ALL); + + TXGBE_WRITE_FLUSH(&adapter->hw); + if (adapter->flags & TXGBE_FLAG_MSIX_ENABLED) { + int vector; + + for 
(vector = 0; vector < adapter->num_q_vectors; vector++) + synchronize_irq(adapter->msix_entries[vector].vector); + + synchronize_irq(adapter->msix_entries[vector++].vector); + } else { + synchronize_irq(adapter->pdev->irq); + } +} + +/** + * txgbe_configure_msi_and_legacy - Initialize PIN (INTA...) and MSI interrupts + * + **/ +static void txgbe_configure_msi_and_legacy(struct txgbe_adapter *adapter) +{ + struct txgbe_q_vector *q_vector = adapter->q_vector[0]; + struct txgbe_ring *ring; + + txgbe_write_eitr(q_vector); + + txgbe_for_each_ring(ring, q_vector->rx) + txgbe_set_ivar(adapter, 0, ring->reg_idx, 0); + + txgbe_for_each_ring(ring, q_vector->tx) + txgbe_set_ivar(adapter, 1, ring->reg_idx, 0); + + txgbe_set_ivar(adapter, -1, 0, 1); + + e_info(hw, "Legacy interrupt IVAR setup done\n"); +} + +/** + * txgbe_configure_tx_ring - Configure Tx ring after Reset + * @adapter: board private structure + * @ring: structure containing ring specific data + * + * Configure the Tx descriptor ring after a reset. 
+ **/ +void txgbe_configure_tx_ring(struct txgbe_adapter *adapter, + struct txgbe_ring *ring) +{ + struct txgbe_hw *hw = &adapter->hw; + u64 tdba = ring->dma; + int wait_loop = 10; + u32 txdctl = TXGBE_PX_TR_CFG_ENABLE; + u8 reg_idx = ring->reg_idx; + + /* disable queue to avoid issues while updating state */ + wr32(hw, TXGBE_PX_TR_CFG(reg_idx), TXGBE_PX_TR_CFG_SWFLSH); + TXGBE_WRITE_FLUSH(hw); + + wr32(hw, TXGBE_PX_TR_BAL(reg_idx), tdba & DMA_BIT_MASK(32)); + wr32(hw, TXGBE_PX_TR_BAH(reg_idx), tdba >> 32); + + /* reset head and tail pointers */ + wr32(hw, TXGBE_PX_TR_RP(reg_idx), 0); + wr32(hw, TXGBE_PX_TR_WP(reg_idx), 0); + ring->tail = adapter->io_addr + TXGBE_PX_TR_WP(reg_idx); + + /* reset ntu and ntc to place SW in sync with hardware */ + ring->next_to_clean = 0; + ring->next_to_use = 0; + + txdctl |= TXGBE_RING_SIZE(ring) << TXGBE_PX_TR_CFG_TR_SIZE_SHIFT; + + /* + * set WTHRESH to encourage burst writeback, it should not be set + * higher than 1 when: + * - ITR is 0 as it could cause false TX hangs + * - ITR is set to > 100k int/sec and BQL is enabled + * + * In order to avoid issues WTHRESH + PTHRESH should always be equal + * to or less than the number of on chip descriptors, which is + * currently 40.
+ */ + + txdctl |= 0x20 << TXGBE_PX_TR_CFG_WTHRESH_SHIFT; + + /* reinitialize flowdirector state */ + if (adapter->flags & TXGBE_FLAG_FDIR_HASH_CAPABLE) { + ring->atr_sample_rate = adapter->atr_sample_rate; + ring->atr_count = 0; + set_bit(__TXGBE_TX_FDIR_INIT_DONE, &ring->state); + } else { + ring->atr_sample_rate = 0; + } + + /* initialize XPS */ + if (!test_and_set_bit(__TXGBE_TX_XPS_INIT_DONE, &ring->state)) { + struct txgbe_q_vector *q_vector = ring->q_vector; + + if (q_vector) + netif_set_xps_queue(adapter->netdev, + &q_vector->affinity_mask, + ring->queue_index); + } + + clear_bit(__TXGBE_HANG_CHECK_ARMED, &ring->state); + + /* enable queue */ + wr32(hw, TXGBE_PX_TR_CFG(reg_idx), txdctl); + + + /* poll to verify queue is enabled */ + do { + msleep(1); + txdctl = rd32(hw, TXGBE_PX_TR_CFG(reg_idx)); + } while (--wait_loop && !(txdctl & TXGBE_PX_TR_CFG_ENABLE)); + if (!wait_loop) + e_err(drv, "Could not enable Tx Queue %d\n", reg_idx); +} + +/** + * txgbe_configure_tx - Configure Transmit Unit after Reset + * @adapter: board private structure + * + * Configure the Tx unit of the MAC after a reset. 
+ **/ +static void txgbe_configure_tx(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + u32 i; + + /* TDM_CTL.TE must be before Tx queues are enabled */ + wr32m(hw, TXGBE_TDM_CTL, + TXGBE_TDM_CTL_TE, TXGBE_TDM_CTL_TE); + + /* Setup the HW Tx Head and Tail descriptor pointers */ + for (i = 0; i < adapter->num_tx_queues; i++) + txgbe_configure_tx_ring(adapter, adapter->tx_ring[i]); + + wr32m(hw, TXGBE_TSC_BUF_AE, 0x3FF, 0x10); + /* enable mac transmitter */ + wr32m(hw, TXGBE_MAC_TX_CFG, + TXGBE_MAC_TX_CFG_TE, TXGBE_MAC_TX_CFG_TE); +} + +static void txgbe_enable_rx_drop(struct txgbe_adapter *adapter, + struct txgbe_ring *ring) +{ + struct txgbe_hw *hw = &adapter->hw; + u16 reg_idx = ring->reg_idx; + + u32 srrctl = rd32(hw, TXGBE_PX_RR_CFG(reg_idx)); + + srrctl |= TXGBE_PX_RR_CFG_DROP_EN; + + wr32(hw, TXGBE_PX_RR_CFG(reg_idx), srrctl); +} + +static void txgbe_disable_rx_drop(struct txgbe_adapter *adapter, + struct txgbe_ring *ring) +{ + struct txgbe_hw *hw = &adapter->hw; + u16 reg_idx = ring->reg_idx; + + u32 srrctl = rd32(hw, TXGBE_PX_RR_CFG(reg_idx)); + + srrctl &= ~TXGBE_PX_RR_CFG_DROP_EN; + + wr32(hw, TXGBE_PX_RR_CFG(reg_idx), srrctl); +} + +void txgbe_set_rx_drop_en(struct txgbe_adapter *adapter) +{ + int i; + + /* + * We should set the drop enable bit if: + * SR-IOV is enabled + * or + * Number of Rx queues > 1 and flow control is disabled + * + * This allows us to avoid head of line blocking for security + * and performance reasons. 
+ */ + if (adapter->num_vfs || (adapter->num_rx_queues > 1 && + !(adapter->hw.fc.current_mode & txgbe_fc_tx_pause))) { + for (i = 0; i < adapter->num_rx_queues; i++) + txgbe_enable_rx_drop(adapter, adapter->rx_ring[i]); + } else { + for (i = 0; i < adapter->num_rx_queues; i++) + txgbe_disable_rx_drop(adapter, adapter->rx_ring[i]); + } +} + +static void txgbe_configure_srrctl(struct txgbe_adapter *adapter, + struct txgbe_ring *rx_ring) +{ + struct txgbe_hw *hw = &adapter->hw; + u32 srrctl; + u16 reg_idx = rx_ring->reg_idx; + + srrctl = rd32m(hw, TXGBE_PX_RR_CFG(reg_idx), + ~(TXGBE_PX_RR_CFG_RR_HDR_SZ | + TXGBE_PX_RR_CFG_RR_BUF_SZ | + TXGBE_PX_RR_CFG_SPLIT_MODE)); + /* configure header buffer length, needed for RSC */ + srrctl |= TXGBE_RX_HDR_SIZE << TXGBE_PX_RR_CFG_BSIZEHDRSIZE_SHIFT; + + /* configure the packet buffer length */ + srrctl |= txgbe_rx_bufsz(rx_ring) >> TXGBE_PX_RR_CFG_BSIZEPKT_SHIFT; + if (ring_is_hs_enabled(rx_ring)) + srrctl |= TXGBE_PX_RR_CFG_SPLIT_MODE; + + wr32(hw, TXGBE_PX_RR_CFG(reg_idx), srrctl); +} + +/** + * txgbe_rss_indir_tbl_entries - Return the number of entries in the RSS + * indirection table + * @adapter: device handle + **/ +u32 txgbe_rss_indir_tbl_entries(struct txgbe_adapter *adapter) +{ + return 128; +} + +/** + * txgbe_store_reta - Write the RETA table to HW + * @adapter: device handle + * + * Write the RSS redirection table stored in adapter.rss_indir_tbl[] to HW.
+ */ +void txgbe_store_reta(struct txgbe_adapter *adapter) +{ + u32 i, reta_entries = txgbe_rss_indir_tbl_entries(adapter); + struct txgbe_hw *hw = &adapter->hw; + u32 reta = 0; + u8 *indir_tbl = adapter->rss_indir_tbl; + + /* Fill out the redirection table as follows: + * - 8 bit wide entries containing 4 bit RSS index + */ + + /* Write redirection table to HW */ + for (i = 0; i < reta_entries; i++) { + reta |= indir_tbl[i] << (i & 0x3) * 8; + if ((i & 3) == 3) { + wr32(hw, TXGBE_RDB_RSSTBL(i >> 2), reta); + reta = 0; + } + } +} + +/** + * Write the RETA table to HW (for devices in SRIOV mode) + * + * @adapter: device handle + * + * Write the RSS redirection table stored in adapter.rss_indir_tbl[] to HW. + */ +//static void txgbe_store_vfreta(struct txgbe_adapter *adapter) + +static void txgbe_setup_reta(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + u32 i, j; + u32 reta_entries = txgbe_rss_indir_tbl_entries(adapter); + u16 rss_i = adapter->ring_feature[RING_F_RSS].indices; + + /* + * Program table for at least 4 queues w/ SR-IOV so that VFs can + * make full use of any rings they may have. We will use the + * PSRTYPE register to control how many rings we use within the PF. 
+ */ + if ((adapter->flags & TXGBE_FLAG_SRIOV_ENABLED) && (rss_i < 2)) + rss_i = 2; + + /* Fill out hash function seeds */ + for (i = 0; i < 10; i++) + wr32(hw, TXGBE_RDB_RSSRK(i), adapter->rss_key[i]); + + /* Fill out redirection table */ + memset(adapter->rss_indir_tbl, 0, sizeof(adapter->rss_indir_tbl)); + + for (i = 0, j = 0; i < reta_entries; i++, j++) { + if (j == rss_i) + j = 0; + + adapter->rss_indir_tbl[i] = j; + } + + txgbe_store_reta(adapter); +} + +static void txgbe_setup_mrqc(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + u32 rss_field = 0; + + /* VT, DCB and RSS do not coexist at the same time */ + if (adapter->flags & TXGBE_FLAG_VMDQ_ENABLED && + adapter->flags & TXGBE_FLAG_DCB_ENABLED) + return; + + /* Disable indicating checksum in descriptor, enables RSS hash */ + wr32m(hw, TXGBE_PSR_CTL, + TXGBE_PSR_CTL_PCSD, TXGBE_PSR_CTL_PCSD); + + /* Perform hash on these packet types */ + rss_field = TXGBE_RDB_RA_CTL_RSS_IPV4 | + TXGBE_RDB_RA_CTL_RSS_IPV4_TCP | + TXGBE_RDB_RA_CTL_RSS_IPV6 | + TXGBE_RDB_RA_CTL_RSS_IPV6_TCP; + + if (adapter->flags2 & TXGBE_FLAG2_RSS_FIELD_IPV4_UDP) + rss_field |= TXGBE_RDB_RA_CTL_RSS_IPV4_UDP; + if (adapter->flags2 & TXGBE_FLAG2_RSS_FIELD_IPV6_UDP) + rss_field |= TXGBE_RDB_RA_CTL_RSS_IPV6_UDP; + + netdev_rss_key_fill(adapter->rss_key, sizeof(adapter->rss_key)); + + if (adapter->flags & TXGBE_FLAG_SRIOV_ENABLED) { + /*wait to fix txgbe_setup_vfreta(adapter);*/ + txgbe_setup_reta(adapter); + } else { + txgbe_setup_reta(adapter); + } + + if (adapter->flags2 & TXGBE_FLAG2_RSS_ENABLED) + rss_field |= TXGBE_RDB_RA_CTL_RSS_EN; + + wr32(hw, TXGBE_RDB_RA_CTL, rss_field); +} + +/** + * txgbe_clear_rscctl - disable RSC for the indicated ring + * @adapter: address of board private structure + * @ring: structure containing ring specific data + **/ +void txgbe_clear_rscctl(struct txgbe_adapter *adapter, + struct txgbe_ring *ring) +{ + struct txgbe_hw *hw = &adapter->hw; + u8 reg_idx = ring->reg_idx; + + wr32m(hw, 
TXGBE_PX_RR_CFG(reg_idx), + TXGBE_PX_RR_CFG_RSC, 0); + + clear_ring_rsc_enabled(ring); +} + +/** + * txgbe_configure_rscctl - enable RSC for the indicated ring + * @adapter: address of board private structure + * @ring: structure containing ring specific data + **/ +void txgbe_configure_rscctl(struct txgbe_adapter *adapter, + struct txgbe_ring *ring) +{ + struct txgbe_hw *hw = &adapter->hw; + u32 rscctrl; + u8 reg_idx = ring->reg_idx; + + if (!ring_is_rsc_enabled(ring)) + return; + + rscctrl = rd32(hw, TXGBE_PX_RR_CFG(reg_idx)); + rscctrl |= TXGBE_PX_RR_CFG_RSC; + /* + * we must limit the number of descriptors so that the + * total size of max desc * buf_len is not greater + * than 65536 + */ +#if (MAX_SKB_FRAGS >= 16) + rscctrl |= TXGBE_PX_RR_CFG_MAX_RSCBUF_16; +#elif (MAX_SKB_FRAGS >= 8) + rscctrl |= TXGBE_PX_RR_CFG_MAX_RSCBUF_8; +#elif (MAX_SKB_FRAGS >= 4) + rscctrl |= TXGBE_PX_RR_CFG_MAX_RSCBUF_4; +#else + rscctrl |= TXGBE_PX_RR_CFG_MAX_RSCBUF_1; +#endif + wr32(hw, TXGBE_PX_RR_CFG(reg_idx), rscctrl); +} + +static void txgbe_rx_desc_queue_enable(struct txgbe_adapter *adapter, + struct txgbe_ring *ring) +{ + struct txgbe_hw *hw = &adapter->hw; + int wait_loop = TXGBE_MAX_RX_DESC_POLL; + u32 rxdctl; + u8 reg_idx = ring->reg_idx; + + if (TXGBE_REMOVED(hw->hw_addr)) + return; + + do { + msleep(1); + rxdctl = rd32(hw, TXGBE_PX_RR_CFG(reg_idx)); + } while (--wait_loop && !(rxdctl & TXGBE_PX_RR_CFG_RR_EN)); + + if (!wait_loop) { + e_err(drv, "RXDCTL.ENABLE on Rx queue %d " + "not set within the polling period\n", reg_idx); + } +} + +/* disable the specified tx ring/queue */ +void txgbe_disable_tx_queue(struct txgbe_adapter *adapter, + struct txgbe_ring *ring) +{ + struct txgbe_hw *hw = &adapter->hw; + int wait_loop = TXGBE_MAX_RX_DESC_POLL; + u32 rxdctl, reg_offset, enable_mask; + u8 reg_idx = ring->reg_idx; + + if (TXGBE_REMOVED(hw->hw_addr)) + return; + + reg_offset = TXGBE_PX_TR_CFG(reg_idx); + enable_mask = TXGBE_PX_TR_CFG_ENABLE; + + /* write value back with 
TDCFG.ENABLE bit cleared */ + wr32m(hw, reg_offset, enable_mask, 0); + + /* the hardware may take up to 100us to really disable the tx queue */ + do { + udelay(10); + rxdctl = rd32(hw, reg_offset); + } while (--wait_loop && (rxdctl & enable_mask)); + + if (!wait_loop) { + e_err(drv, "TDCFG.ENABLE on Tx queue %d not cleared within " + "the polling period\n", reg_idx); + } +} + +/* disable the specified rx ring/queue */ +void txgbe_disable_rx_queue(struct txgbe_adapter *adapter, + struct txgbe_ring *ring) +{ + struct txgbe_hw *hw = &adapter->hw; + int wait_loop = TXGBE_MAX_RX_DESC_POLL; + u32 rxdctl; + u8 reg_idx = ring->reg_idx; + + if (TXGBE_REMOVED(hw->hw_addr)) + return; + + /* write value back with RXDCTL.ENABLE bit cleared */ + wr32m(hw, TXGBE_PX_RR_CFG(reg_idx), + TXGBE_PX_RR_CFG_RR_EN, 0); + + /* the hardware may take up to 100us to really disable the rx queue */ + do { + udelay(10); + rxdctl = rd32(hw, TXGBE_PX_RR_CFG(reg_idx)); + } while (--wait_loop && (rxdctl & TXGBE_PX_RR_CFG_RR_EN)); + + if (!wait_loop) { + e_err(drv, "RXDCTL.ENABLE on Rx queue %d not cleared within " + "the polling period\n", reg_idx); + } +} + +void txgbe_configure_rx_ring(struct txgbe_adapter *adapter, + struct txgbe_ring *ring) +{ + struct txgbe_hw *hw = &adapter->hw; + u64 rdba = ring->dma; + u32 rxdctl; + u16 reg_idx = ring->reg_idx; + + /* disable queue to avoid issues while updating state */ + rxdctl = rd32(hw, TXGBE_PX_RR_CFG(reg_idx)); + txgbe_disable_rx_queue(adapter, ring); + + wr32(hw, TXGBE_PX_RR_BAL(reg_idx), rdba & DMA_BIT_MASK(32)); + wr32(hw, TXGBE_PX_RR_BAH(reg_idx), rdba >> 32); + + if (ring->count == TXGBE_MAX_RXD) + rxdctl |= 0 << TXGBE_PX_RR_CFG_RR_SIZE_SHIFT; + else + rxdctl |= (ring->count / 128) << TXGBE_PX_RR_CFG_RR_SIZE_SHIFT; + + rxdctl |= 0x1 << TXGBE_PX_RR_CFG_RR_THER_SHIFT; + wr32(hw, TXGBE_PX_RR_CFG(reg_idx), rxdctl); + + /* reset head and tail pointers */ + wr32(hw, TXGBE_PX_RR_RP(reg_idx), 0); + wr32(hw, TXGBE_PX_RR_WP(reg_idx), 0); + ring->tail = 
adapter->io_addr + TXGBE_PX_RR_WP(reg_idx); + + /* reset ntu and ntc to place SW in sync with hardware */ + ring->next_to_clean = 0; + ring->next_to_use = 0; + ring->next_to_alloc = 0; + + txgbe_configure_srrctl(adapter, ring); + /* In ESX, RSCCTL configuration is done on demand */ + txgbe_configure_rscctl(adapter, ring); + + /* enable receive descriptor ring */ + wr32m(hw, TXGBE_PX_RR_CFG(reg_idx), + TXGBE_PX_RR_CFG_RR_EN, TXGBE_PX_RR_CFG_RR_EN); + + txgbe_rx_desc_queue_enable(adapter, ring); + txgbe_alloc_rx_buffers(ring, txgbe_desc_unused(ring)); +} + +static void txgbe_setup_psrtype(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + int rss_i = adapter->ring_feature[RING_F_RSS].indices; + int pool; + + /* PSRTYPE must be initialized in adapters */ + u32 psrtype = TXGBE_RDB_PL_CFG_L4HDR | + TXGBE_RDB_PL_CFG_L3HDR | + TXGBE_RDB_PL_CFG_L2HDR | + TXGBE_RDB_PL_CFG_TUN_OUTER_L2HDR | + TXGBE_RDB_PL_CFG_TUN_TUNHDR; + + if (rss_i > 3) + psrtype |= 2 << 29; + else if (rss_i > 1) + psrtype |= 1 << 29; + + for_each_set_bit(pool, &adapter->fwd_bitmask, TXGBE_MAX_MACVLANS) + wr32(hw, TXGBE_RDB_PL_CFG(VMDQ_P(pool)), psrtype); +} + +/** + * txgbe_configure_bridge_mode - common settings for configuring bridge mode + * @adapter: the private structure + * + * This function's purpose is to remove code duplication and configure some + * settings required to switch bridge modes. + **/ +static void txgbe_configure_bridge_mode(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + unsigned int p; + + if (adapter->flags & TXGBE_FLAG_SRIOV_VEPA_BRIDGE_MODE) { + /* disable Tx loopback, rely on switch hairpin mode */ + wr32m(hw, TXGBE_PSR_CTL, + TXGBE_PSR_CTL_SW_EN, 0); + + /* enable Rx source address pruning. Note, this requires + * replication to be enabled or else it does nothing.
+ */ + for (p = 0; p < adapter->num_vfs; p++) { + TCALL(hw, mac.ops.set_source_address_pruning, true, p); + } + + for_each_set_bit(p, &adapter->fwd_bitmask, TXGBE_MAX_MACVLANS) { + TCALL(hw, mac.ops.set_source_address_pruning, true, VMDQ_P(p)); + } + } else { + /* enable Tx loopback for internal VF/PF communication */ + wr32m(hw, TXGBE_PSR_CTL, + TXGBE_PSR_CTL_SW_EN, TXGBE_PSR_CTL_SW_EN); + + /* disable Rx source address pruning, since we don't expect to + * be receiving external loopback of our transmitted frames. + */ + for (p = 0; p < adapter->num_vfs; p++) { + TCALL(hw, mac.ops.set_source_address_pruning, false, p); + } + + for_each_set_bit(p, &adapter->fwd_bitmask, TXGBE_MAX_MACVLANS) { + TCALL(hw, mac.ops.set_source_address_pruning, false, VMDQ_P(p)); + } + } +} + +static void txgbe_configure_virtualization(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + u32 reg_offset, vf_shift; + u32 i; + + if (!(adapter->flags & TXGBE_FLAG_VMDQ_ENABLED)) + return; + + wr32m(hw, TXGBE_PSR_VM_CTL, + TXGBE_PSR_VM_CTL_POOL_MASK | + TXGBE_PSR_VM_CTL_REPLEN, + VMDQ_P(0) << TXGBE_PSR_VM_CTL_POOL_SHIFT | + TXGBE_PSR_VM_CTL_REPLEN); + + for_each_set_bit(i, &adapter->fwd_bitmask, TXGBE_MAX_MACVLANS) { + /* accept untagged packets until a vlan tag is + * specifically set for the VMDQ queue/pool + */ + wr32m(hw, TXGBE_PSR_VM_L2CTL(i), + TXGBE_PSR_VM_L2CTL_AUPE, TXGBE_PSR_VM_L2CTL_AUPE); + } + + vf_shift = VMDQ_P(0) % 32; + reg_offset = (VMDQ_P(0) >= 32) ? 
1 : 0; + + /* Enable only the PF pools for Tx/Rx */ + wr32(hw, TXGBE_RDM_VF_RE(reg_offset), (~0) << vf_shift); + wr32(hw, TXGBE_RDM_VF_RE(reg_offset ^ 1), reg_offset - 1); + wr32(hw, TXGBE_TDM_VF_TE(reg_offset), (~0) << vf_shift); + wr32(hw, TXGBE_TDM_VF_TE(reg_offset ^ 1), reg_offset - 1); + + if (!(adapter->flags & TXGBE_FLAG_SRIOV_ENABLED)) + return; + + /* configure default bridge settings */ + txgbe_configure_bridge_mode(adapter); + + /* Ensure LLDP and FC are set for Ethertype Antispoofing if we will be + * calling set_ethertype_anti_spoofing for each VF in the loop below. + */ + if (hw->mac.ops.set_ethertype_anti_spoofing) { + wr32(hw, + TXGBE_PSR_ETYPE_SWC(TXGBE_PSR_ETYPE_SWC_FILTER_LLDP), + (TXGBE_PSR_ETYPE_SWC_FILTER_EN | /* enable filter */ + TXGBE_PSR_ETYPE_SWC_TX_ANTISPOOF | + TXGBE_ETH_P_LLDP)); /* LLDP eth protocol type */ + + wr32(hw, + TXGBE_PSR_ETYPE_SWC(TXGBE_PSR_ETYPE_SWC_FILTER_FC), + (TXGBE_PSR_ETYPE_SWC_FILTER_EN | + TXGBE_PSR_ETYPE_SWC_TX_ANTISPOOF | + ETH_P_PAUSE)); + } + + for (i = 0; i < adapter->num_vfs; i++) { + /* enable ethertype anti spoofing if hw supports it */ + TCALL(hw, mac.ops.set_ethertype_anti_spoofing, true, i); + } +} + +static void txgbe_set_rx_buffer_len(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + struct net_device *netdev = adapter->netdev; + u32 max_frame = netdev->mtu + ETH_HLEN + ETH_FCS_LEN; + struct txgbe_ring *rx_ring; + int i; + u32 mhadd; + + /* adjust max frame to be at least the size of a standard frame */ + if (max_frame < (ETH_FRAME_LEN + ETH_FCS_LEN)) + max_frame = (ETH_FRAME_LEN + ETH_FCS_LEN); + + mhadd = rd32(hw, TXGBE_PSR_MAX_SZ); + if (max_frame != mhadd) { + wr32(hw, TXGBE_PSR_MAX_SZ, max_frame); + } + + /* + * Setup the HW Rx Head and Tail Descriptor Pointers and + * the Base and Length of the Rx Descriptor Ring + */ + for (i = 0; i < adapter->num_rx_queues; i++) { + rx_ring = adapter->rx_ring[i]; + + if (adapter->flags & TXGBE_FLAG_RX_HS_ENABLED) { + rx_ring->rx_buf_len =
TXGBE_RX_HDR_SIZE; + set_ring_hs_enabled(rx_ring); + } else + clear_ring_hs_enabled(rx_ring); + + if (adapter->flags2 & TXGBE_FLAG2_RSC_ENABLED) + set_ring_rsc_enabled(rx_ring); + else + clear_ring_rsc_enabled(rx_ring); + } +} + +/** + * txgbe_configure_rx - Configure Receive Unit after Reset + * @adapter: board private structure + * + * Configure the Rx unit of the MAC after a reset. + **/ +static void txgbe_configure_rx(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + int i; + u32 rxctrl, psrctl; + + /* disable receives while setting up the descriptors */ + TCALL(hw, mac.ops.disable_rx); + + txgbe_setup_psrtype(adapter); + + /* enable hw crc stripping */ + wr32m(hw, TXGBE_RSC_CTL, + TXGBE_RSC_CTL_CRC_STRIP, TXGBE_RSC_CTL_CRC_STRIP); + + /* RSC Setup */ + psrctl = rd32m(hw, TXGBE_PSR_CTL, ~TXGBE_PSR_CTL_RSC_DIS); + psrctl |= TXGBE_PSR_CTL_RSC_ACK; /* Disable RSC for ACK packets */ + if (!(adapter->flags2 & TXGBE_FLAG2_RSC_ENABLED)) + psrctl |= TXGBE_PSR_CTL_RSC_DIS; + wr32(hw, TXGBE_PSR_CTL, psrctl); + + /* Program registers for the distribution of queues */ + txgbe_setup_mrqc(adapter); + + /* set_rx_buffer_len must be called before ring initialization */ + txgbe_set_rx_buffer_len(adapter); + + /* + * Setup the HW Rx Head and Tail Descriptor Pointers and + * the Base and Length of the Rx Descriptor Ring + */ + for (i = 0; i < adapter->num_rx_queues; i++) + txgbe_configure_rx_ring(adapter, adapter->rx_ring[i]); + + rxctrl = rd32(hw, TXGBE_RDB_PB_CTL); + + /* enable all receives */ + rxctrl |= TXGBE_RDB_PB_CTL_RXEN; + TCALL(hw, mac.ops.enable_rx_dma, rxctrl); +} + +static int txgbe_vlan_rx_add_vid(struct net_device *netdev, + __always_unused __be16 proto, u16 vid) +{ + struct txgbe_adapter *adapter = netdev_priv(netdev); + struct txgbe_hw *hw = &adapter->hw; + int pool_ndx = VMDQ_P(0); + + /* add VID to filter table */ + if (hw->mac.ops.set_vfta) { + if (vid < VLAN_N_VID) + set_bit(vid, adapter->active_vlans); + TCALL(hw, mac.ops.set_vfta, 
vid, pool_ndx, true); + if (adapter->flags & TXGBE_FLAG_VMDQ_ENABLED) { + int i; + /* enable vlan id for all pools */ + for_each_set_bit(i, &adapter->fwd_bitmask, + TXGBE_MAX_MACVLANS) + TCALL(hw, mac.ops.set_vfta, vid, + VMDQ_P(i), true); + } + } + + return 0; +} + +static int txgbe_vlan_rx_kill_vid(struct net_device *netdev, + __always_unused __be16 proto, u16 vid) +{ + struct txgbe_adapter *adapter = netdev_priv(netdev); + struct txgbe_hw *hw = &adapter->hw; + int pool_ndx = VMDQ_P(0); + + /* User is not allowed to remove vlan ID 0 */ + if (!vid) + return 0; + + /* remove VID from filter table */ + if (hw->mac.ops.set_vfta) { + TCALL(hw, mac.ops.set_vfta, vid, pool_ndx, false); + if (adapter->flags & TXGBE_FLAG_VMDQ_ENABLED) { + int i; + /* remove vlan id from all pools */ + for_each_set_bit(i, &adapter->fwd_bitmask, + TXGBE_MAX_MACVLANS) + TCALL(hw, mac.ops.set_vfta, vid, + VMDQ_P(i), false); + } + } + + clear_bit(vid, adapter->active_vlans); + + return 0; +} + +#ifdef HAVE_8021P_SUPPORT +/** + * txgbe_vlan_strip_disable - helper to disable vlan tag stripping + * @adapter: driver data + */ +void txgbe_vlan_strip_disable(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + int i, j; + + /* leave vlan tag stripping enabled for DCB */ + if (adapter->flags & TXGBE_FLAG_DCB_ENABLED) + return; + + for (i = 0; i < adapter->num_rx_queues; i++) { + struct txgbe_ring *ring = adapter->rx_ring[i]; + if (ring->accel) + continue; + j = ring->reg_idx; + wr32m(hw, TXGBE_PX_RR_CFG(j), + TXGBE_PX_RR_CFG_VLAN, 0); + } +} + +#endif +/** + * txgbe_vlan_strip_enable - helper to enable vlan tag stripping + * @adapter: driver data + */ +void txgbe_vlan_strip_enable(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + int i, j; + + for (i = 0; i < adapter->num_rx_queues; i++) { + struct txgbe_ring *ring = adapter->rx_ring[i]; + if (ring->accel) + continue; + j = ring->reg_idx; + wr32m(hw, TXGBE_PX_RR_CFG(j), + TXGBE_PX_RR_CFG_VLAN, 
TXGBE_PX_RR_CFG_VLAN); + } +} + +void txgbe_vlan_mode(struct net_device *netdev, u32 features) +{ +#if defined(HAVE_8021P_SUPPORT) + struct txgbe_adapter *adapter = netdev_priv(netdev); +#endif +#ifdef HAVE_8021P_SUPPORT + bool enable; +#endif + +#ifdef HAVE_8021P_SUPPORT + enable = !!(features & (NETIF_F_HW_VLAN_CTAG_RX)); + + if (enable) + /* enable VLAN tag insert/strip */ + txgbe_vlan_strip_enable(adapter); + else + /* disable VLAN tag insert/strip */ + txgbe_vlan_strip_disable(adapter); + +#endif /* HAVE_8021P_SUPPORT */ +} + +static void txgbe_restore_vlan(struct txgbe_adapter *adapter) +{ + struct net_device *netdev = adapter->netdev; + u16 vid; + + txgbe_vlan_mode(netdev, netdev->features); + + for_each_set_bit(vid, adapter->active_vlans, VLAN_N_VID) + txgbe_vlan_rx_add_vid(netdev, htons(ETH_P_8021Q), vid); +} + +static u8 *txgbe_addr_list_itr(struct txgbe_hw __maybe_unused *hw, + u8 **mc_addr_ptr, u32 *vmdq) +{ + struct netdev_hw_addr *mc_ptr; + u8 *addr = *mc_addr_ptr; + + /* VMDQ_P implicitly uses the adapter struct when CONFIG_PCI_IOV is + * defined, so we have to wrap the pointer above correctly to prevent + * a warning. + */ + *vmdq = VMDQ_P(0); + + mc_ptr = container_of(addr, struct netdev_hw_addr, addr[0]); + if (mc_ptr->list.next) { + struct netdev_hw_addr *ha; + ha = list_entry(mc_ptr->list.next, struct netdev_hw_addr, list); + *mc_addr_ptr = ha->addr; + } else + *mc_addr_ptr = NULL; + + return addr; +} + +/** + * txgbe_write_mc_addr_list - write multicast addresses to MTA + * @netdev: network interface device structure + * + * Writes multicast address list to the MTA hash table.
+ * Returns: -ENOMEM on failure + * 0 on no addresses written + * X on writing X addresses to MTA + **/ +int txgbe_write_mc_addr_list(struct net_device *netdev) +{ + struct txgbe_adapter *adapter = netdev_priv(netdev); + struct txgbe_hw *hw = &adapter->hw; + struct netdev_hw_addr *ha; + u8 *addr_list = NULL; + int addr_count = 0; + + if (!hw->mac.ops.update_mc_addr_list) + return -ENOMEM; + + if (!netif_running(netdev)) + return 0; + + + if (netdev_mc_empty(netdev)) { + TCALL(hw, mac.ops.update_mc_addr_list, NULL, 0, + txgbe_addr_list_itr, true); + } else { + ha = list_first_entry(&netdev->mc.list, + struct netdev_hw_addr, list); + addr_list = ha->addr; + addr_count = netdev_mc_count(netdev); + + TCALL(hw, mac.ops.update_mc_addr_list, addr_list, addr_count, + txgbe_addr_list_itr, true); + } + + return addr_count; +} + + +void txgbe_full_sync_mac_table(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + int i; + for (i = 0; i < hw->mac.num_rar_entries; i++) { + if (adapter->mac_table[i].state & TXGBE_MAC_STATE_IN_USE) { + TCALL(hw, mac.ops.set_rar, i, + adapter->mac_table[i].addr, + adapter->mac_table[i].pools, + TXGBE_PSR_MAC_SWC_AD_H_AV); + } else { + TCALL(hw, mac.ops.clear_rar, i); + } + adapter->mac_table[i].state &= ~(TXGBE_MAC_STATE_MODIFIED); + } +} + +static void txgbe_sync_mac_table(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + int i; + for (i = 0; i < hw->mac.num_rar_entries; i++) { + if (adapter->mac_table[i].state & TXGBE_MAC_STATE_MODIFIED) { + if (adapter->mac_table[i].state & + TXGBE_MAC_STATE_IN_USE) { + TCALL(hw, mac.ops.set_rar, i, + adapter->mac_table[i].addr, + adapter->mac_table[i].pools, + TXGBE_PSR_MAC_SWC_AD_H_AV); + } else { + TCALL(hw, mac.ops.clear_rar, i); + } + adapter->mac_table[i].state &= + ~(TXGBE_MAC_STATE_MODIFIED); + } + } +} + +int txgbe_available_rars(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + u32 i, count = 0; + + for (i = 0; i < 
hw->mac.num_rar_entries; i++) { + if (adapter->mac_table[i].state == 0) + count++; + } + return count; +} + +/* this function destroys the first RAR entry */ +static void txgbe_mac_set_default_filter(struct txgbe_adapter *adapter, + u8 *addr) +{ + struct txgbe_hw *hw = &adapter->hw; + + memcpy(&adapter->mac_table[0].addr, addr, ETH_ALEN); + adapter->mac_table[0].pools = 1ULL << VMDQ_P(0); + adapter->mac_table[0].state = (TXGBE_MAC_STATE_DEFAULT | + TXGBE_MAC_STATE_IN_USE); + TCALL(hw, mac.ops.set_rar, 0, adapter->mac_table[0].addr, + adapter->mac_table[0].pools, + TXGBE_PSR_MAC_SWC_AD_H_AV); +} + +int txgbe_add_mac_filter(struct txgbe_adapter *adapter, u8 *addr, u16 pool) +{ + struct txgbe_hw *hw = &adapter->hw; + u32 i; + + if (is_zero_ether_addr(addr)) + return -EINVAL; + + for (i = 0; i < hw->mac.num_rar_entries; i++) { + if (adapter->mac_table[i].state & TXGBE_MAC_STATE_IN_USE) { + continue; + } + adapter->mac_table[i].state |= (TXGBE_MAC_STATE_MODIFIED | + TXGBE_MAC_STATE_IN_USE); + memcpy(adapter->mac_table[i].addr, addr, ETH_ALEN); + adapter->mac_table[i].pools = (1ULL << pool); + txgbe_sync_mac_table(adapter); + return i; + } + return -ENOMEM; +} + +static void txgbe_flush_sw_mac_table(struct txgbe_adapter *adapter) +{ + u32 i; + struct txgbe_hw *hw = &adapter->hw; + + for (i = 0; i < hw->mac.num_rar_entries; i++) { + adapter->mac_table[i].state |= TXGBE_MAC_STATE_MODIFIED; + adapter->mac_table[i].state &= ~TXGBE_MAC_STATE_IN_USE; + memset(adapter->mac_table[i].addr, 0, ETH_ALEN); + adapter->mac_table[i].pools = 0; + } + txgbe_sync_mac_table(adapter); +} + +int txgbe_del_mac_filter(struct txgbe_adapter *adapter, u8 *addr, u16 pool) +{ + /* search table for addr, if found, set to 0 and sync */ + u32 i; + struct txgbe_hw *hw = &adapter->hw; + + if (is_zero_ether_addr(addr)) + return -EINVAL; + + for (i = 0; i < hw->mac.num_rar_entries; i++) { + if (ether_addr_equal(addr, adapter->mac_table[i].addr) && + (adapter->mac_table[i].pools & (1ULL << pool))) {
adapter->mac_table[i].state |= TXGBE_MAC_STATE_MODIFIED; + adapter->mac_table[i].state &= ~TXGBE_MAC_STATE_IN_USE; + memset(adapter->mac_table[i].addr, 0, ETH_ALEN); + adapter->mac_table[i].pools = 0; + txgbe_sync_mac_table(adapter); + return 0; + } + } + return -ENOMEM; +} + +/** + * txgbe_write_uc_addr_list - write unicast addresses to RAR table + * @netdev: network interface device structure + * + * Writes unicast address list to the RAR table. + * Returns: -ENOMEM on failure/insufficient address space + * 0 on no addresses written + * X on writing X addresses to the RAR table + **/ +int txgbe_write_uc_addr_list(struct net_device *netdev, int pool) +{ + struct txgbe_adapter *adapter = netdev_priv(netdev); + int count = 0; + + /* return ENOMEM indicating insufficient memory for addresses */ + if (netdev_uc_count(netdev) > txgbe_available_rars(adapter)) + return -ENOMEM; + + if (!netdev_uc_empty(netdev)) { + struct netdev_hw_addr *ha; + + netdev_for_each_uc_addr(ha, netdev) { + txgbe_del_mac_filter(adapter, ha->addr, pool); + txgbe_add_mac_filter(adapter, ha->addr, pool); + count++; + } + } + return count; +} + +int txgbe_add_cloud_switcher(struct txgbe_adapter *adapter, u32 key, u16 pool) +{ + struct txgbe_hw *hw = &adapter->hw; + + UNREFERENCED_PARAMETER(pool); + + wr32(hw, TXGBE_PSR_CL_SWC_IDX, 0); + wr32(hw, TXGBE_PSR_CL_SWC_KEY, key); + wr32(hw, TXGBE_PSR_CL_SWC_CTL, + TXGBE_PSR_CL_SWC_CTL_VLD | TXGBE_PSR_CL_SWC_CTL_DST_MSK); + wr32(hw, TXGBE_PSR_CL_SWC_VM_L, 0x1); + wr32(hw, TXGBE_PSR_CL_SWC_VM_H, 0x0); + + return 0; +} + +int txgbe_del_cloud_switcher(struct txgbe_adapter *adapter, u32 key, u16 pool) +{ + /* search table for addr, if found, set to 0 and sync */ + struct txgbe_hw *hw = &adapter->hw; + + UNREFERENCED_PARAMETER(key); + UNREFERENCED_PARAMETER(pool); + + wr32(hw, TXGBE_PSR_CL_SWC_IDX, 0); + wr32(hw, TXGBE_PSR_CL_SWC_CTL, 0); + + return 0; +} + +/** + * txgbe_set_rx_mode - Unicast, Multicast and Promiscuous mode set + * @netdev: network interface 
device structure + * + * The set_rx_method entry point is called whenever the unicast/multicast + * address list or the network interface flags are updated. This routine is + * responsible for configuring the hardware for proper unicast, multicast and + * promiscuous mode. + **/ +void txgbe_set_rx_mode(struct net_device *netdev) +{ + struct txgbe_adapter *adapter = netdev_priv(netdev); + struct txgbe_hw *hw = &adapter->hw; + u32 fctrl, vmolr, vlnctrl; + int count; + + /* Check for Promiscuous and All Multicast modes */ + fctrl = rd32m(hw, TXGBE_PSR_CTL, + ~(TXGBE_PSR_CTL_UPE | TXGBE_PSR_CTL_MPE)); + vmolr = rd32m(hw, TXGBE_PSR_VM_L2CTL(VMDQ_P(0)), + ~(TXGBE_PSR_VM_L2CTL_UPE | + TXGBE_PSR_VM_L2CTL_MPE | + TXGBE_PSR_VM_L2CTL_ROPE | + TXGBE_PSR_VM_L2CTL_ROMPE)); + vlnctrl = rd32m(hw, TXGBE_PSR_VLAN_CTL, + ~(TXGBE_PSR_VLAN_CTL_VFE | + TXGBE_PSR_VLAN_CTL_CFIEN)); + + /* set all bits that we expect to always be set */ + fctrl |= TXGBE_PSR_CTL_BAM | TXGBE_PSR_CTL_MFE; + vmolr |= TXGBE_PSR_VM_L2CTL_BAM | + TXGBE_PSR_VM_L2CTL_AUPE | + TXGBE_PSR_VM_L2CTL_VACC; + vlnctrl |= TXGBE_PSR_VLAN_CTL_VFE; + + hw->addr_ctrl.user_set_promisc = false; + if (netdev->flags & IFF_PROMISC) { + hw->addr_ctrl.user_set_promisc = true; + fctrl |= (TXGBE_PSR_CTL_UPE | TXGBE_PSR_CTL_MPE); + /* the PF does not want packets routed to the VFs, so clear UPE */ + vmolr |= TXGBE_PSR_VM_L2CTL_MPE; + vlnctrl &= ~TXGBE_PSR_VLAN_CTL_VFE; + } + + if (netdev->flags & IFF_ALLMULTI) { + fctrl |= TXGBE_PSR_CTL_MPE; + vmolr |= TXGBE_PSR_VM_L2CTL_MPE; + } + + /* This is useful for sniffing bad packets.
*/ + if (netdev->features & NETIF_F_RXALL) { + vmolr |= (TXGBE_PSR_VM_L2CTL_UPE | TXGBE_PSR_VM_L2CTL_MPE); + vlnctrl &= ~TXGBE_PSR_VLAN_CTL_VFE; + /* receive bad packets */ + wr32m(hw, TXGBE_RSC_CTL, + TXGBE_RSC_CTL_SAVE_MAC_ERR, + TXGBE_RSC_CTL_SAVE_MAC_ERR); + } else { + vmolr |= TXGBE_PSR_VM_L2CTL_ROPE | TXGBE_PSR_VM_L2CTL_ROMPE; + } + + /* + * Write addresses to available RAR registers, if there is not + * sufficient space to store all the addresses then enable + * unicast promiscuous mode + */ + count = txgbe_write_uc_addr_list(netdev, VMDQ_P(0)); + if (count < 0) { + vmolr &= ~TXGBE_PSR_VM_L2CTL_ROPE; + vmolr |= TXGBE_PSR_VM_L2CTL_UPE; + } + + /* + * Write addresses to the MTA, if the attempt fails + * then we should just turn on promiscuous mode so + * that we can at least receive multicast traffic + */ + count = txgbe_write_mc_addr_list(netdev); + if (count < 0) { + vmolr &= ~TXGBE_PSR_VM_L2CTL_ROMPE; + vmolr |= TXGBE_PSR_VM_L2CTL_MPE; + } + + wr32(hw, TXGBE_PSR_VLAN_CTL, vlnctrl); + wr32(hw, TXGBE_PSR_CTL, fctrl); + wr32(hw, TXGBE_PSR_VM_L2CTL(VMDQ_P(0)), vmolr); + + if (netdev->features & NETIF_F_HW_VLAN_CTAG_RX) + txgbe_vlan_strip_enable(adapter); + else + txgbe_vlan_strip_disable(adapter); + + /* enable cloud switch */ + if (adapter->flags2 & TXGBE_FLAG2_CLOUD_SWITCH_ENABLED) { + txgbe_add_cloud_switcher(adapter, 0x10, 0); + } +} + +static void txgbe_napi_enable_all(struct txgbe_adapter *adapter) +{ + struct txgbe_q_vector *q_vector; + int q_idx; + + for (q_idx = 0; q_idx < adapter->num_q_vectors; q_idx++) { + q_vector = adapter->q_vector[q_idx]; + napi_enable(&q_vector->napi); + } +} + +static void txgbe_napi_disable_all(struct txgbe_adapter *adapter) +{ + struct txgbe_q_vector *q_vector; + int q_idx; + + for (q_idx = 0; q_idx < adapter->num_q_vectors; q_idx++) { + q_vector = adapter->q_vector[q_idx]; + napi_disable(&q_vector->napi); + } +} + +void txgbe_clear_vxlan_port(struct txgbe_adapter *adapter) +{ + adapter->vxlan_port = 0; + if 
(!(adapter->flags & TXGBE_FLAG_VXLAN_OFFLOAD_CAPABLE)) + return; + wr32(&adapter->hw, TXGBE_CFG_VXLAN, 0); +} + +#define TXGBE_GSO_PARTIAL_FEATURES (NETIF_F_GSO_GRE | \ + NETIF_F_GSO_GRE_CSUM | \ + NETIF_F_GSO_IPXIP4 | \ + NETIF_F_GSO_IPXIP6 | \ + NETIF_F_GSO_UDP_TUNNEL | \ + NETIF_F_GSO_UDP_TUNNEL_CSUM) + +static inline unsigned long txgbe_tso_features(void) +{ + unsigned long features = 0; + + features |= NETIF_F_TSO; + features |= NETIF_F_TSO6; + features |= NETIF_F_GSO_PARTIAL | TXGBE_GSO_PARTIAL_FEATURES; + + return features; +} + +static void txgbe_configure_lli(struct txgbe_adapter *adapter) +{ + /* lli should only be enabled with MSI-X and MSI */ + if (!(adapter->flags & TXGBE_FLAG_MSI_ENABLED) && + !(adapter->flags & TXGBE_FLAG_MSIX_ENABLED)) + return; + + if (adapter->lli_etype) { + wr32(&adapter->hw, TXGBE_RDB_5T_CTL1(0), + (TXGBE_RDB_5T_CTL1_LLI | + TXGBE_RDB_5T_CTL1_SIZE_BP)); + wr32(&adapter->hw, TXGBE_RDB_ETYPE_CLS(0), + TXGBE_RDB_ETYPE_CLS_LLI); + wr32(&adapter->hw, TXGBE_PSR_ETYPE_SWC(0), + (adapter->lli_etype | + TXGBE_PSR_ETYPE_SWC_FILTER_EN)); + } + + if (adapter->lli_port) { + wr32(&adapter->hw, TXGBE_RDB_5T_CTL1(0), + (TXGBE_RDB_5T_CTL1_LLI | + TXGBE_RDB_5T_CTL1_SIZE_BP)); + wr32(&adapter->hw, TXGBE_RDB_5T_CTL0(0), + (TXGBE_RDB_5T_CTL0_POOL_MASK_EN | + (TXGBE_RDB_5T_CTL0_PRIORITY_MASK << + TXGBE_RDB_5T_CTL0_PRIORITY_SHIFT) | + (TXGBE_RDB_5T_CTL0_DEST_PORT_MASK << + TXGBE_RDB_5T_CTL0_5TUPLE_MASK_SHIFT))); + + wr32(&adapter->hw, TXGBE_RDB_5T_SDP(0), + (adapter->lli_port << 16)); + } + + if (adapter->lli_size) { + wr32(&adapter->hw, TXGBE_RDB_5T_CTL1(0), + TXGBE_RDB_5T_CTL1_LLI); + wr32m(&adapter->hw, TXGBE_RDB_LLI_THRE, + TXGBE_RDB_LLI_THRE_SZ(~0), adapter->lli_size); + wr32(&adapter->hw, TXGBE_RDB_5T_CTL0(0), + (TXGBE_RDB_5T_CTL0_POOL_MASK_EN | + (TXGBE_RDB_5T_CTL0_PRIORITY_MASK << + TXGBE_RDB_5T_CTL0_PRIORITY_SHIFT) | + (TXGBE_RDB_5T_CTL0_5TUPLE_MASK_MASK << + TXGBE_RDB_5T_CTL0_5TUPLE_MASK_SHIFT))); + } + + if (adapter->lli_vlan_pri) { + 
wr32m(&adapter->hw, TXGBE_RDB_LLI_THRE, + TXGBE_RDB_LLI_THRE_PRIORITY_EN | + TXGBE_RDB_LLI_THRE_UP(~0), + TXGBE_RDB_LLI_THRE_PRIORITY_EN | + (adapter->lli_vlan_pri << TXGBE_RDB_LLI_THRE_UP_SHIFT)); + } +} + +/* Additional bittime to account for TXGBE framing */ +#define TXGBE_ETH_FRAMING 20 + +/* + * txgbe_hpbthresh - calculate high water mark for flow control + * + * @adapter: board private structure to calculate for + * @pb - packet buffer to calculate + */ +static int txgbe_hpbthresh(struct txgbe_adapter *adapter, int pb) +{ + struct txgbe_hw *hw = &adapter->hw; + struct net_device *dev = adapter->netdev; + int link, tc, kb, marker; + u32 dv_id, rx_pba; + + /* Calculate max LAN frame size */ + tc = link = dev->mtu + ETH_HLEN + ETH_FCS_LEN + TXGBE_ETH_FRAMING; + + /* Calculate delay value for device */ + dv_id = TXGBE_DV(link, tc); + + /* Loopback switch introduces additional latency */ + if (adapter->flags & TXGBE_FLAG_SRIOV_ENABLED) + dv_id += TXGBE_B2BT(tc); + + /* Delay value is calculated in bit times, convert to KB */ + kb = TXGBE_BT2KB(dv_id); + rx_pba = rd32(hw, TXGBE_RDB_PB_SZ(pb)) + >> TXGBE_RDB_PB_SZ_SHIFT; + + marker = rx_pba - kb; + + /* It is possible that the packet buffer is not large enough + * to provide required headroom. In this case throw an error + * to the user and do the best we can. + */ + if (marker < 0) { + e_warn(drv, "Packet Buffer(%i) cannot provide enough " + "headroom to support flow control."
+ "Decrease MTU or number of traffic classes\n", pb); + marker = tc + 1; + } + + return marker; +} + +/* + * txgbe_lpbthresh - calculate low water mark for flow control + * + * @adapter: board private structure to calculate for + * @pb - packet buffer to calculate + */ +static int txgbe_lpbthresh(struct txgbe_adapter *adapter, int __maybe_unused pb) +{ + struct net_device *dev = adapter->netdev; + int tc; + u32 dv_id; + + /* Calculate max LAN frame size */ + tc = dev->mtu + ETH_HLEN + ETH_FCS_LEN; + + /* Calculate delay value for device */ + dv_id = TXGBE_LOW_DV(tc); + + /* Delay value is calculated in bit times, convert to KB */ + return TXGBE_BT2KB(dv_id); +} + +/* + * txgbe_pbthresh_setup - calculate and set up high and low water marks + */ +static void txgbe_pbthresh_setup(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + int num_tc = netdev_get_num_tc(adapter->netdev); + int i; + + if (!num_tc) + num_tc = 1; + + for (i = 0; i < num_tc; i++) { + hw->fc.high_water[i] = txgbe_hpbthresh(adapter, i); + hw->fc.low_water[i] = txgbe_lpbthresh(adapter, i); + + /* Low water marks must not be larger than high water marks */ + if (hw->fc.low_water[i] > hw->fc.high_water[i]) + hw->fc.low_water[i] = 0; + } + + for (; i < TXGBE_DCB_MAX_TRAFFIC_CLASS; i++) + hw->fc.high_water[i] = 0; +} + +static void txgbe_configure_pb(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + int hdrm; + int tc = netdev_get_num_tc(adapter->netdev); + + if (adapter->flags & TXGBE_FLAG_FDIR_HASH_CAPABLE || + adapter->flags & TXGBE_FLAG_FDIR_PERFECT_CAPABLE) + hdrm = 32 << adapter->fdir_pballoc; + else + hdrm = 0; + + TCALL(hw, mac.ops.setup_rxpba, tc, hdrm, PBA_STRATEGY_EQUAL); + txgbe_pbthresh_setup(adapter); +} + +static void txgbe_fdir_filter_restore(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + struct hlist_node *node; + struct txgbe_fdir_filter *filter; + + spin_lock(&adapter->fdir_perfect_lock); + + if
(!hlist_empty(&adapter->fdir_filter_list)) + txgbe_fdir_set_input_mask(hw, &adapter->fdir_mask, + adapter->cloud_mode); + + hlist_for_each_entry_safe(filter, node, + &adapter->fdir_filter_list, fdir_node) { + txgbe_fdir_write_perfect_filter(hw, + &filter->filter, + filter->sw_idx, + (filter->action == TXGBE_RDB_FDIR_DROP_QUEUE) ? + TXGBE_RDB_FDIR_DROP_QUEUE : + adapter->rx_ring[filter->action]->reg_idx, + adapter->cloud_mode); + } + + spin_unlock(&adapter->fdir_perfect_lock); +} + +void txgbe_configure_isb(struct txgbe_adapter *adapter) +{ + /* set ISB Address */ + struct txgbe_hw *hw = &adapter->hw; + + wr32(hw, TXGBE_PX_ISB_ADDR_L, + adapter->isb_dma & DMA_BIT_MASK(32)); + wr32(hw, TXGBE_PX_ISB_ADDR_H, adapter->isb_dma >> 32); +} + +void txgbe_configure_port(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + u32 value, i; + u8 tcs = netdev_get_num_tc(adapter->netdev); + + if (adapter->flags & TXGBE_FLAG_VMDQ_ENABLED) { + if (tcs > 4) + /* 8 TCs */ + value = TXGBE_CFG_PORT_CTL_NUM_TC_8 | + TXGBE_CFG_PORT_CTL_NUM_VT_16 | + TXGBE_CFG_PORT_CTL_DCB_EN; + else if (tcs > 1) + /* 4 TCs */ + value = TXGBE_CFG_PORT_CTL_NUM_TC_4 | + TXGBE_CFG_PORT_CTL_NUM_VT_32 | + TXGBE_CFG_PORT_CTL_DCB_EN; + else if (adapter->ring_feature[RING_F_RSS].indices == 4) + value = TXGBE_CFG_PORT_CTL_NUM_VT_32; + else /* adapter->ring_feature[RING_F_RSS].indices <= 2 */ + value = TXGBE_CFG_PORT_CTL_NUM_VT_64; + } else { + if (tcs > 4) + value = TXGBE_CFG_PORT_CTL_NUM_TC_8 | + TXGBE_CFG_PORT_CTL_DCB_EN; + else if (tcs > 1) + value = TXGBE_CFG_PORT_CTL_NUM_TC_4 | + TXGBE_CFG_PORT_CTL_DCB_EN; + else + value = 0; + } + + value |= TXGBE_CFG_PORT_CTL_D_VLAN | TXGBE_CFG_PORT_CTL_QINQ; + wr32m(hw, TXGBE_CFG_PORT_CTL, + TXGBE_CFG_PORT_CTL_NUM_TC_MASK | + TXGBE_CFG_PORT_CTL_NUM_VT_MASK | + TXGBE_CFG_PORT_CTL_DCB_EN | + TXGBE_CFG_PORT_CTL_D_VLAN | + TXGBE_CFG_PORT_CTL_QINQ, + value); + + wr32(hw, TXGBE_CFG_TAG_TPID(0), + ETH_P_8021Q | ETH_P_8021AD << 16); + adapter->hw.tpid[0] = 
ETH_P_8021Q; + adapter->hw.tpid[1] = ETH_P_8021AD; + for (i = 1; i < 4; i++) + wr32(hw, TXGBE_CFG_TAG_TPID(i), + ETH_P_8021Q | ETH_P_8021Q << 16); + for (i = 2; i < 8; i++) + adapter->hw.tpid[i] = ETH_P_8021Q; +} + +static void txgbe_configure(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + + txgbe_configure_pb(adapter); + + /* + * We must restore virtualization before VLANs or else + * the VLVF registers will not be populated + */ + txgbe_configure_virtualization(adapter); + txgbe_configure_port(adapter); + + txgbe_set_rx_mode(adapter->netdev); + txgbe_restore_vlan(adapter); + + TCALL(hw, mac.ops.disable_sec_rx_path); + + if (adapter->flags & TXGBE_FLAG_FDIR_HASH_CAPABLE) { + txgbe_init_fdir_signature(&adapter->hw, + adapter->fdir_pballoc); + } else if (adapter->flags & TXGBE_FLAG_FDIR_PERFECT_CAPABLE) { + txgbe_init_fdir_perfect(&adapter->hw, + adapter->fdir_pballoc, + adapter->cloud_mode); + txgbe_fdir_filter_restore(adapter); + } + + TCALL(hw, mac.ops.enable_sec_rx_path); + + TCALL(hw, mac.ops.setup_eee, + (adapter->flags2 & TXGBE_FLAG2_EEE_CAPABLE) && + (adapter->flags2 & TXGBE_FLAG2_EEE_ENABLED)); + + txgbe_configure_tx(adapter); + txgbe_configure_rx(adapter); + txgbe_configure_isb(adapter); +} + +static bool txgbe_is_sfp(struct txgbe_hw *hw) +{ + switch (TCALL(hw, mac.ops.get_media_type)) { + case txgbe_media_type_fiber: + return true; + default: + return false; + } +} + +static bool txgbe_is_backplane(struct txgbe_hw *hw) +{ + switch (TCALL(hw, mac.ops.get_media_type)) { + case txgbe_media_type_backplane: + return true; + default: + return false; + } +} + +/** + * txgbe_sfp_link_config - set up SFP+ link + * @adapter: pointer to private adapter struct + **/ +static void txgbe_sfp_link_config(struct txgbe_adapter *adapter) +{ + /* + * We are assuming the worst case scenario here, and that + * is that an SFP was inserted/removed after the reset + * but before SFP detection was enabled.
As such the best + * solution is to just start searching as soon as we start + */ + + adapter->flags2 |= TXGBE_FLAG2_SFP_NEEDS_RESET; + adapter->sfp_poll_time = 0; +} + +/** + * txgbe_non_sfp_link_config - set up non-SFP+ link + * @hw: pointer to private hardware struct + * + * Returns 0 on success, negative on failure + **/ +static int txgbe_non_sfp_link_config(struct txgbe_hw *hw) +{ + u32 speed; + bool autoneg, link_up = false; + u32 ret = TXGBE_ERR_LINK_SETUP; + + ret = TCALL(hw, mac.ops.check_link, &speed, &link_up, false); + + if (ret) + goto link_cfg_out; + + if (link_up) + return 0; + + if ((hw->subsystem_id & 0xF0) != TXGBE_ID_SFI_XAUI) { + /* setup external PHY Mac Interface */ + mtdSetMacInterfaceControl(&hw->phy_dev, hw->phy.addr, MTD_MAC_TYPE_XAUI, + MTD_FALSE, MTD_MAC_SNOOP_OFF, + 0, MTD_MAC_SPEED_1000_MBPS, + MTD_MAX_MAC_SPEED_LEAVE_UNCHANGED, + MTD_TRUE, MTD_TRUE); + + speed = hw->phy.autoneg_advertised; + if (!speed) + ret = TCALL(hw, mac.ops.get_link_capabilities, &speed, + &autoneg); + if (ret) + goto link_cfg_out; + } else { + speed = TXGBE_LINK_SPEED_10GB_FULL; + autoneg = false; + } + + ret = TCALL(hw, mac.ops.setup_link, speed, autoneg); + +link_cfg_out: + return ret; +} + +/** + * txgbe_clear_vf_stats_counters - Clear out VF stats after reset + * @adapter: board private structure + * + * On a reset we need to clear out the VF stats or accounting gets + * messed up because they're not clear on read. 
+ **/ +static void txgbe_clear_vf_stats_counters(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + int i; + + for (i = 0; i < adapter->num_vfs; i++) { + adapter->vfinfo[i].last_vfstats.gprc = + rd32(hw, TXGBE_VX_GPRC(i)); + adapter->vfinfo[i].saved_rst_vfstats.gprc += + adapter->vfinfo[i].vfstats.gprc; + adapter->vfinfo[i].vfstats.gprc = 0; + adapter->vfinfo[i].last_vfstats.gptc = + rd32(hw, TXGBE_VX_GPTC(i)); + adapter->vfinfo[i].saved_rst_vfstats.gptc += + adapter->vfinfo[i].vfstats.gptc; + adapter->vfinfo[i].vfstats.gptc = 0; + adapter->vfinfo[i].last_vfstats.gorc = + rd32(hw, TXGBE_VX_GORC_LSB(i)); + adapter->vfinfo[i].saved_rst_vfstats.gorc += + adapter->vfinfo[i].vfstats.gorc; + adapter->vfinfo[i].vfstats.gorc = 0; + adapter->vfinfo[i].last_vfstats.gotc = + rd32(hw, TXGBE_VX_GOTC_LSB(i)); + adapter->vfinfo[i].saved_rst_vfstats.gotc += + adapter->vfinfo[i].vfstats.gotc; + adapter->vfinfo[i].vfstats.gotc = 0; + adapter->vfinfo[i].last_vfstats.mprc = + rd32(hw, TXGBE_VX_MPRC(i)); + adapter->vfinfo[i].saved_rst_vfstats.mprc += + adapter->vfinfo[i].vfstats.mprc; + adapter->vfinfo[i].vfstats.mprc = 0; + } +} + +static void txgbe_setup_gpie(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + u32 gpie = 0; + + if (adapter->flags & TXGBE_FLAG_MSIX_ENABLED) { + gpie = TXGBE_PX_GPIE_MODEL; + /* + * use EIAM to auto-mask when MSI-X interrupt is asserted + * this saves a register write for every interrupt + */ + } else { + /* legacy interrupts, use EIAM to auto-mask when reading EICR, + * specifically only auto mask tx and rx interrupts */ + } + + wr32(hw, TXGBE_PX_GPIE, gpie); +} + +static void txgbe_up_complete(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + int err; + u32 links_reg; + u16 value; + + txgbe_get_hw_control(adapter); + txgbe_setup_gpie(adapter); + + if (adapter->flags & TXGBE_FLAG_MSIX_ENABLED) + txgbe_configure_msix(adapter); + else + txgbe_configure_msi_and_legacy(adapter); + + /* 
enable the optics for SFP+ fiber */ + TCALL(hw, mac.ops.enable_tx_laser); + + smp_mb__before_atomic(); + clear_bit(__TXGBE_DOWN, &adapter->state); + txgbe_napi_enable_all(adapter); + + txgbe_configure_lli(adapter); + + if (txgbe_is_sfp(hw)) { + txgbe_sfp_link_config(adapter); + } else if (txgbe_is_backplane(hw)) { + adapter->flags |= TXGBE_FLAG_NEED_LINK_CONFIG; + txgbe_service_event_schedule(adapter); + } else { + err = txgbe_non_sfp_link_config(hw); + if (err) + e_err(probe, "link_config FAILED %d\n", err); + } + + links_reg = rd32(hw, TXGBE_CFG_PORT_ST); + if (links_reg & TXGBE_CFG_PORT_ST_LINK_UP) { + if (links_reg & TXGBE_CFG_PORT_ST_LINK_10G) { + wr32(hw, TXGBE_MAC_TX_CFG, + (rd32(hw, TXGBE_MAC_TX_CFG) & + ~TXGBE_MAC_TX_CFG_SPEED_MASK) | + TXGBE_MAC_TX_CFG_SPEED_10G); + } else if (links_reg & (TXGBE_CFG_PORT_ST_LINK_1G | TXGBE_CFG_PORT_ST_LINK_100M)) { + wr32(hw, TXGBE_MAC_TX_CFG, + (rd32(hw, TXGBE_MAC_TX_CFG) & + ~TXGBE_MAC_TX_CFG_SPEED_MASK) | + TXGBE_MAC_TX_CFG_SPEED_1G); + } + } + + /* clear any pending interrupts, may auto mask */ + rd32(hw, TXGBE_PX_IC(0)); + rd32(hw, TXGBE_PX_IC(1)); + rd32(hw, TXGBE_PX_MISC_IC); + txgbe_irq_enable(adapter, true, true); + + /* enable external PHY interrupt */ + if ((hw->subsystem_id & 0xF0) == TXGBE_ID_XAUI) { + txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 0x03, 0x8011, &value); + /* only enable T unit int */ + txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xf043, 0x1); + /* active high */ + txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xf041, 0x0); + /* enable AN complete and link status change int */ + txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 0x03, 0x8010, 0xc00); + } + + /* enable transmits */ + netif_tx_start_all_queues(adapter->netdev); + + /* bring the link up in the watchdog, this could race with our first + * link up interrupt but shouldn't be a problem */ + adapter->flags |= TXGBE_FLAG_NEED_LINK_UPDATE; + adapter->link_check_timeout = jiffies; + + mod_timer(&adapter->service_timer, jiffies); + 
txgbe_clear_vf_stats_counters(adapter); + + /* Set PF Reset Done bit so PF/VF Mail Ops can work */ + wr32m(hw, TXGBE_CFG_PORT_CTL, + TXGBE_CFG_PORT_CTL_PFRSTD, TXGBE_CFG_PORT_CTL_PFRSTD); +} + +void txgbe_reinit_locked(struct txgbe_adapter *adapter) +{ + WARN_ON(in_interrupt()); + /* put off any impending NetWatchDogTimeout */ + netif_trans_update(adapter->netdev); + + while (test_and_set_bit(__TXGBE_RESETTING, &adapter->state)) + usleep_range(1000, 2000); + txgbe_down(adapter); + /* + * If SR-IOV enabled then wait a bit before bringing the adapter + * back up to give the VFs time to respond to the reset. The + * two second wait is based upon the watchdog timer cycle in + * the VF driver. + */ + if (adapter->flags & TXGBE_FLAG_SRIOV_ENABLED) + msleep(2000); + txgbe_up(adapter); + clear_bit(__TXGBE_RESETTING, &adapter->state); +} + +void txgbe_up(struct txgbe_adapter *adapter) +{ + /* hardware has been reset, we need to reload some things */ + txgbe_configure(adapter); + + txgbe_up_complete(adapter); +} + +void txgbe_reset(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + struct net_device *netdev = adapter->netdev; + int err; + u8 old_addr[ETH_ALEN]; + + if (TXGBE_REMOVED(hw->hw_addr)) + return; + /* lock SFP init bit to prevent race conditions with the watchdog */ + while (test_and_set_bit(__TXGBE_IN_SFP_INIT, &adapter->state)) + usleep_range(1000, 2000); + + /* clear all SFP and link config related flags while holding SFP_INIT */ + adapter->flags2 &= ~(TXGBE_FLAG2_SEARCH_FOR_SFP | + TXGBE_FLAG2_SFP_NEEDS_RESET); + adapter->flags &= ~TXGBE_FLAG_NEED_LINK_CONFIG; + + err = TCALL(hw, mac.ops.init_hw); + switch (err) { + case 0: + case TXGBE_ERR_SFP_NOT_PRESENT: + case TXGBE_ERR_SFP_NOT_SUPPORTED: + break; + case TXGBE_ERR_MASTER_REQUESTS_PENDING: + e_dev_err("master disable timed out\n"); + break; + case TXGBE_ERR_EEPROM_VERSION: + /* We are running on a pre-production device, log a warning */ + e_dev_warn("This device is a pre-production 
adapter/LOM. " + "Please be aware there may be issues associated " + "with your hardware. If you are experiencing " + "problems please contact your hardware " + "representative who provided you with this " + "hardware.\n"); + break; + default: + e_dev_err("Hardware Error: %d\n", err); + } + + clear_bit(__TXGBE_IN_SFP_INIT, &adapter->state); + /* do not flush user set addresses */ + memcpy(old_addr, &adapter->mac_table[0].addr, netdev->addr_len); + txgbe_flush_sw_mac_table(adapter); + txgbe_mac_set_default_filter(adapter, old_addr); + + /* update SAN MAC vmdq pool selection */ + TCALL(hw, mac.ops.set_vmdq_san_mac, VMDQ_P(0)); + + /* Clear saved DMA coalescing values except for watchdog_timer */ + hw->mac.dmac_config.fcoe_en = false; + hw->mac.dmac_config.link_speed = 0; + hw->mac.dmac_config.fcoe_tc = 0; + hw->mac.dmac_config.num_tcs = 0; + + if (test_bit(__TXGBE_PTP_RUNNING, &adapter->state)) + txgbe_ptp_reset(adapter); +} + +/** + * txgbe_clean_rx_ring - Free Rx Buffers per Queue + * @rx_ring: ring to free buffers from + **/ +static void txgbe_clean_rx_ring(struct txgbe_ring *rx_ring) +{ + struct device *dev = rx_ring->dev; + unsigned long size; + u16 i; + + /* ring already cleared, nothing to do */ + if (!rx_ring->rx_buffer_info) + return; + + /* Free all the Rx ring sk_buffs */ + for (i = 0; i < rx_ring->count; i++) { + struct txgbe_rx_buffer *rx_buffer = &rx_ring->rx_buffer_info[i]; + if (rx_buffer->dma) { + dma_unmap_single(dev, + rx_buffer->dma, + rx_ring->rx_buf_len, + DMA_FROM_DEVICE); + rx_buffer->dma = 0; + } + + if (rx_buffer->skb) { + struct sk_buff *skb = rx_buffer->skb; + if (TXGBE_CB(skb)->dma_released) { + dma_unmap_single(dev, + TXGBE_CB(skb)->dma, + rx_ring->rx_buf_len, + DMA_FROM_DEVICE); + TXGBE_CB(skb)->dma = 0; + TXGBE_CB(skb)->dma_released = false; + } + + if (TXGBE_CB(skb)->page_released) + dma_unmap_page(dev, + TXGBE_CB(skb)->dma, + txgbe_rx_bufsz(rx_ring), + DMA_FROM_DEVICE); + dev_kfree_skb(skb); + rx_buffer->skb = NULL; + } + + if 
(!rx_buffer->page) + continue; + + dma_unmap_page(dev, rx_buffer->page_dma, + txgbe_rx_pg_size(rx_ring), + DMA_FROM_DEVICE); + + __free_pages(rx_buffer->page, + txgbe_rx_pg_order(rx_ring)); + rx_buffer->page = NULL; + } + + size = sizeof(struct txgbe_rx_buffer) * rx_ring->count; + memset(rx_ring->rx_buffer_info, 0, size); + + /* Zero out the descriptor ring */ + memset(rx_ring->desc, 0, rx_ring->size); + + rx_ring->next_to_alloc = 0; + rx_ring->next_to_clean = 0; + rx_ring->next_to_use = 0; +} + +/** + * txgbe_clean_tx_ring - Free Tx Buffers + * @tx_ring: ring to be cleaned + **/ +static void txgbe_clean_tx_ring(struct txgbe_ring *tx_ring) +{ + struct txgbe_tx_buffer *tx_buffer_info; + unsigned long size; + u16 i; + + /* ring already cleared, nothing to do */ + if (!tx_ring->tx_buffer_info) + return; + + /* Free all the Tx ring sk_buffs */ + for (i = 0; i < tx_ring->count; i++) { + tx_buffer_info = &tx_ring->tx_buffer_info[i]; + txgbe_unmap_and_free_tx_resource(tx_ring, tx_buffer_info); + } + + netdev_tx_reset_queue(txring_txq(tx_ring)); + + size = sizeof(struct txgbe_tx_buffer) * tx_ring->count; + memset(tx_ring->tx_buffer_info, 0, size); + + /* Zero out the descriptor ring */ + memset(tx_ring->desc, 0, tx_ring->size); +} + +/** + * txgbe_clean_all_rx_rings - Free Rx Buffers for all queues + * @adapter: board private structure + **/ +static void txgbe_clean_all_rx_rings(struct txgbe_adapter *adapter) +{ + int i; + + for (i = 0; i < adapter->num_rx_queues; i++) + txgbe_clean_rx_ring(adapter->rx_ring[i]); +} + +/** + * txgbe_clean_all_tx_rings - Free Tx Buffers for all queues + * @adapter: board private structure + **/ +static void txgbe_clean_all_tx_rings(struct txgbe_adapter *adapter) +{ + int i; + + for (i = 0; i < adapter->num_tx_queues; i++) + txgbe_clean_tx_ring(adapter->tx_ring[i]); +} + +static void txgbe_fdir_filter_exit(struct txgbe_adapter *adapter) +{ + struct hlist_node *node; + struct txgbe_fdir_filter *filter; + + 
spin_lock(&adapter->fdir_perfect_lock); + + hlist_for_each_entry_safe(filter, node, + &adapter->fdir_filter_list, fdir_node) { + hlist_del(&filter->fdir_node); + kfree(filter); + } + adapter->fdir_filter_count = 0; + + spin_unlock(&adapter->fdir_perfect_lock); +} + +void txgbe_disable_device(struct txgbe_adapter *adapter) +{ + struct net_device *netdev = adapter->netdev; + struct txgbe_hw *hw = &adapter->hw; + + u32 i; + + /* signal that we are down to the interrupt handler */ + if (test_and_set_bit(__TXGBE_DOWN, &adapter->state)) + return; /* do nothing if already down */ + + txgbe_disable_pcie_master(hw); + /* disable receives */ + TCALL(hw, mac.ops.disable_rx); + + /* disable all enabled rx queues */ + for (i = 0; i < adapter->num_rx_queues; i++) + /* this call also flushes the previous write */ + txgbe_disable_rx_queue(adapter, adapter->rx_ring[i]); + + netif_tx_stop_all_queues(netdev); + + /* call carrier off first to avoid false dev_watchdog timeouts */ + netif_carrier_off(netdev); + netif_tx_disable(netdev); + + txgbe_irq_disable(adapter); + + txgbe_napi_disable_all(adapter); + + adapter->flags2 &= ~(TXGBE_FLAG2_FDIR_REQUIRES_REINIT | + TXGBE_FLAG2_PF_RESET_REQUESTED | + TXGBE_FLAG2_DEV_RESET_REQUESTED | + TXGBE_FLAG2_GLOBAL_RESET_REQUESTED); + adapter->flags &= ~TXGBE_FLAG_NEED_LINK_UPDATE; + + del_timer_sync(&adapter->service_timer); + + if (adapter->num_vfs) { + /* Clear EITR Select mapping */ + wr32(&adapter->hw, TXGBE_PX_ITRSEL, 0); + + /* Mark all the VFs as inactive */ + for (i = 0; i < adapter->num_vfs; i++) + adapter->vfinfo[i].clear_to_send = 0; + + /* ping all the active vfs to let them know we are going down */ + + /* Disable all VFTE/VFRE TX/RX */ + } + + if (!(((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP) || + ((hw->subsystem_device_id & TXGBE_WOL_MASK) == TXGBE_WOL_SUP))) { + /* disable mac transmitter */ + wr32m(hw, TXGBE_MAC_TX_CFG, + TXGBE_MAC_TX_CFG_TE, 0); + } + /* disable transmits in the hardware now that interrupts
are off */ + for (i = 0; i < adapter->num_tx_queues; i++) { + u8 reg_idx = adapter->tx_ring[i]->reg_idx; + wr32(hw, TXGBE_PX_TR_CFG(reg_idx), + TXGBE_PX_TR_CFG_SWFLSH); + } + + /* Disable the Tx DMA engine */ + wr32m(hw, TXGBE_TDM_CTL, TXGBE_TDM_CTL_TE, 0); +} + + +void txgbe_down(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + txgbe_disable_device(adapter); + + txgbe_reset(adapter); + + if (!(((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP))) + /* power down the optics for SFP+ fiber */ + TCALL(&adapter->hw, mac.ops.disable_tx_laser); + + txgbe_clean_all_tx_rings(adapter); + txgbe_clean_all_rx_rings(adapter); +} + +/** + * txgbe_init_shared_code - Initialize the shared code + * @hw: pointer to hardware structure + * + * This will assign function pointers and assign the MAC type and PHY code. + * Does not touch the hardware. This function must be called prior to any + * other function in the shared code. The txgbe_hw structure should be + * memset to 0 prior to calling this function. The following fields in + * hw structure should be filled in prior to calling this function: + * hw_addr, back, device_id, vendor_id, subsystem_device_id, + * subsystem_vendor_id, and revision_id + **/ +s32 txgbe_init_shared_code(struct txgbe_hw *hw) +{ + s32 status; + + DEBUGFUNC("\n"); + + status = txgbe_init_ops(hw); + return status; +} + +/** + * txgbe_sw_init - Initialize general software structures (struct txgbe_adapter) + * @adapter: board private structure to initialize + * + * txgbe_sw_init initializes the Adapter private data structure. + * Fields are initialized based on PCI device information and + * OS network device settings (MTU size). 
+ **/ +static const u32 def_rss_key[10] = { + 0xE291D73D, 0x1805EC6C, 0x2A94B30D, + 0xA54F2BEC, 0xEA49AF7C, 0xE214AD3D, 0xB855AABE, + 0x6A3E67EA, 0x14364D17, 0x3BED200D +}; + +static int txgbe_sw_init(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + struct pci_dev *pdev = adapter->pdev; + int err; + unsigned int fdir; + + /* PCI config space info */ + hw->vendor_id = pdev->vendor; + hw->device_id = pdev->device; + pci_read_config_byte(pdev, PCI_REVISION_ID, &hw->revision_id); + if (hw->revision_id == TXGBE_FAILED_READ_CFG_BYTE && + txgbe_check_cfg_remove(hw, pdev)) { + e_err(probe, "read of revision id failed\n"); + err = -ENODEV; + goto out; + } + hw->subsystem_vendor_id = pdev->subsystem_vendor; + hw->subsystem_device_id = pdev->subsystem_device; + + pci_read_config_word(pdev, PCI_SUBSYSTEM_ID, &hw->subsystem_id); + if (hw->subsystem_id == TXGBE_FAILED_READ_CFG_WORD) { + e_err(probe, "read of subsystem id failed\n"); + err = -ENODEV; + goto out; + } + + err = txgbe_init_shared_code(hw); + if (err) { + e_err(probe, "init_shared_code failed: %d\n", err); + goto out; + } + adapter->mac_table = kzalloc(sizeof(struct txgbe_mac_addr) * + hw->mac.num_rar_entries, + GFP_ATOMIC); + if (!adapter->mac_table) { + err = TXGBE_ERR_OUT_OF_MEM; + e_err(probe, "mac_table allocation failed: %d\n", err); + goto out; + } + + memcpy(adapter->rss_key, def_rss_key, sizeof(def_rss_key)); + + /* Set common capability flags and settings */ + adapter->flags2 |= TXGBE_FLAG2_RSC_CAPABLE; + fdir = min_t(int, TXGBE_MAX_FDIR_INDICES, num_online_cpus()); + adapter->ring_feature[RING_F_FDIR].limit = fdir; + adapter->max_q_vectors = TXGBE_MAX_MSIX_Q_VECTORS_SAPPHIRE; + + /* Set MAC specific capability flags and exceptions */ + adapter->flags |= TXGBE_FLAGS_SP_INIT; + adapter->flags2 |= TXGBE_FLAG2_TEMP_SENSOR_CAPABLE; + hw->phy.smart_speed = txgbe_smart_speed_off; + adapter->flags2 |= TXGBE_FLAG2_EEE_CAPABLE; + + /* n-tuple support exists, always init our spinlock */ + 
spin_lock_init(&adapter->fdir_perfect_lock); + + TCALL(hw, mbx.ops.init_params); + + /* default flow control settings */ + hw->fc.requested_mode = txgbe_fc_full; + hw->fc.current_mode = txgbe_fc_full; /* init for ethtool output */ + + adapter->last_lfc_mode = hw->fc.current_mode; + hw->fc.pause_time = TXGBE_DEFAULT_FCPAUSE; + hw->fc.send_xon = true; + hw->fc.disable_fc_autoneg = false; + + /* set default ring sizes */ + adapter->tx_ring_count = TXGBE_DEFAULT_TXD; + adapter->rx_ring_count = TXGBE_DEFAULT_RXD; + + /* set default work limits */ + adapter->tx_work_limit = TXGBE_DEFAULT_TX_WORK; + adapter->rx_work_limit = TXGBE_DEFAULT_RX_WORK; + + adapter->tx_timeout_recovery_level = 0; + + /* PF holds first pool slot */ + adapter->num_vmdqs = 1; + set_bit(0, &adapter->fwd_bitmask); + set_bit(__TXGBE_DOWN, &adapter->state); +out: + return err; +} + +/** + * txgbe_setup_tx_resources - allocate Tx resources (Descriptors) + * @tx_ring: tx descriptor ring (for a specific queue) to setup + * + * Return 0 on success, negative on failure + **/ +int txgbe_setup_tx_resources(struct txgbe_ring *tx_ring) +{ + struct device *dev = tx_ring->dev; + int orig_node = dev_to_node(dev); + int numa_node = -1; + int size; + + size = sizeof(struct txgbe_tx_buffer) * tx_ring->count; + + if (tx_ring->q_vector) + numa_node = tx_ring->q_vector->numa_node; + + tx_ring->tx_buffer_info = vzalloc_node(size, numa_node); + if (!tx_ring->tx_buffer_info) + tx_ring->tx_buffer_info = vzalloc(size); + if (!tx_ring->tx_buffer_info) + goto err; + + /* round up to nearest 4K */ + tx_ring->size = tx_ring->count * sizeof(union txgbe_tx_desc); + tx_ring->size = ALIGN(tx_ring->size, 4096); + + set_dev_node(dev, numa_node); + tx_ring->desc = dma_alloc_coherent(dev, + tx_ring->size, + &tx_ring->dma, + GFP_KERNEL); + set_dev_node(dev, orig_node); + if (!tx_ring->desc) + tx_ring->desc = dma_alloc_coherent(dev, tx_ring->size, + &tx_ring->dma, GFP_KERNEL); + if (!tx_ring->desc) + goto err; + + return 0; + +err: + 
vfree(tx_ring->tx_buffer_info); + tx_ring->tx_buffer_info = NULL; + dev_err(dev, "Unable to allocate memory for the Tx descriptor ring\n"); + return -ENOMEM; +} + +/** + * txgbe_setup_all_tx_resources - allocate all queues Tx resources + * @adapter: board private structure + * + * If this function returns with an error, then it's possible one or + * more of the rings is populated (while the rest are not). It is the + * caller's duty to clean those orphaned rings. + * + * Return 0 on success, negative on failure + **/ +static int txgbe_setup_all_tx_resources(struct txgbe_adapter *adapter) +{ + int i, err = 0; + + for (i = 0; i < adapter->num_tx_queues; i++) { + err = txgbe_setup_tx_resources(adapter->tx_ring[i]); + if (!err) + continue; + + e_err(probe, "Allocation for Tx Queue %u failed\n", i); + goto err_setup_tx; + } + + return 0; +err_setup_tx: + /* rewind the index freeing the rings as we go */ + while (i--) + txgbe_free_tx_resources(adapter->tx_ring[i]); + return err; +} + +/** + * txgbe_setup_rx_resources - allocate Rx resources (Descriptors) + * @rx_ring: rx descriptor ring (for a specific queue) to setup + * + * Returns 0 on success, negative on failure + **/ +int txgbe_setup_rx_resources(struct txgbe_ring *rx_ring) +{ + struct device *dev = rx_ring->dev; + int orig_node = dev_to_node(dev); + int numa_node = -1; + int size; + + size = sizeof(struct txgbe_rx_buffer) * rx_ring->count; + + if (rx_ring->q_vector) + numa_node = rx_ring->q_vector->numa_node; + + rx_ring->rx_buffer_info = vzalloc_node(size, numa_node); + if (!rx_ring->rx_buffer_info) + rx_ring->rx_buffer_info = vzalloc(size); + if (!rx_ring->rx_buffer_info) + goto err; + + /* Round up to nearest 4K */ + rx_ring->size = rx_ring->count * sizeof(union txgbe_rx_desc); + rx_ring->size = ALIGN(rx_ring->size, 4096); + + set_dev_node(dev, numa_node); + rx_ring->desc = dma_alloc_coherent(dev, + rx_ring->size, + &rx_ring->dma, + GFP_KERNEL); + set_dev_node(dev, orig_node); + if (!rx_ring->desc) +
rx_ring->desc = dma_alloc_coherent(dev, rx_ring->size, + &rx_ring->dma, GFP_KERNEL); + if (!rx_ring->desc) + goto err; + + return 0; +err: + vfree(rx_ring->rx_buffer_info); + rx_ring->rx_buffer_info = NULL; + dev_err(dev, "Unable to allocate memory for the Rx descriptor ring\n"); + return -ENOMEM; +} + +/** + * txgbe_setup_all_rx_resources - allocate all queues Rx resources + * @adapter: board private structure + * + * If this function returns with an error, then it's possible one or + * more of the rings is populated (while the rest are not). It is the + * caller's duty to clean those orphaned rings. + * + * Return 0 on success, negative on failure + **/ +static int txgbe_setup_all_rx_resources(struct txgbe_adapter *adapter) +{ + int i, err = 0; + + for (i = 0; i < adapter->num_rx_queues; i++) { + err = txgbe_setup_rx_resources(adapter->rx_ring[i]); + if (!err) + continue; + + e_err(probe, "Allocation for Rx Queue %u failed\n", i); + goto err_setup_rx; + } + + return 0; +err_setup_rx: + /* rewind the index freeing the rings as we go */ + while (i--) + txgbe_free_rx_resources(adapter->rx_ring[i]); + return err; +} + +/** + * txgbe_setup_isb_resources - allocate interrupt status resources + * @adapter: board private structure + * + * Return 0 on success, negative on failure + **/ +static int txgbe_setup_isb_resources(struct txgbe_adapter *adapter) +{ + struct device *dev = pci_dev_to_dev(adapter->pdev); + + adapter->isb_mem = dma_alloc_coherent(dev, + sizeof(u32) * TXGBE_ISB_MAX, + &adapter->isb_dma, + GFP_KERNEL); + if (!adapter->isb_mem) + return -ENOMEM; + memset(adapter->isb_mem, 0, sizeof(u32) * TXGBE_ISB_MAX); + return 0; +} + +/** + * txgbe_free_isb_resources - free interrupt status resources + * @adapter: board private structure + **/ +static void txgbe_free_isb_resources(struct txgbe_adapter *adapter) +{ + struct device *dev = pci_dev_to_dev(adapter->pdev); + + dma_free_coherent(dev, sizeof(u32) *
TXGBE_ISB_MAX, + adapter->isb_mem, adapter->isb_dma); + adapter->isb_mem = NULL; +} + +/** + * txgbe_free_tx_resources - Free Tx Resources per Queue + * @tx_ring: Tx descriptor ring for a specific queue + * + * Free all transmit software resources + **/ +void txgbe_free_tx_resources(struct txgbe_ring *tx_ring) +{ + txgbe_clean_tx_ring(tx_ring); + + vfree(tx_ring->tx_buffer_info); + tx_ring->tx_buffer_info = NULL; + + /* if not set, then don't free */ + if (!tx_ring->desc) + return; + + dma_free_coherent(tx_ring->dev, tx_ring->size, + tx_ring->desc, tx_ring->dma); + tx_ring->desc = NULL; +} + +/** + * txgbe_free_all_tx_resources - Free Tx Resources for All Queues + * @adapter: board private structure + * + * Free all transmit software resources + **/ +static void txgbe_free_all_tx_resources(struct txgbe_adapter *adapter) +{ + int i; + + for (i = 0; i < adapter->num_tx_queues; i++) + txgbe_free_tx_resources(adapter->tx_ring[i]); +} + +/** + * txgbe_free_rx_resources - Free Rx Resources + * @rx_ring: ring to clean the resources from + * + * Free all receive software resources + **/ +void txgbe_free_rx_resources(struct txgbe_ring *rx_ring) +{ + txgbe_clean_rx_ring(rx_ring); + + vfree(rx_ring->rx_buffer_info); + rx_ring->rx_buffer_info = NULL; + + /* if not set, then don't free */ + if (!rx_ring->desc) + return; + + dma_free_coherent(rx_ring->dev, rx_ring->size, + rx_ring->desc, rx_ring->dma); + + rx_ring->desc = NULL; +} + +/** + * txgbe_free_all_rx_resources - Free Rx Resources for All Queues + * @adapter: board private structure + * + * Free all receive software resources + **/ +static void txgbe_free_all_rx_resources(struct txgbe_adapter *adapter) +{ + int i; + + for (i = 0; i < adapter->num_rx_queues; i++) + txgbe_free_rx_resources(adapter->rx_ring[i]); +} + +/** + * txgbe_change_mtu - Change the Maximum Transfer Unit + * @netdev: network interface device structure + * @new_mtu: new value for maximum frame size + * + * Returns 0 on success, negative on failure + 
**/ +static int txgbe_change_mtu(struct net_device *netdev, int new_mtu) +{ + struct txgbe_adapter *adapter = netdev_priv(netdev); + + if ((new_mtu < 68) || (new_mtu > 9414)) + return -EINVAL; + + /* + * we cannot allow legacy VFs to enable their receive + * paths when MTU greater than 1500 is configured. So display a + * warning that legacy VFs will be disabled. + */ + if ((adapter->flags & TXGBE_FLAG_SRIOV_ENABLED) && + (new_mtu > ETH_DATA_LEN)) + e_warn(probe, "Setting MTU > 1500 will disable legacy VFs\n"); + + e_info(probe, "changing MTU from %d to %d\n", netdev->mtu, new_mtu); + + /* must set new MTU before calling down or up */ + netdev->mtu = new_mtu; + + if (netif_running(netdev)) + txgbe_reinit_locked(adapter); + + return 0; +} + +/** + * txgbe_open - Called when a network interface is made active + * @netdev: network interface device structure + * + * Returns 0 on success, negative value on failure + * + * The open entry point is called when a network interface is made + * active by the system (IFF_UP). At this point all resources needed + * for transmit and receive operations are allocated, the interrupt + * handler is registered with the OS, the watchdog timer is started, + * and the stack is notified that the interface is ready. 
+ **/ +int txgbe_open(struct net_device *netdev) +{ + struct txgbe_adapter *adapter = netdev_priv(netdev); + int err; + + /*special for backplane flow*/ + adapter->flags2 &= ~TXGBE_FLAG2_KR_PRO_DOWN; + + /* disallow open during test */ + if (test_bit(__TXGBE_TESTING, &adapter->state)) + return -EBUSY; + + netif_carrier_off(netdev); + + /* allocate transmit descriptors */ + err = txgbe_setup_all_tx_resources(adapter); + if (err) + goto err_setup_tx; + + /* allocate receive descriptors */ + err = txgbe_setup_all_rx_resources(adapter); + if (err) + goto err_setup_rx; + + err = txgbe_setup_isb_resources(adapter); + if (err) + goto err_req_isb; + + txgbe_configure(adapter); + + err = txgbe_request_irq(adapter); + if (err) + goto err_req_irq; + + /* Notify the stack of the actual queue counts. */ + err = netif_set_real_num_tx_queues(netdev, adapter->num_vmdqs > 1 + ? adapter->queues_per_pool + : adapter->num_tx_queues); + if (err) + goto err_set_queues; + + err = netif_set_real_num_rx_queues(netdev, adapter->num_vmdqs > 1 + ? adapter->queues_per_pool + : adapter->num_rx_queues); + if (err) + goto err_set_queues; + + txgbe_ptp_init(adapter); + + txgbe_up_complete(adapter); + + txgbe_clear_vxlan_port(adapter); + udp_tunnel_get_rx_info(netdev); + + return 0; + +err_set_queues: + txgbe_free_irq(adapter); +err_req_irq: + txgbe_free_isb_resources(adapter); +err_req_isb: + txgbe_free_all_rx_resources(adapter); + +err_setup_rx: + txgbe_free_all_tx_resources(adapter); +err_setup_tx: + txgbe_reset(adapter); + + return err; +} + +/** + * txgbe_close_suspend - actions necessary to both suspend and close flows + * @adapter: the private adapter struct + * + * This function should contain the necessary work common to both suspending + * and closing of the device. 
+ */ +static void txgbe_close_suspend(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + + txgbe_ptp_suspend(adapter); + + txgbe_disable_device(adapter); + if (!((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP)) + TCALL(hw, mac.ops.disable_tx_laser); + txgbe_clean_all_tx_rings(adapter); + txgbe_clean_all_rx_rings(adapter); + + txgbe_free_irq(adapter); + + txgbe_free_isb_resources(adapter); + txgbe_free_all_rx_resources(adapter); + txgbe_free_all_tx_resources(adapter); +} + +/** + * txgbe_close - Disables a network interface + * @netdev: network interface device structure + * + * Returns 0, this is not allowed to fail + * + * The close entry point is called when an interface is de-activated + * by the OS. The hardware is still under the drivers control, but + * needs to be disabled. A global MAC reset is issued to stop the + * hardware, and all transmit and receive resources are freed. + **/ +int txgbe_close(struct net_device *netdev) +{ + struct txgbe_adapter *adapter = netdev_priv(netdev); + struct txgbe_hw *hw = &adapter->hw; + + if (hw->subsystem_device_id == TXGBE_ID_WX1820_KR_KX_KX4 || + hw->subsystem_device_id == TXGBE_ID_SP1000_KR_KX_KX4) { + txgbe_bp_close_protect(adapter); + } + + txgbe_ptp_stop(adapter); + + txgbe_down(adapter); + txgbe_free_irq(adapter); + + txgbe_free_isb_resources(adapter); + txgbe_free_all_rx_resources(adapter); + txgbe_free_all_tx_resources(adapter); + + txgbe_fdir_filter_exit(adapter); + + txgbe_release_hw_control(adapter); + + return 0; +} + +#ifdef CONFIG_PM +static int txgbe_resume(struct pci_dev *pdev) +{ + struct txgbe_adapter *adapter; + struct net_device *netdev; + u32 err; + + adapter = pci_get_drvdata(pdev); + netdev = adapter->netdev; + adapter->hw.hw_addr = adapter->io_addr; + pci_set_power_state(pdev, PCI_D0); + pci_restore_state(pdev); + /* + * pci_restore_state clears dev->state_saved so call + * pci_save_state to restore it. 
+ */
+	pci_save_state(pdev);
+
+	err = pci_enable_device_mem(pdev);
+	if (err) {
+		e_dev_err("Cannot enable PCI device from suspend\n");
+		return err;
+	}
+	smp_mb__before_atomic();
+	clear_bit(__TXGBE_DISABLED, &adapter->state);
+	pci_set_master(pdev);
+
+	pci_wake_from_d3(pdev, false);
+
+	txgbe_reset(adapter);
+
+	rtnl_lock();
+
+	err = txgbe_init_interrupt_scheme(adapter);
+	if (!err && netif_running(netdev))
+		err = txgbe_open(netdev);
+
+	rtnl_unlock();
+
+	if (err)
+		return err;
+
+	netif_device_attach(netdev);
+
+	return 0;
+}
+#endif /* CONFIG_PM */
+
+/*
+ * On older kernels (<2.6.12) with power management disabled,
+ * __txgbe_shutdown is defined but unused, which causes a compile
+ * warning/error.
+ */
+static int __txgbe_shutdown(struct pci_dev *pdev, bool *enable_wake)
+{
+	struct txgbe_adapter *adapter = pci_get_drvdata(pdev);
+	struct net_device *netdev = adapter->netdev;
+	struct txgbe_hw *hw = &adapter->hw;
+	u32 wufc = adapter->wol;
+#ifdef CONFIG_PM
+	int retval = 0;
+#endif
+
+	netif_device_detach(netdev);
+
+	rtnl_lock();
+	if (netif_running(netdev))
+		txgbe_close_suspend(adapter);
+	rtnl_unlock();
+
+	txgbe_clear_interrupt_scheme(adapter);
+
+#ifdef CONFIG_PM
+	retval = pci_save_state(pdev);
+	if (retval)
+		return retval;
+#endif
+
+	/* this won't stop the link if manageability or WoL is enabled */
+	txgbe_stop_mac_link_on_d3(hw);
+
+	if (wufc) {
+		txgbe_set_rx_mode(netdev);
+		txgbe_configure_rx(adapter);
+		/* enable the optics for SFP+ fiber as we can WoL */
+		TCALL(hw, mac.ops.enable_tx_laser);
+
+		/* turn on all-multi mode if wake on multicast is enabled */
+		if (wufc & TXGBE_PSR_WKUP_CTL_MC) {
+			wr32m(hw, TXGBE_PSR_CTL,
+			      TXGBE_PSR_CTL_MPE, TXGBE_PSR_CTL_MPE);
+		}
+
+		pci_clear_master(adapter->pdev);
+		wr32(hw, TXGBE_PSR_WKUP_CTL, wufc);
+	} else {
+		wr32(hw, TXGBE_PSR_WKUP_CTL, 0);
+	}
+
+	pci_wake_from_d3(pdev, !!wufc);
+
+	*enable_wake = !!wufc;
+	txgbe_release_hw_control(adapter);
+
+	if
(!test_and_set_bit(__TXGBE_DISABLED, &adapter->state)) + pci_disable_device(pdev); + + return 0; +} + +#ifdef CONFIG_PM +static int txgbe_suspend(struct pci_dev *pdev, + pm_message_t __always_unused state) +{ + int retval; + bool wake; + + retval = __txgbe_shutdown(pdev, &wake); + if (retval) + return retval; + + if (wake) { + pci_prepare_to_sleep(pdev); + } else { + pci_wake_from_d3(pdev, false); + pci_set_power_state(pdev, PCI_D3hot); + } + + return 0; +} +#endif /* CONFIG_PM */ + +static void txgbe_shutdown(struct pci_dev *pdev) +{ + bool wake; + + __txgbe_shutdown(pdev, &wake); + + if (system_state == SYSTEM_POWER_OFF) { + pci_wake_from_d3(pdev, wake); + pci_set_power_state(pdev, PCI_D3hot); + } +} + +/** + * txgbe_get_stats64 - Get System Network Statistics + * @netdev: network interface device structure + * @stats: storage space for 64bit statistics + * + * Returns 64bit statistics, for use in the ndo_get_stats64 callback. This + * function replaces txgbe_get_stats for kernels which support it. 
+ */ +static void txgbe_get_stats64(struct net_device *netdev, + struct rtnl_link_stats64 *stats) +{ + struct txgbe_adapter *adapter = netdev_priv(netdev); + int i; + + rcu_read_lock(); + for (i = 0; i < adapter->num_rx_queues; i++) { + struct txgbe_ring *ring = READ_ONCE(adapter->rx_ring[i]); + u64 bytes, packets; + unsigned int start; + + if (ring) { + do { + start = u64_stats_fetch_begin_irq(&ring->syncp); + packets = ring->stats.packets; + bytes = ring->stats.bytes; + } while (u64_stats_fetch_retry_irq(&ring->syncp, + start)); + stats->rx_packets += packets; + stats->rx_bytes += bytes; + } + } + + for (i = 0; i < adapter->num_tx_queues; i++) { + struct txgbe_ring *ring = READ_ONCE(adapter->tx_ring[i]); + u64 bytes, packets; + unsigned int start; + + if (ring) { + do { + start = u64_stats_fetch_begin_irq(&ring->syncp); + packets = ring->stats.packets; + bytes = ring->stats.bytes; + } while (u64_stats_fetch_retry_irq(&ring->syncp, + start)); + stats->tx_packets += packets; + stats->tx_bytes += bytes; + } + } + rcu_read_unlock(); + /* following stats updated by txgbe_watchdog_task() */ + stats->multicast = netdev->stats.multicast; + stats->rx_errors = netdev->stats.rx_errors; + stats->rx_length_errors = netdev->stats.rx_length_errors; + stats->rx_crc_errors = netdev->stats.rx_crc_errors; + stats->rx_missed_errors = netdev->stats.rx_missed_errors; +} + +/** + * txgbe_update_stats - Update the board statistics counters. 
+ * @adapter: board private structure + **/ +void txgbe_update_stats(struct txgbe_adapter *adapter) +{ + struct net_device_stats *net_stats = &adapter->netdev->stats; + struct txgbe_hw *hw = &adapter->hw; + struct txgbe_hw_stats *hwstats = &adapter->stats; + u64 total_mpc = 0; + u32 i, missed_rx = 0, mpc, bprc, lxon, lxoff; + u64 non_eop_descs = 0, restart_queue = 0, tx_busy = 0; + u64 alloc_rx_page_failed = 0, alloc_rx_buff_failed = 0; + u64 bytes = 0, packets = 0, hw_csum_rx_error = 0; + u64 hw_csum_rx_good = 0; + + if (test_bit(__TXGBE_DOWN, &adapter->state) || + test_bit(__TXGBE_RESETTING, &adapter->state)) + return; + + if (adapter->flags2 & TXGBE_FLAG2_RSC_ENABLED) { + u64 rsc_count = 0; + u64 rsc_flush = 0; + for (i = 0; i < adapter->num_rx_queues; i++) { + rsc_count += adapter->rx_ring[i]->rx_stats.rsc_count; + rsc_flush += adapter->rx_ring[i]->rx_stats.rsc_flush; + } + adapter->rsc_total_count = rsc_count; + adapter->rsc_total_flush = rsc_flush; + } + + for (i = 0; i < adapter->num_rx_queues; i++) { + struct txgbe_ring *rx_ring = adapter->rx_ring[i]; + non_eop_descs += rx_ring->rx_stats.non_eop_descs; + alloc_rx_page_failed += rx_ring->rx_stats.alloc_rx_page_failed; + alloc_rx_buff_failed += rx_ring->rx_stats.alloc_rx_buff_failed; + hw_csum_rx_error += rx_ring->rx_stats.csum_err; + hw_csum_rx_good += rx_ring->rx_stats.csum_good_cnt; + bytes += rx_ring->stats.bytes; + packets += rx_ring->stats.packets; + + } + adapter->non_eop_descs = non_eop_descs; + adapter->alloc_rx_page_failed = alloc_rx_page_failed; + adapter->alloc_rx_buff_failed = alloc_rx_buff_failed; + adapter->hw_csum_rx_error = hw_csum_rx_error; + adapter->hw_csum_rx_good = hw_csum_rx_good; + net_stats->rx_bytes = bytes; + net_stats->rx_packets = packets; + + bytes = 0; + packets = 0; + /* gather some stats to the adapter struct that are per queue */ + for (i = 0; i < adapter->num_tx_queues; i++) { + struct txgbe_ring *tx_ring = adapter->tx_ring[i]; + restart_queue += 
tx_ring->tx_stats.restart_queue; + tx_busy += tx_ring->tx_stats.tx_busy; + bytes += tx_ring->stats.bytes; + packets += tx_ring->stats.packets; + } + adapter->restart_queue = restart_queue; + adapter->tx_busy = tx_busy; + net_stats->tx_bytes = bytes; + net_stats->tx_packets = packets; + + hwstats->crcerrs += rd32(hw, TXGBE_RX_CRC_ERROR_FRAMES_LOW); + + /* 8 register reads */ + for (i = 0; i < 8; i++) { + /* for packet buffers not used, the register should read 0 */ + mpc = rd32(hw, TXGBE_RDB_MPCNT(i)); + missed_rx += mpc; + hwstats->mpc[i] += mpc; + total_mpc += hwstats->mpc[i]; + hwstats->pxontxc[i] += rd32(hw, TXGBE_RDB_PXONTXC(i)); + hwstats->pxofftxc[i] += + rd32(hw, TXGBE_RDB_PXOFFTXC(i)); + hwstats->pxonrxc[i] += rd32(hw, TXGBE_MAC_PXONRXC(i)); + } + + hwstats->gprc += rd32(hw, TXGBE_PX_GPRC); + + txgbe_update_xoff_received(adapter); + + hwstats->o2bgptc += rd32(hw, TXGBE_TDM_OS2BMC_CNT); + if (txgbe_check_mng_access(&adapter->hw)) { + hwstats->o2bspc += rd32(hw, TXGBE_MNG_OS2BMC_CNT); + hwstats->b2ospc += rd32(hw, TXGBE_MNG_BMC2OS_CNT); + } + hwstats->b2ogprc += rd32(hw, TXGBE_RDM_BMC2OS_CNT); + hwstats->gorc += rd32(hw, TXGBE_PX_GORC_LSB); + hwstats->gorc += (u64)rd32(hw, TXGBE_PX_GORC_MSB) << 32; + + hwstats->gotc += rd32(hw, TXGBE_PX_GOTC_LSB); + hwstats->gotc += (u64)rd32(hw, TXGBE_PX_GOTC_MSB) << 32; + + + adapter->hw_rx_no_dma_resources += + rd32(hw, TXGBE_RDM_DRP_PKT); + hwstats->lxonrxc += rd32(hw, TXGBE_MAC_LXONRXC); + + hwstats->fdirmatch += rd32(hw, TXGBE_RDB_FDIR_MATCH); + hwstats->fdirmiss += rd32(hw, TXGBE_RDB_FDIR_MISS); + + bprc = rd32(hw, TXGBE_RX_BC_FRAMES_GOOD_LOW); + hwstats->bprc += bprc; + hwstats->mprc = 0; + + for (i = 0; i < 128; i++) + hwstats->mprc += rd32(hw, TXGBE_PX_MPRC(i)); + + + hwstats->roc += rd32(hw, TXGBE_RX_OVERSIZE_FRAMES_GOOD); + hwstats->rlec += rd32(hw, TXGBE_RX_LEN_ERROR_FRAMES_LOW); + lxon = rd32(hw, TXGBE_RDB_LXONTXC); + hwstats->lxontxc += lxon; + lxoff = rd32(hw, TXGBE_RDB_LXOFFTXC); + hwstats->lxofftxc += lxoff; 
+ + hwstats->gptc += rd32(hw, TXGBE_PX_GPTC); + hwstats->mptc += rd32(hw, TXGBE_TX_MC_FRAMES_GOOD_LOW); + hwstats->ruc += rd32(hw, TXGBE_RX_UNDERSIZE_FRAMES_GOOD); + hwstats->tpr += rd32(hw, TXGBE_RX_FRAME_CNT_GOOD_BAD_LOW); + hwstats->bptc += rd32(hw, TXGBE_TX_BC_FRAMES_GOOD_LOW); + /* Fill out the OS statistics structure */ + net_stats->multicast = hwstats->mprc; + + /* Rx Errors */ + net_stats->rx_errors = hwstats->crcerrs + + hwstats->rlec; + net_stats->rx_dropped = 0; + net_stats->rx_length_errors = hwstats->rlec; + net_stats->rx_crc_errors = hwstats->crcerrs; + net_stats->rx_missed_errors = total_mpc; + + /* + * VF Stats Collection - skip while resetting because these + * are not clear on read and otherwise you'll sometimes get + * crazy values. + */ + if (!test_bit(__TXGBE_RESETTING, &adapter->state)) { + for (i = 0; i < adapter->num_vfs; i++) { + UPDATE_VF_COUNTER_32bit(TXGBE_VX_GPRC(i), \ + adapter->vfinfo[i].last_vfstats.gprc, \ + adapter->vfinfo[i].vfstats.gprc); + UPDATE_VF_COUNTER_32bit(TXGBE_VX_GPTC(i), \ + adapter->vfinfo[i].last_vfstats.gptc, \ + adapter->vfinfo[i].vfstats.gptc); + UPDATE_VF_COUNTER_36bit(TXGBE_VX_GORC_LSB(i), \ + TXGBE_VX_GORC_MSB(i), \ + adapter->vfinfo[i].last_vfstats.gorc, \ + adapter->vfinfo[i].vfstats.gorc); + UPDATE_VF_COUNTER_36bit(TXGBE_VX_GOTC_LSB(i), \ + TXGBE_VX_GOTC_MSB(i), \ + adapter->vfinfo[i].last_vfstats.gotc, \ + adapter->vfinfo[i].vfstats.gotc); + UPDATE_VF_COUNTER_32bit(TXGBE_VX_MPRC(i), \ + adapter->vfinfo[i].last_vfstats.mprc, \ + adapter->vfinfo[i].vfstats.mprc); + } + } +} + +/** + * txgbe_fdir_reinit_subtask - worker thread to reinit FDIR filter table + * @adapter - pointer to the device adapter structure + **/ +static void txgbe_fdir_reinit_subtask(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + int i; + + if (!(adapter->flags2 & TXGBE_FLAG2_FDIR_REQUIRES_REINIT)) + return; + + adapter->flags2 &= ~TXGBE_FLAG2_FDIR_REQUIRES_REINIT; + + /* if interface is down do nothing */ + if 
(test_bit(__TXGBE_DOWN, &adapter->state)) + return; + + /* do nothing if we are not using signature filters */ + if (!(adapter->flags & TXGBE_FLAG_FDIR_HASH_CAPABLE)) + return; + + adapter->fdir_overflow++; + + if (txgbe_reinit_fdir_tables(hw) == 0) { + for (i = 0; i < adapter->num_tx_queues; i++) + set_bit(__TXGBE_TX_FDIR_INIT_DONE, + &(adapter->tx_ring[i]->state)); + /* re-enable flow director interrupts */ + wr32m(hw, TXGBE_PX_MISC_IEN, + TXGBE_PX_MISC_IEN_FLOW_DIR, TXGBE_PX_MISC_IEN_FLOW_DIR); + } else { + e_err(probe, "failed to finish FDIR re-initialization, " + "ignored adding FDIR ATR filters\n"); + } +} + +/** + * txgbe_check_hang_subtask - check for hung queues and dropped interrupts + * @adapter - pointer to the device adapter structure + * + * This function serves two purposes. First it strobes the interrupt lines + * in order to make certain interrupts are occurring. Secondly it sets the + * bits needed to check for TX hangs. As a result we should immediately + * determine if a hang has occurred. 
+ */ +static void txgbe_check_hang_subtask(struct txgbe_adapter *adapter) +{ + int i; + + /* If we're down or resetting, just bail */ + if (test_bit(__TXGBE_DOWN, &adapter->state) || + test_bit(__TXGBE_REMOVING, &adapter->state) || + test_bit(__TXGBE_RESETTING, &adapter->state)) + return; + + /* Force detection of hung controller */ + if (netif_carrier_ok(adapter->netdev)) { + for (i = 0; i < adapter->num_tx_queues; i++) + set_check_for_tx_hang(adapter->tx_ring[i]); + } + +} + +/** + * txgbe_watchdog_update_link - update the link status + * @adapter - pointer to the device adapter structure + * @link_speed - pointer to a u32 to store the link_speed + **/ +static void txgbe_watchdog_update_link(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + u32 link_speed = adapter->link_speed; + bool link_up = adapter->link_up; +// bool pfc_en = adapter->dcb_cfg.pfc_mode_enable; + u32 reg; + u32 i = 1; + + if (!(adapter->flags & TXGBE_FLAG_NEED_LINK_UPDATE)) + return; + + link_speed = TXGBE_LINK_SPEED_10GB_FULL; + link_up = true; + TCALL(hw, mac.ops.check_link, &link_speed, &link_up, false); + + if (link_up || time_after(jiffies, (adapter->link_check_timeout + + TXGBE_TRY_LINK_TIMEOUT))) { + adapter->flags &= ~TXGBE_FLAG_NEED_LINK_UPDATE; + } + + for (i = 0; i < 3; i++) { + TCALL(hw, mac.ops.check_link, &link_speed, &link_up, false); + msleep(1); + } + + if (link_up && !((adapter->flags & TXGBE_FLAG_DCB_ENABLED))) { + TCALL(hw, mac.ops.fc_enable); + txgbe_set_rx_drop_en(adapter); + } + + if (link_up) { + adapter->last_rx_ptp_check = jiffies; + + if (test_bit(__TXGBE_PTP_RUNNING, &adapter->state)) + txgbe_ptp_start_cyclecounter(adapter); + + if (link_speed & TXGBE_LINK_SPEED_10GB_FULL) { + wr32(hw, TXGBE_MAC_TX_CFG, + (rd32(hw, TXGBE_MAC_TX_CFG) & + ~TXGBE_MAC_TX_CFG_SPEED_MASK) | TXGBE_MAC_TX_CFG_TE | + TXGBE_MAC_TX_CFG_SPEED_10G); + } else if (link_speed & (TXGBE_LINK_SPEED_1GB_FULL | + TXGBE_LINK_SPEED_100_FULL | TXGBE_LINK_SPEED_10_FULL)) { + wr32(hw, 
TXGBE_MAC_TX_CFG, + (rd32(hw, TXGBE_MAC_TX_CFG) & + ~TXGBE_MAC_TX_CFG_SPEED_MASK) | TXGBE_MAC_TX_CFG_TE | + TXGBE_MAC_TX_CFG_SPEED_1G); + } + + /* Re configure MAC RX */ + reg = rd32(hw, TXGBE_MAC_RX_CFG); + wr32(hw, TXGBE_MAC_RX_CFG, reg); + wr32(hw, TXGBE_MAC_PKT_FLT, TXGBE_MAC_PKT_FLT_PR); + reg = rd32(hw, TXGBE_MAC_WDG_TIMEOUT); + wr32(hw, TXGBE_MAC_WDG_TIMEOUT, reg); + } + + adapter->link_up = link_up; + adapter->link_speed = link_speed; + if (hw->mac.ops.dmac_config && hw->mac.dmac_config.watchdog_timer) { + u8 num_tcs = netdev_get_num_tc(adapter->netdev); + + if (hw->mac.dmac_config.link_speed != link_speed || + hw->mac.dmac_config.num_tcs != num_tcs) { + hw->mac.dmac_config.link_speed = link_speed; + hw->mac.dmac_config.num_tcs = num_tcs; + TCALL(hw, mac.ops.dmac_config); + } + } +} + +static void txgbe_update_default_up(struct txgbe_adapter *adapter) +{ + u8 up = 0; + + adapter->default_up = up; +} + +/** + * txgbe_watchdog_link_is_up - update netif_carrier status and + * print link up message + * @adapter - pointer to the device adapter structure + **/ +static void txgbe_watchdog_link_is_up(struct txgbe_adapter *adapter) +{ + struct net_device *netdev = adapter->netdev; + struct txgbe_hw *hw = &adapter->hw; + u32 link_speed = adapter->link_speed; + bool flow_rx, flow_tx; + + /* only continue if link was previously down */ + if (netif_carrier_ok(netdev)) + return; + + adapter->flags2 &= ~TXGBE_FLAG2_SEARCH_FOR_SFP; + + /* flow_rx, flow_tx report link flow control status */ + flow_rx = (rd32(hw, TXGBE_MAC_RX_FLOW_CTRL) & 0x101) == 0x1; + flow_tx = !!(TXGBE_RDB_RFCC_RFCE_802_3X & + rd32(hw, TXGBE_RDB_RFCC)); + + e_info(drv, "NIC Link is Up %s, Flow Control: %s\n", + (link_speed == TXGBE_LINK_SPEED_10GB_FULL ? + "10 Gbps" : + (link_speed == TXGBE_LINK_SPEED_1GB_FULL ? + "1 Gbps" : + (link_speed == TXGBE_LINK_SPEED_100_FULL ? + "100 Mbps" : + (link_speed == TXGBE_LINK_SPEED_10_FULL ? + "10 Mbps" : + "unknown speed")))), + ((flow_rx && flow_tx) ? 
"RX/TX" : + (flow_rx ? "RX" : + (flow_tx ? "TX" : "None")))); + + netif_carrier_on(netdev); + netif_tx_wake_all_queues(netdev); + + /* update the default user priority for VFs */ + txgbe_update_default_up(adapter); + + /* ping all the active vfs to let them know link has changed */ +} + +/** + * txgbe_watchdog_link_is_down - update netif_carrier status and + * print link down message + * @adapter - pointer to the adapter structure + **/ +static void txgbe_watchdog_link_is_down(struct txgbe_adapter *adapter) +{ + struct net_device *netdev = adapter->netdev; + struct txgbe_hw *hw = &adapter->hw; + adapter->link_up = false; + adapter->link_speed = 0; + + /* only continue if link was up previously */ + if (!netif_carrier_ok(netdev)) + return; + + if (hw->subsystem_device_id == TXGBE_ID_WX1820_KR_KX_KX4 || + hw->subsystem_device_id == TXGBE_ID_SP1000_KR_KX_KX4) { + txgbe_bp_down_event(adapter); + } + + if (test_bit(__TXGBE_PTP_RUNNING, &adapter->state)) + txgbe_ptp_start_cyclecounter(adapter); + + e_info(drv, "NIC Link is Down\n"); + netif_carrier_off(netdev); + netif_tx_stop_all_queues(netdev); + + /* ping all the active vfs to let them know link has changed */ + +} + +static bool txgbe_ring_tx_pending(struct txgbe_adapter *adapter) +{ + int i; + + for (i = 0; i < adapter->num_tx_queues; i++) { + struct txgbe_ring *tx_ring = adapter->tx_ring[i]; + + if (tx_ring->next_to_use != tx_ring->next_to_clean) + return true; + } + + return false; +} + +static bool txgbe_vf_tx_pending(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + struct txgbe_ring_feature *vmdq = &adapter->ring_feature[RING_F_VMDQ]; + u32 q_per_pool = __ALIGN_MASK(1, ~vmdq->mask); + + u32 i, j; + + if (!adapter->num_vfs) + return false; + + for (i = 0; i < adapter->num_vfs; i++) { + for (j = 0; j < q_per_pool; j++) { + u32 h, t; + + h = rd32(hw, + TXGBE_PX_TR_RPn(q_per_pool, i, j)); + t = rd32(hw, + TXGBE_PX_TR_WPn(q_per_pool, i, j)); + + if (h != t) + return true; + } + } + + return 
false; +} + +/** + * txgbe_watchdog_flush_tx - flush queues on link down + * @adapter - pointer to the device adapter structure + **/ +static void txgbe_watchdog_flush_tx(struct txgbe_adapter *adapter) +{ + if (!netif_carrier_ok(adapter->netdev)) { + if (txgbe_ring_tx_pending(adapter) || + txgbe_vf_tx_pending(adapter)) { + /* We've lost link, so the controller stops DMA, + * but we've got queued Tx work that's never going + * to get done, so reset controller to flush Tx. + * (Do the reset outside of interrupt context). + */ + e_warn(drv, "initiating reset due to lost link with " + "pending Tx work\n"); + adapter->flags2 |= TXGBE_FLAG2_PF_RESET_REQUESTED; + } + } +} + +/** + * txgbe_watchdog_subtask - check and bring link up + * @adapter - pointer to the device adapter structure + **/ +static void txgbe_watchdog_subtask(struct txgbe_adapter *adapter) +{ + u32 value = 0; + struct txgbe_hw *hw = &adapter->hw; + + /* if interface is down do nothing */ + if (test_bit(__TXGBE_DOWN, &adapter->state) || + test_bit(__TXGBE_REMOVING, &adapter->state) || + test_bit(__TXGBE_RESETTING, &adapter->state)) + return; + + if (hw->subsystem_device_id == TXGBE_ID_WX1820_KR_KX_KX4 || + hw->subsystem_device_id == TXGBE_ID_SP1000_KR_KX_KX4) { + txgbe_bp_watchdog_event(adapter); + } + + if (BOND_CHECK_LINK_MODE == 1) { + value = rd32(hw, 0x14404); + value = value & 0x1; + if (value == 1) + adapter->flags |= TXGBE_FLAG_NEED_LINK_UPDATE; + } + if (!(adapter->flags2 & TXGBE_FLAG2_LINK_DOWN)) + txgbe_watchdog_update_link(adapter); + + if (adapter->link_up) + txgbe_watchdog_link_is_up(adapter); + else + txgbe_watchdog_link_is_down(adapter); + + txgbe_update_stats(adapter); + + txgbe_watchdog_flush_tx(adapter); +} + +/** + * txgbe_sfp_detection_subtask - poll for SFP+ cable + * @adapter - the txgbe adapter structure + **/ +static void txgbe_sfp_detection_subtask(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + struct txgbe_mac_info *mac = &hw->mac; + s32 err; + + /* not 
searching for SFP so there is nothing to do here */ + if (!(adapter->flags2 & TXGBE_FLAG2_SEARCH_FOR_SFP) && + !(adapter->flags2 & TXGBE_FLAG2_SFP_NEEDS_RESET)) + return; + + if (adapter->sfp_poll_time && + time_after(adapter->sfp_poll_time, jiffies)) + return; /* If not yet time to poll for SFP */ + + /* someone else is in init, wait until next service event */ + if (test_and_set_bit(__TXGBE_IN_SFP_INIT, &adapter->state)) + return; + + adapter->sfp_poll_time = jiffies + TXGBE_SFP_POLL_JIFFIES - 1; + + err = TCALL(hw, phy.ops.identify_sfp); + if (err == TXGBE_ERR_SFP_NOT_SUPPORTED) + goto sfp_out; + + if (err == TXGBE_ERR_SFP_NOT_PRESENT) { + /* If no cable is present, then we need to reset + * the next time we find a good cable. */ + adapter->flags2 |= TXGBE_FLAG2_SFP_NEEDS_RESET; + } + + /* exit on error */ + if (err) + goto sfp_out; + + /* exit if reset not needed */ + if (!(adapter->flags2 & TXGBE_FLAG2_SFP_NEEDS_RESET)) + goto sfp_out; + + adapter->flags2 &= ~TXGBE_FLAG2_SFP_NEEDS_RESET; + + if (hw->phy.multispeed_fiber) { + /* Set up dual speed SFP+ support */ + mac->ops.setup_link = txgbe_setup_mac_link_multispeed_fiber; + mac->ops.setup_mac_link = txgbe_setup_mac_link; + mac->ops.set_rate_select_speed = + txgbe_set_hard_rate_select_speed; + } else { + mac->ops.setup_link = txgbe_setup_mac_link; + mac->ops.set_rate_select_speed = + txgbe_set_hard_rate_select_speed; + hw->phy.autoneg_advertised = 0; + } + + adapter->flags |= TXGBE_FLAG_NEED_LINK_CONFIG; + e_info(probe, "detected SFP+: %d\n", hw->phy.sfp_type); + +sfp_out: + clear_bit(__TXGBE_IN_SFP_INIT, &adapter->state); + + if ((err == TXGBE_ERR_SFP_NOT_SUPPORTED) && + adapter->netdev_registered) { + e_dev_err("failed to initialize because an unsupported " + "SFP+ module type was detected.\n"); + } +} + +/** + * txgbe_sfp_link_config_subtask - set up link SFP after module install + * @adapter - the txgbe adapter structure + **/ +static void txgbe_sfp_link_config_subtask(struct txgbe_adapter *adapter) +{ + 
struct txgbe_hw *hw = &adapter->hw; + u32 speed; + bool autoneg = false; + u16 value; + u8 device_type = hw->subsystem_id & 0xF0; + + if (!(adapter->flags & TXGBE_FLAG_NEED_LINK_CONFIG)) + return; + + /* someone else is in init, wait until next service event */ + if (test_and_set_bit(__TXGBE_IN_SFP_INIT, &adapter->state)) + return; + + adapter->flags &= ~TXGBE_FLAG_NEED_LINK_CONFIG; + + if (device_type == TXGBE_ID_XAUI) { + /* clear ext phy int status */ + txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 0x03, 0x8011, &value); + if (value & 0x400) + adapter->flags |= TXGBE_FLAG_NEED_LINK_UPDATE; + if (!(value & 0x800)) { + clear_bit(__TXGBE_IN_SFP_INIT, &adapter->state); + return; + } + } + + if (device_type == TXGBE_ID_MAC_XAUI || + (txgbe_get_media_type(hw) == txgbe_media_type_copper && + device_type == TXGBE_ID_SFI_XAUI)) { + speed = TXGBE_LINK_SPEED_10GB_FULL; + } else if (device_type == TXGBE_ID_MAC_SGMII) { + speed = TXGBE_LINK_SPEED_1GB_FULL; + } else { + speed = hw->phy.autoneg_advertised; + if ((!speed) && (hw->mac.ops.get_link_capabilities)) { + TCALL(hw, mac.ops.get_link_capabilities, &speed, &autoneg); + /* setup the highest link when no autoneg */ + if (!autoneg) { + if (speed & TXGBE_LINK_SPEED_10GB_FULL) + speed = TXGBE_LINK_SPEED_10GB_FULL; + } + } + } + + TCALL(hw, mac.ops.setup_link, speed, txgbe_is_sfp(hw)); + + adapter->flags |= TXGBE_FLAG_NEED_LINK_UPDATE; + adapter->link_check_timeout = jiffies; + clear_bit(__TXGBE_IN_SFP_INIT, &adapter->state); +} + +static void txgbe_sfp_reset_eth_phy_subtask(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + u32 speed; + bool linkup = true; + u32 i = 0; + + if (!(adapter->flags2 & TXGBE_FLAG_NEED_ETH_PHY_RESET)) + return; + + adapter->flags2 &= ~TXGBE_FLAG_NEED_ETH_PHY_RESET; + + TCALL(hw, mac.ops.check_link, &speed, &linkup, false); + if (!linkup) { + txgbe_wr32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1, + 0xA000); + /* wait phy initialization done */ + for (i = 0; i < 
TXGBE_PHY_INIT_DONE_POLLING_TIME; i++) { + if ((txgbe_rd32_epcs(hw, + TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1) & + TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1_VR_RST) == 0) + break; + msleep(100); + } + } +} + +/** + * txgbe_service_timer - Timer Call-back + * @data: pointer to adapter cast into an unsigned long + **/ +static void txgbe_service_timer(struct timer_list *t) +{ + struct txgbe_adapter *adapter = from_timer(adapter, t, service_timer); + unsigned long next_event_offset; + struct txgbe_hw *hw = &adapter->hw; + + /* poll faster when waiting for link */ + if (adapter->flags & TXGBE_FLAG_NEED_LINK_UPDATE) { + if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_KR_KX_KX4) + next_event_offset = HZ; + else if (BOND_CHECK_LINK_MODE == 1) + next_event_offset = HZ / 100; + else + next_event_offset = HZ / 10; + } else + next_event_offset = HZ * 2; + + if ((rd32(&adapter->hw, TXGBE_MIS_PF_SM) == 1) && (hw->bus.lan_id)) { + adapter->flags2 |= TXGBE_FLAG2_PCIE_NEED_RECOVER; + } + + /* Reset the timer */ + mod_timer(&adapter->service_timer, next_event_offset + jiffies); + + txgbe_service_event_schedule(adapter); +} + +static void txgbe_reset_subtask(struct txgbe_adapter *adapter) +{ + u32 reset_flag = 0; + u32 value = 0; + + if (!(adapter->flags2 & (TXGBE_FLAG2_PF_RESET_REQUESTED | + TXGBE_FLAG2_DEV_RESET_REQUESTED | + TXGBE_FLAG2_GLOBAL_RESET_REQUESTED | + TXGBE_FLAG2_RESET_INTR_RECEIVED))) + return; + + /* If we're already down, just bail */ + if (test_bit(__TXGBE_DOWN, &adapter->state) || + test_bit(__TXGBE_REMOVING, &adapter->state)) + return; + + netdev_err(adapter->netdev, "Reset adapter\n"); + adapter->tx_timeout_count++; + + rtnl_lock(); + if (adapter->flags2 & TXGBE_FLAG2_GLOBAL_RESET_REQUESTED) { + reset_flag |= TXGBE_FLAG2_GLOBAL_RESET_REQUESTED; + adapter->flags2 &= ~TXGBE_FLAG2_GLOBAL_RESET_REQUESTED; + } + if (adapter->flags2 & TXGBE_FLAG2_DEV_RESET_REQUESTED) { + reset_flag |= TXGBE_FLAG2_DEV_RESET_REQUESTED; + adapter->flags2 &= ~TXGBE_FLAG2_DEV_RESET_REQUESTED; + } + if 
(adapter->flags2 & TXGBE_FLAG2_PF_RESET_REQUESTED) { + reset_flag |= TXGBE_FLAG2_PF_RESET_REQUESTED; + adapter->flags2 &= ~TXGBE_FLAG2_PF_RESET_REQUESTED; + } + + if (adapter->flags2 & TXGBE_FLAG2_RESET_INTR_RECEIVED) { + /* If there's a recovery already waiting, it takes + * precedence before starting a new reset sequence. + */ + adapter->flags2 &= ~TXGBE_FLAG2_RESET_INTR_RECEIVED; + value = rd32m(&adapter->hw, TXGBE_MIS_RST_ST, + TXGBE_MIS_RST_ST_DEV_RST_TYPE_MASK) >> + TXGBE_MIS_RST_ST_DEV_RST_TYPE_SHIFT; + if (value == TXGBE_MIS_RST_ST_DEV_RST_TYPE_SW_RST) { + adapter->hw.reset_type = TXGBE_SW_RESET; + /* errata 7 */ + if (txgbe_mng_present(&adapter->hw) && + adapter->hw.revision_id == TXGBE_SP_MPW) + adapter->flags2 |= + TXGBE_FLAG2_MNG_REG_ACCESS_DISABLED; + } else if (value == TXGBE_MIS_RST_ST_DEV_RST_TYPE_GLOBAL_RST) + adapter->hw.reset_type = TXGBE_GLOBAL_RESET; + adapter->hw.force_full_reset = true; + txgbe_reinit_locked(adapter); + adapter->hw.force_full_reset = false; + goto unlock; + } + + if (reset_flag & TXGBE_FLAG2_DEV_RESET_REQUESTED) { + /* Request a Device Reset + * + * This will start the chip's countdown to the actual full + * chip reset event, and a warning interrupt to be sent + * to all PFs, including the requestor. Our handler + * for the warning interrupt will deal with the shutdown + * and recovery of the switch setup. + */ + /*debug to up*/ + /*txgbe_dump(adapter);*/ + if (txgbe_mng_present(&adapter->hw)) { + txgbe_reset_hostif(&adapter->hw); + } else + wr32m(&adapter->hw, TXGBE_MIS_RST, + TXGBE_MIS_RST_SW_RST, TXGBE_MIS_RST_SW_RST); + + } else if (reset_flag & TXGBE_FLAG2_PF_RESET_REQUESTED) { + /*debug to up*/ + txgbe_reinit_locked(adapter); + } else if (reset_flag & TXGBE_FLAG2_GLOBAL_RESET_REQUESTED) { + /* Request a Global Reset + * + * This will start the chip's countdown to the actual full + * chip reset event, and a warning interrupt to be sent + * to all PFs, including the requestor. 
Our handler + * for the warning interrupt will deal with the shutdown + * and recovery of the switch setup. + */ + /*debug to up*/ + pci_save_state(adapter->pdev); + if (txgbe_mng_present(&adapter->hw)) { + txgbe_reset_hostif(&adapter->hw); + } else + wr32m(&adapter->hw, TXGBE_MIS_RST, + TXGBE_MIS_RST_GLOBAL_RST, + TXGBE_MIS_RST_GLOBAL_RST); + + } + +unlock: + rtnl_unlock(); +} + +static void txgbe_check_pcie_subtask(struct txgbe_adapter *adapter) +{ + if (!(adapter->flags2 & TXGBE_FLAG2_PCIE_NEED_RECOVER)) + return; + + e_info(probe, "do recovery\n"); + wr32m(&adapter->hw, TXGBE_MIS_PF_SM, + TXGBE_MIS_PF_SM_SM, 0); + adapter->flags2 &= ~TXGBE_FLAG2_PCIE_NEED_RECOVER; +} + +/** + * txgbe_service_task - manages and runs subtasks + * @work: pointer to work_struct containing our data + **/ +static void txgbe_service_task(struct work_struct *work) +{ + struct txgbe_adapter *adapter = container_of(work, + struct txgbe_adapter, + service_task); + if (TXGBE_REMOVED(adapter->hw.hw_addr)) { + if (!test_bit(__TXGBE_DOWN, &adapter->state)) { + rtnl_lock(); + txgbe_down(adapter); + rtnl_unlock(); + } + txgbe_service_event_complete(adapter); + return; + } + + if (adapter->flags2 & TXGBE_FLAG2_VXLAN_REREG_NEEDED) { + adapter->flags2 &= ~TXGBE_FLAG2_VXLAN_REREG_NEEDED; + udp_tunnel_get_rx_info(adapter->netdev); + } + + txgbe_check_pcie_subtask(adapter); + txgbe_reset_subtask(adapter); + txgbe_sfp_detection_subtask(adapter); + txgbe_sfp_link_config_subtask(adapter); + txgbe_sfp_reset_eth_phy_subtask(adapter); + txgbe_check_overtemp_subtask(adapter); + txgbe_watchdog_subtask(adapter); + txgbe_fdir_reinit_subtask(adapter); + txgbe_check_hang_subtask(adapter); + if (test_bit(__TXGBE_PTP_RUNNING, &adapter->state)) { + txgbe_ptp_overflow_check(adapter); + if (unlikely(adapter->flags & + TXGBE_FLAG_RX_HWTSTAMP_IN_REGISTER)) + txgbe_ptp_rx_hang(adapter); + } + + txgbe_service_event_complete(adapter); +} + +static u8 get_ipv6_proto(struct sk_buff *skb, int offset) +{ + struct ipv6hdr *hdr 
= (struct ipv6hdr *)(skb->data + offset); + u8 nexthdr = hdr->nexthdr; + + offset += sizeof(struct ipv6hdr); + + while (ipv6_ext_hdr(nexthdr)) { + struct ipv6_opt_hdr _hdr, *hp; + + if (nexthdr == NEXTHDR_NONE) + break; + + hp = skb_header_pointer(skb, offset, sizeof(_hdr), &_hdr); + if (!hp) + break; + + if (nexthdr == NEXTHDR_FRAGMENT) { + break; + } else if (nexthdr == NEXTHDR_AUTH) { + offset += ipv6_authlen(hp); + } else { + offset += ipv6_optlen(hp); + } + + nexthdr = hp->nexthdr; + } + + return nexthdr; +} + +union network_header { + struct iphdr *ipv4; + struct ipv6hdr *ipv6; + void *raw; +}; + +static txgbe_dptype encode_tx_desc_ptype(const struct txgbe_tx_buffer *first) +{ + struct sk_buff *skb = first->skb; + + u8 tun_prot = 0; + + u8 l4_prot = 0; + u8 ptype = 0; + + if (skb->encapsulation) { + union network_header hdr; + + switch (first->protocol) { + case __constant_htons(ETH_P_IP): + tun_prot = ip_hdr(skb)->protocol; + if (ip_hdr(skb)->frag_off & htons(IP_MF | IP_OFFSET)) + goto encap_frag; + ptype = TXGBE_PTYPE_TUN_IPV4; + break; + case __constant_htons(ETH_P_IPV6): + tun_prot = get_ipv6_proto(skb, skb_network_offset(skb)); + if (tun_prot == NEXTHDR_FRAGMENT) + goto encap_frag; + ptype = TXGBE_PTYPE_TUN_IPV6; + break; + default: + goto exit; + } + + if (tun_prot == IPPROTO_IPIP) { + hdr.raw = (void *)inner_ip_hdr(skb); + ptype |= TXGBE_PTYPE_PKT_IPIP; + } else if (tun_prot == IPPROTO_UDP) { + hdr.raw = (void *)inner_ip_hdr(skb); + /* fixme: VXLAN-GPE neither ETHER nor IP */ + + if (skb->inner_protocol_type != ENCAP_TYPE_ETHER || + skb->inner_protocol != htons(ETH_P_TEB)) { + ptype |= TXGBE_PTYPE_PKT_IG; + } else { + if (((struct ethhdr *) + skb_inner_mac_header(skb))->h_proto + == htons(ETH_P_8021Q)) { + ptype |= TXGBE_PTYPE_PKT_IGMV; + } else { + ptype |= TXGBE_PTYPE_PKT_IGM; + } + } + + } else if (tun_prot == IPPROTO_GRE) { + hdr.raw = (void *)inner_ip_hdr(skb); + if (skb->inner_protocol == htons(ETH_P_IP) || + skb->inner_protocol == 
htons(ETH_P_IPV6)) { + ptype |= TXGBE_PTYPE_PKT_IG; + } else { + if (((struct ethhdr *) + skb_inner_mac_header(skb))->h_proto + == htons(ETH_P_8021Q)) { + ptype |= TXGBE_PTYPE_PKT_IGMV; + } else { + ptype |= TXGBE_PTYPE_PKT_IGM; + } + } + } else { + goto exit; + } + + switch (hdr.ipv4->version) { + case IPVERSION: + l4_prot = hdr.ipv4->protocol; + if (hdr.ipv4->frag_off & htons(IP_MF | IP_OFFSET)) { + ptype |= TXGBE_PTYPE_TYP_IPFRAG; + goto exit; + } + break; + case 6: + l4_prot = get_ipv6_proto(skb, + skb_inner_network_offset(skb)); + ptype |= TXGBE_PTYPE_PKT_IPV6; + if (l4_prot == NEXTHDR_FRAGMENT) { + ptype |= TXGBE_PTYPE_TYP_IPFRAG; + goto exit; + } + break; + default: + goto exit; + } + } else { +encap_frag: + + switch (first->protocol) { + case __constant_htons(ETH_P_IP): + l4_prot = ip_hdr(skb)->protocol; + ptype = TXGBE_PTYPE_PKT_IP; + if (ip_hdr(skb)->frag_off & htons(IP_MF | IP_OFFSET)) { + ptype |= TXGBE_PTYPE_TYP_IPFRAG; + goto exit; + } + break; +#ifdef NETIF_F_IPV6_CSUM + case __constant_htons(ETH_P_IPV6): + l4_prot = get_ipv6_proto(skb, skb_network_offset(skb)); + ptype = TXGBE_PTYPE_PKT_IP | TXGBE_PTYPE_PKT_IPV6; + if (l4_prot == NEXTHDR_FRAGMENT) { + ptype |= TXGBE_PTYPE_TYP_IPFRAG; + goto exit; + } + break; +#endif /* NETIF_F_IPV6_CSUM */ + case __constant_htons(ETH_P_1588): + ptype = TXGBE_PTYPE_L2_TS; + goto exit; + case __constant_htons(ETH_P_FIP): + ptype = TXGBE_PTYPE_L2_FIP; + goto exit; + case __constant_htons(TXGBE_ETH_P_LLDP): + ptype = TXGBE_PTYPE_L2_LLDP; + goto exit; + case __constant_htons(TXGBE_ETH_P_CNM): + ptype = TXGBE_PTYPE_L2_CNM; + goto exit; + case __constant_htons(ETH_P_PAE): + ptype = TXGBE_PTYPE_L2_EAPOL; + goto exit; + case __constant_htons(ETH_P_ARP): + ptype = TXGBE_PTYPE_L2_ARP; + goto exit; + default: + ptype = TXGBE_PTYPE_L2_MAC; + goto exit; + } + + } + + switch (l4_prot) { + case IPPROTO_TCP: + ptype |= TXGBE_PTYPE_TYP_TCP; + break; + case IPPROTO_UDP: + ptype |= TXGBE_PTYPE_TYP_UDP; + break; + case IPPROTO_SCTP: + 
ptype |= TXGBE_PTYPE_TYP_SCTP; + break; + default: + ptype |= TXGBE_PTYPE_TYP_IP; + break; + } + +exit: + return txgbe_decode_ptype(ptype); +} + +static int txgbe_tso(struct txgbe_ring *tx_ring, + struct txgbe_tx_buffer *first, + u8 *hdr_len, txgbe_dptype dptype) +{ + struct sk_buff *skb = first->skb; + u32 vlan_macip_lens, type_tucmd; + u32 mss_l4len_idx, l4len; + struct tcphdr *tcph; + struct iphdr *iph; + u32 tunhdr_eiplen_tunlen = 0; + + u8 tun_prot = 0; + bool enc = skb->encapsulation; + + struct ipv6hdr *ipv6h; + + if (skb->ip_summed != CHECKSUM_PARTIAL) + return 0; + + if (!skb_is_gso(skb)) + return 0; + + if (skb_header_cloned(skb)) { + int err = pskb_expand_head(skb, 0, 0, GFP_ATOMIC); + if (err) + return err; + } + + iph = enc ? inner_ip_hdr(skb) : ip_hdr(skb); + + if (iph->version == 4) { + + tcph = enc ? inner_tcp_hdr(skb) : tcp_hdr(skb); + + iph->tot_len = 0; + iph->check = 0; + tcph->check = ~csum_tcpudp_magic(iph->saddr, + iph->daddr, 0, + IPPROTO_TCP, + 0); + first->tx_flags |= TXGBE_TX_FLAGS_TSO | + TXGBE_TX_FLAGS_CSUM | + TXGBE_TX_FLAGS_IPV4 | + TXGBE_TX_FLAGS_CC; + + } else if (iph->version == 6 && skb_is_gso_v6(skb)) { + + ipv6h = enc ? inner_ipv6_hdr(skb) : ipv6_hdr(skb); + tcph = enc ? inner_tcp_hdr(skb) : tcp_hdr(skb); + + ipv6h->payload_len = 0; + tcph->check = + ~csum_ipv6_magic(&ipv6h->saddr, + &ipv6h->daddr, + 0, IPPROTO_TCP, 0); + first->tx_flags |= TXGBE_TX_FLAGS_TSO | + TXGBE_TX_FLAGS_CSUM | + TXGBE_TX_FLAGS_CC; + } + + /* compute header lengths */ + + l4len = enc ? inner_tcp_hdrlen(skb) : tcp_hdrlen(skb); + *hdr_len = enc ? 
(skb_inner_transport_header(skb) - skb->data) + : skb_transport_offset(skb); + *hdr_len += l4len; + + /* update gso size and bytecount with header size */ + first->gso_segs = skb_shinfo(skb)->gso_segs; + first->bytecount += (first->gso_segs - 1) * *hdr_len; + + /* mss_l4len_id: use 0 as index for TSO */ + mss_l4len_idx = l4len << TXGBE_TXD_L4LEN_SHIFT; + mss_l4len_idx |= skb_shinfo(skb)->gso_size << TXGBE_TXD_MSS_SHIFT; + + /* vlan_macip_lens: HEADLEN, MACLEN, VLAN tag */ + + if (enc) { + switch (first->protocol) { + case __constant_htons(ETH_P_IP): + tun_prot = ip_hdr(skb)->protocol; + first->tx_flags |= TXGBE_TX_FLAGS_OUTER_IPV4; + break; + case __constant_htons(ETH_P_IPV6): + tun_prot = ipv6_hdr(skb)->nexthdr; + break; + default: + break; + } + switch (tun_prot) { + case IPPROTO_UDP: + tunhdr_eiplen_tunlen = TXGBE_TXD_TUNNEL_UDP; + tunhdr_eiplen_tunlen |= + ((skb_network_header_len(skb) >> 2) << + TXGBE_TXD_OUTER_IPLEN_SHIFT) | + (((skb_inner_mac_header(skb) - + skb_transport_header(skb)) >> 1) << + TXGBE_TXD_TUNNEL_LEN_SHIFT); + break; + case IPPROTO_GRE: + tunhdr_eiplen_tunlen = TXGBE_TXD_TUNNEL_GRE; + tunhdr_eiplen_tunlen |= + ((skb_network_header_len(skb) >> 2) << + TXGBE_TXD_OUTER_IPLEN_SHIFT) | + (((skb_inner_mac_header(skb) - + skb_transport_header(skb)) >> 1) << + TXGBE_TXD_TUNNEL_LEN_SHIFT); + break; + case IPPROTO_IPIP: + tunhdr_eiplen_tunlen = (((char *)inner_ip_hdr(skb)- + (char *)ip_hdr(skb)) >> 2) << + TXGBE_TXD_OUTER_IPLEN_SHIFT; + break; + default: + break; + } + + vlan_macip_lens = skb_inner_network_header_len(skb) >> 1; + } else + vlan_macip_lens = skb_network_header_len(skb) >> 1; + + vlan_macip_lens |= skb_network_offset(skb) << TXGBE_TXD_MACLEN_SHIFT; + vlan_macip_lens |= first->tx_flags & TXGBE_TX_FLAGS_VLAN_MASK; + + type_tucmd = dptype.ptype << 24; + txgbe_tx_ctxtdesc(tx_ring, vlan_macip_lens, tunhdr_eiplen_tunlen, + type_tucmd, mss_l4len_idx); + + return 1; +} + +static void txgbe_tx_csum(struct txgbe_ring *tx_ring, + struct 
txgbe_tx_buffer *first, txgbe_dptype dptype) +{ + struct sk_buff *skb = first->skb; + u32 vlan_macip_lens = 0; + u32 mss_l4len_idx = 0; + u32 tunhdr_eiplen_tunlen = 0; + + u8 tun_prot = 0; + + u32 type_tucmd; + + if (skb->ip_summed != CHECKSUM_PARTIAL) { + if (!(first->tx_flags & TXGBE_TX_FLAGS_HW_VLAN) && + !(first->tx_flags & TXGBE_TX_FLAGS_CC)) + return; + vlan_macip_lens = skb_network_offset(skb) << + TXGBE_TXD_MACLEN_SHIFT; + } else { + u8 l4_prot = 0; + + union { + struct iphdr *ipv4; + struct ipv6hdr *ipv6; + u8 *raw; + } network_hdr; + union { + struct tcphdr *tcphdr; + u8 *raw; + } transport_hdr; + + if (skb->encapsulation) { + network_hdr.raw = skb_inner_network_header(skb); + transport_hdr.raw = skb_inner_transport_header(skb); + vlan_macip_lens = skb_network_offset(skb) << + TXGBE_TXD_MACLEN_SHIFT; + switch (first->protocol) { + case __constant_htons(ETH_P_IP): + tun_prot = ip_hdr(skb)->protocol; + break; + case __constant_htons(ETH_P_IPV6): + tun_prot = ipv6_hdr(skb)->nexthdr; + break; + default: + if (unlikely(net_ratelimit())) { + dev_warn(tx_ring->dev, + "partial checksum but version=%d\n", + network_hdr.ipv4->version); + } + return; + } + switch (tun_prot) { + case IPPROTO_UDP: + tunhdr_eiplen_tunlen = TXGBE_TXD_TUNNEL_UDP; + tunhdr_eiplen_tunlen |= + ((skb_network_header_len(skb) >> 2) << + TXGBE_TXD_OUTER_IPLEN_SHIFT) | + (((skb_inner_mac_header(skb) - + skb_transport_header(skb)) >> 1) << + TXGBE_TXD_TUNNEL_LEN_SHIFT); + break; + case IPPROTO_GRE: + tunhdr_eiplen_tunlen = TXGBE_TXD_TUNNEL_GRE; + tunhdr_eiplen_tunlen |= + ((skb_network_header_len(skb) >> 2) << + TXGBE_TXD_OUTER_IPLEN_SHIFT) | + (((skb_inner_mac_header(skb) - + skb_transport_header(skb)) >> 1) << + TXGBE_TXD_TUNNEL_LEN_SHIFT); + break; + case IPPROTO_IPIP: + tunhdr_eiplen_tunlen = + (((char *)inner_ip_hdr(skb)- + (char *)ip_hdr(skb)) >> 2) << + TXGBE_TXD_OUTER_IPLEN_SHIFT; + break; + default: + break; + } + + } else { + network_hdr.raw = skb_network_header(skb); + 
transport_hdr.raw = skb_transport_header(skb); + vlan_macip_lens = skb_network_offset(skb) << + TXGBE_TXD_MACLEN_SHIFT; + } + + switch (network_hdr.ipv4->version) { + case IPVERSION: + vlan_macip_lens |= + (transport_hdr.raw - network_hdr.raw) >> 1; + l4_prot = network_hdr.ipv4->protocol; + break; + case 6: + vlan_macip_lens |= + (transport_hdr.raw - network_hdr.raw) >> 1; + l4_prot = network_hdr.ipv6->nexthdr; + break; + default: + break; + } + + switch (l4_prot) { + case IPPROTO_TCP: + + mss_l4len_idx = (transport_hdr.tcphdr->doff * 4) << + TXGBE_TXD_L4LEN_SHIFT; + break; + case IPPROTO_SCTP: + mss_l4len_idx = sizeof(struct sctphdr) << + TXGBE_TXD_L4LEN_SHIFT; + break; + case IPPROTO_UDP: + mss_l4len_idx = sizeof(struct udphdr) << + TXGBE_TXD_L4LEN_SHIFT; + break; + default: + break; + } + + /* update TX checksum flag */ + first->tx_flags |= TXGBE_TX_FLAGS_CSUM; + } + first->tx_flags |= TXGBE_TX_FLAGS_CC; + /* vlan_macip_lens: MACLEN, VLAN tag */ + vlan_macip_lens |= first->tx_flags & TXGBE_TX_FLAGS_VLAN_MASK; + + type_tucmd = dptype.ptype << 24; + txgbe_tx_ctxtdesc(tx_ring, vlan_macip_lens, tunhdr_eiplen_tunlen, + type_tucmd, mss_l4len_idx); +} + +static u32 txgbe_tx_cmd_type(u32 tx_flags) +{ + /* set type for advanced descriptor with frame checksum insertion */ + u32 cmd_type = TXGBE_TXD_DTYP_DATA | + TXGBE_TXD_IFCS; + + /* set HW vlan bit if vlan is present */ + cmd_type |= TXGBE_SET_FLAG(tx_flags, TXGBE_TX_FLAGS_HW_VLAN, + TXGBE_TXD_VLE); + + /* set segmentation enable bits for TSO/FSO */ + cmd_type |= TXGBE_SET_FLAG(tx_flags, TXGBE_TX_FLAGS_TSO, + TXGBE_TXD_TSE); + + /* set timestamp bit if present */ + cmd_type |= TXGBE_SET_FLAG(tx_flags, TXGBE_TX_FLAGS_TSTAMP, + TXGBE_TXD_MAC_TSTAMP); + + cmd_type |= TXGBE_SET_FLAG(tx_flags, TXGBE_TX_FLAGS_LINKSEC, + TXGBE_TXD_LINKSEC); + + return cmd_type; +} + +static void txgbe_tx_olinfo_status(union txgbe_tx_desc *tx_desc, + u32 tx_flags, unsigned int paylen) +{ + u32 olinfo_status = paylen << TXGBE_TXD_PAYLEN_SHIFT; + 
+ /* enable L4 checksum for TSO and TX checksum offload */ + olinfo_status |= TXGBE_SET_FLAG(tx_flags, + TXGBE_TX_FLAGS_CSUM, + TXGBE_TXD_L4CS); + + /* enable IPv4 checksum for TSO */ + olinfo_status |= TXGBE_SET_FLAG(tx_flags, + TXGBE_TX_FLAGS_IPV4, + TXGBE_TXD_IIPCS); + /* enable outer IPv4 checksum for TSO */ + olinfo_status |= TXGBE_SET_FLAG(tx_flags, + TXGBE_TX_FLAGS_OUTER_IPV4, + TXGBE_TXD_EIPCS); + /* + * Check Context must be set if Tx switch is enabled, which it + * always is for the case where virtual functions are running + */ + olinfo_status |= TXGBE_SET_FLAG(tx_flags, + TXGBE_TX_FLAGS_CC, + TXGBE_TXD_CC); + + olinfo_status |= TXGBE_SET_FLAG(tx_flags, + TXGBE_TX_FLAGS_IPSEC, + TXGBE_TXD_IPSEC); + + tx_desc->read.olinfo_status = cpu_to_le32(olinfo_status); +} + +static int __txgbe_maybe_stop_tx(struct txgbe_ring *tx_ring, u16 size) +{ + netif_stop_subqueue(tx_ring->netdev, tx_ring->queue_index); + + /* Herbert's original patch had: + * smp_mb__after_netif_stop_queue(); + * but since that doesn't exist yet, just open code it. + */ + smp_mb(); + + /* We need to check again in case another CPU has just + * made room available. + */ + if (likely(txgbe_desc_unused(tx_ring) < size)) + return -EBUSY; + + /* A reprieve!
- use start_queue because it doesn't call schedule */ + netif_start_subqueue(tx_ring->netdev, tx_ring->queue_index); + ++tx_ring->tx_stats.restart_queue; + return 0; +} + +static inline int txgbe_maybe_stop_tx(struct txgbe_ring *tx_ring, u16 size) +{ + if (likely(txgbe_desc_unused(tx_ring) >= size)) + return 0; + + return __txgbe_maybe_stop_tx(tx_ring, size); +} + +#define TXGBE_TXD_CMD (TXGBE_TXD_EOP | \ + TXGBE_TXD_RS) + +static int txgbe_tx_map(struct txgbe_ring *tx_ring, + struct txgbe_tx_buffer *first, + const u8 hdr_len) +{ + struct sk_buff *skb = first->skb; + struct txgbe_tx_buffer *tx_buffer; + union txgbe_tx_desc *tx_desc; + skb_frag_t *frag; + dma_addr_t dma; + unsigned int data_len, size; + u32 tx_flags = first->tx_flags; + u32 cmd_type = txgbe_tx_cmd_type(tx_flags); + u16 i = tx_ring->next_to_use; + + tx_desc = TXGBE_TX_DESC(tx_ring, i); + + txgbe_tx_olinfo_status(tx_desc, tx_flags, skb->len - hdr_len); + + size = skb_headlen(skb); + data_len = skb->data_len; + + dma = dma_map_single(tx_ring->dev, skb->data, size, DMA_TO_DEVICE); + + tx_buffer = first; + + for (frag = &skb_shinfo(skb)->frags[0];; frag++) { + if (dma_mapping_error(tx_ring->dev, dma)) + goto dma_error; + + /* record length, and DMA address */ + dma_unmap_len_set(tx_buffer, len, size); + dma_unmap_addr_set(tx_buffer, dma, dma); + + tx_desc->read.buffer_addr = cpu_to_le64(dma); + + while (unlikely(size > TXGBE_MAX_DATA_PER_TXD)) { + tx_desc->read.cmd_type_len = + cpu_to_le32(cmd_type ^ TXGBE_MAX_DATA_PER_TXD); + + i++; + tx_desc++; + if (i == tx_ring->count) { + tx_desc = TXGBE_TX_DESC(tx_ring, 0); + i = 0; + } + tx_desc->read.olinfo_status = 0; + + dma += TXGBE_MAX_DATA_PER_TXD; + size -= TXGBE_MAX_DATA_PER_TXD; + + tx_desc->read.buffer_addr = cpu_to_le64(dma); + } + + if (likely(!data_len)) + break; + + tx_desc->read.cmd_type_len = cpu_to_le32(cmd_type ^ size); + + i++; + tx_desc++; + if (i == tx_ring->count) { + tx_desc = TXGBE_TX_DESC(tx_ring, 0); + i = 0; + } + 
tx_desc->read.olinfo_status = 0; + + size = skb_frag_size(frag); + + data_len -= size; + + dma = skb_frag_dma_map(tx_ring->dev, frag, 0, size, + DMA_TO_DEVICE); + + tx_buffer = &tx_ring->tx_buffer_info[i]; + } + + /* write last descriptor with RS and EOP bits */ + cmd_type |= size | TXGBE_TXD_CMD; + tx_desc->read.cmd_type_len = cpu_to_le32(cmd_type); + + netdev_tx_sent_queue(txring_txq(tx_ring), first->bytecount); + + /* set the timestamp */ + first->time_stamp = jiffies; + + /* + * Force memory writes to complete before letting h/w know there + * are new descriptors to fetch. (Only applicable for weak-ordered + * memory model archs, such as IA-64). + * + * We also need this memory barrier to make certain all of the + * status bits have been updated before next_to_watch is written. + */ + wmb(); + + /* set next_to_watch value indicating a packet is present */ + first->next_to_watch = tx_desc; + + i++; + if (i == tx_ring->count) + i = 0; + + tx_ring->next_to_use = i; + + txgbe_maybe_stop_tx(tx_ring, DESC_NEEDED); + + skb_tx_timestamp(skb); + + if (netif_xmit_stopped(txring_txq(tx_ring)) || !netdev_xmit_more()) { + writel(i, tx_ring->tail); + /* The following mmiowb() is required on certain + * architectures (IA64/Altix in particular) in order to + * synchronize the I/O calls with respect to a spin lock. This + * is because the wmb() on those architectures does not + * guarantee anything for posted I/O writes. + * + * Note that the associated spin_unlock() is not within the + * driver code, but in the networking core stack.
+ */ + mmiowb(); + } + + return 0; +dma_error: + dev_err(tx_ring->dev, "TX DMA map failed\n"); + + /* clear dma mappings for failed tx_buffer_info map */ + for (;;) { + tx_buffer = &tx_ring->tx_buffer_info[i]; + if (dma_unmap_len(tx_buffer, len)) + dma_unmap_page(tx_ring->dev, + dma_unmap_addr(tx_buffer, dma), + dma_unmap_len(tx_buffer, len), + DMA_TO_DEVICE); + dma_unmap_len_set(tx_buffer, len, 0); + if (tx_buffer == first) + break; + if (i == 0) + i += tx_ring->count; + i--; + } + + dev_kfree_skb_any(first->skb); + first->skb = NULL; + + tx_ring->next_to_use = i; + + return -1; +} + +static void txgbe_atr(struct txgbe_ring *ring, + struct txgbe_tx_buffer *first, + txgbe_dptype dptype) +{ + struct txgbe_q_vector *q_vector = ring->q_vector; + union txgbe_atr_hash_dword input = { .dword = 0 }; + union txgbe_atr_hash_dword common = { .dword = 0 }; + union network_header hdr; + struct tcphdr *th; + + /* if ring doesn't have an interrupt vector, cannot perform ATR */ + if (!q_vector) + return; + + /* do nothing if sampling is disabled */ + if (!ring->atr_sample_rate) + return; + + ring->atr_count++; + + if (dptype.etype) { + if (TXGBE_PTYPE_TYP_TCP != TXGBE_PTYPE_TYPL4(dptype.ptype)) + return; + hdr.raw = (void *)skb_inner_network_header(first->skb); + th = inner_tcp_hdr(first->skb); + } else { + if (TXGBE_PTYPE_PKT_IP != TXGBE_PTYPE_PKT(dptype.ptype) || + TXGBE_PTYPE_TYP_TCP != TXGBE_PTYPE_TYPL4(dptype.ptype)) + return; + hdr.raw = (void *)skb_network_header(first->skb); + th = tcp_hdr(first->skb); + } + + /* skip this packet since it is invalid or the socket is closing */ + if (!th || th->fin) + return; + + /* sample on all syn packets or once every atr sample count */ + if (!th->syn && (ring->atr_count < ring->atr_sample_rate)) + return; + + /* reset sample count */ + ring->atr_count = 0; + + /* + * src and dst are inverted, think how the receiver sees them + * + * The input is broken into two sections, a non-compressed section + * containing vm_pool, vlan_id,
and flow_type. The rest of the data + * is XORed together and stored in the compressed dword. + */ + input.formatted.vlan_id = htons((u16)dptype.ptype); + + /* + * since src port and flex bytes occupy the same word XOR them together + * and write the value to source port portion of compressed dword + */ + if (first->tx_flags & TXGBE_TX_FLAGS_SW_VLAN) + common.port.src ^= th->dest ^ first->skb->protocol; + else if (first->tx_flags & TXGBE_TX_FLAGS_HW_VLAN) + common.port.src ^= th->dest ^ first->skb->vlan_proto; + else + common.port.src ^= th->dest ^ first->protocol; + common.port.dst ^= th->source; + + if (TXGBE_PTYPE_PKT_IPV6 & TXGBE_PTYPE_PKT(dptype.ptype)) { + input.formatted.flow_type = TXGBE_ATR_FLOW_TYPE_TCPV6; + common.ip ^= hdr.ipv6->saddr.s6_addr32[0] ^ + hdr.ipv6->saddr.s6_addr32[1] ^ + hdr.ipv6->saddr.s6_addr32[2] ^ + hdr.ipv6->saddr.s6_addr32[3] ^ + hdr.ipv6->daddr.s6_addr32[0] ^ + hdr.ipv6->daddr.s6_addr32[1] ^ + hdr.ipv6->daddr.s6_addr32[2] ^ + hdr.ipv6->daddr.s6_addr32[3]; + } else { + input.formatted.flow_type = TXGBE_ATR_FLOW_TYPE_TCPV4; + common.ip ^= hdr.ipv4->saddr ^ hdr.ipv4->daddr; + } + + /* This assumes the Rx queue and Tx queue are bound to the same CPU */ + txgbe_fdir_add_signature_filter(&q_vector->adapter->hw, + input, common, ring->queue_index); +} + +/** + * txgbe_skb_pad_nonzero - pad the tail of an skb with a nonzero byte + * @skb: buffer to pad + * @pad: space to pad + * + * Ensure that a buffer is followed by a padding area that is filled + * with a nonzero byte (0x1). Used by network drivers which may DMA + * or transfer data beyond the buffer end onto the wire. + * + * May return error in out of memory cases. The skb is freed on error. + */ + +int txgbe_skb_pad_nonzero(struct sk_buff *skb, int pad) +{ + int err; + int ntail; + + /* If the skbuff is non-linear tailroom is always zero.
*/ + if (!skb_cloned(skb) && skb_tailroom(skb) >= pad) { + memset(skb->data+skb->len, 0x1, pad); + return 0; + } + + ntail = skb->data_len + pad - (skb->end - skb->tail); + if (likely(skb_cloned(skb) || ntail > 0)) { + err = pskb_expand_head(skb, 0, ntail, GFP_ATOMIC); + if (unlikely(err)) + goto free_skb; + } + + /* FIXME: The use of this function with non-linear skb's really needs + * to be audited. + */ + err = skb_linearize(skb); + if (unlikely(err)) + goto free_skb; + + memset(skb->data + skb->len, 0x1, pad); + return 0; + +free_skb: + kfree_skb(skb); + return err; +} + +netdev_tx_t txgbe_xmit_frame_ring(struct sk_buff *skb, + struct txgbe_adapter *adapter, + struct txgbe_ring *tx_ring) +{ + struct txgbe_tx_buffer *first; + int tso; + u32 tx_flags = 0; + unsigned short f; + u16 count = TXD_USE_COUNT(skb_headlen(skb)); + __be16 protocol = skb->protocol; + u8 hdr_len = 0; + txgbe_dptype dptype; + + /* work around hw errata 3 */ + u16 _llcLen, *llcLen; + llcLen = skb_header_pointer(skb, ETH_HLEN - 2, sizeof(u16), &_llcLen); + if (*llcLen == 0x3 || *llcLen == 0x4 || *llcLen == 0x5) { + if (txgbe_skb_pad_nonzero(skb, ETH_ZLEN - skb->len)) + return -ENOMEM; + __skb_put(skb, ETH_ZLEN - skb->len); + } + + /* + * need: 1 descriptor per page * PAGE_SIZE/TXGBE_MAX_DATA_PER_TXD, + * + 1 desc for skb_headlen/TXGBE_MAX_DATA_PER_TXD, + * + 2 desc gap to keep tail from touching head, + * + 1 desc for context descriptor, + * otherwise try next time + */ + for (f = 0; f < skb_shinfo(skb)->nr_frags; f++) + count += TXD_USE_COUNT(skb_frag_size(&skb_shinfo(skb)-> + frags[f])); + + if (txgbe_maybe_stop_tx(tx_ring, count + 3)) { + tx_ring->tx_stats.tx_busy++; + return NETDEV_TX_BUSY; + } + + /* record the location of the first descriptor for this packet */ + first = &tx_ring->tx_buffer_info[tx_ring->next_to_use]; + first->skb = skb; + first->bytecount = skb->len; + first->gso_segs = 1; + + /* if we have a HW VLAN tag being added default to the HW one */ + if 
(skb_vlan_tag_present(skb)) { + tx_flags |= skb_vlan_tag_get(skb) << TXGBE_TX_FLAGS_VLAN_SHIFT; + tx_flags |= TXGBE_TX_FLAGS_HW_VLAN; + /* else if it is a SW VLAN check the next protocol and store the tag */ + } else if (protocol == htons(ETH_P_8021Q)) { + struct vlan_hdr *vhdr, _vhdr; + vhdr = skb_header_pointer(skb, ETH_HLEN, sizeof(_vhdr), &_vhdr); + if (!vhdr) + goto out_drop; + + protocol = vhdr->h_vlan_encapsulated_proto; + tx_flags |= ntohs(vhdr->h_vlan_TCI) << + TXGBE_TX_FLAGS_VLAN_SHIFT; + tx_flags |= TXGBE_TX_FLAGS_SW_VLAN; + } + + if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) && + adapter->ptp_clock) { + if (!test_and_set_bit_lock(__TXGBE_PTP_TX_IN_PROGRESS, + &adapter->state)) { + skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS; + tx_flags |= TXGBE_TX_FLAGS_TSTAMP; + + /* schedule check for Tx timestamp */ + adapter->ptp_tx_skb = skb_get(skb); + adapter->ptp_tx_start = jiffies; + schedule_work(&adapter->ptp_tx_work); + } else { + adapter->tx_hwtstamp_skipped++; + } + } + + if ((adapter->flags & TXGBE_FLAG_DCB_ENABLED) && + ((tx_flags & (TXGBE_TX_FLAGS_HW_VLAN | TXGBE_TX_FLAGS_SW_VLAN)) || + (skb->priority != TC_PRIO_CONTROL))) { + tx_flags &= ~TXGBE_TX_FLAGS_VLAN_PRIO_MASK; + tx_flags |= skb->priority << + TXGBE_TX_FLAGS_VLAN_PRIO_SHIFT; + if (tx_flags & TXGBE_TX_FLAGS_SW_VLAN) { + struct vlan_ethhdr *vhdr; + if (skb_header_cloned(skb) && + pskb_expand_head(skb, 0, 0, GFP_ATOMIC)) + goto out_drop; + vhdr = (struct vlan_ethhdr *)skb->data; + vhdr->h_vlan_TCI = htons(tx_flags >> + TXGBE_TX_FLAGS_VLAN_SHIFT); + } else { + tx_flags |= TXGBE_TX_FLAGS_HW_VLAN; + } + } + + /* record initial flags and protocol */ + first->tx_flags = tx_flags; + first->protocol = protocol; + + dptype = encode_tx_desc_ptype(first); + + tso = txgbe_tso(tx_ring, first, &hdr_len, dptype); + if (tso < 0) + goto out_drop; + else if (!tso) + txgbe_tx_csum(tx_ring, first, dptype); + + /* add the ATR filter if ATR is on */ + if (test_bit(__TXGBE_TX_FDIR_INIT_DONE, 
&tx_ring->state)) + txgbe_atr(tx_ring, first, dptype); + + if (txgbe_tx_map(tx_ring, first, hdr_len)) + goto cleanup_tx_tstamp; + + return NETDEV_TX_OK; + +out_drop: + dev_kfree_skb_any(first->skb); + first->skb = NULL; + +cleanup_tx_tstamp: + if (unlikely(tx_flags & TXGBE_TX_FLAGS_TSTAMP)) { + dev_kfree_skb_any(adapter->ptp_tx_skb); + adapter->ptp_tx_skb = NULL; + cancel_work_sync(&adapter->ptp_tx_work); + clear_bit_unlock(__TXGBE_PTP_TX_IN_PROGRESS, &adapter->state); + } + + return NETDEV_TX_OK; +} + +static netdev_tx_t txgbe_xmit_frame(struct sk_buff *skb, + struct net_device *netdev) +{ + struct txgbe_adapter *adapter = netdev_priv(netdev); + struct txgbe_ring *tx_ring; + unsigned int r_idx = skb->queue_mapping; + + if (!netif_carrier_ok(netdev)) { + dev_kfree_skb_any(skb); + return NETDEV_TX_OK; + } + + /* + * The minimum packet size for olinfo paylen is 17 so pad the skb + * in order to meet this minimum size requirement. + */ + if (skb_put_padto(skb, 17)) + return NETDEV_TX_OK; + + if (r_idx >= adapter->num_tx_queues) + r_idx = r_idx % adapter->num_tx_queues; + tx_ring = adapter->tx_ring[r_idx]; + + return txgbe_xmit_frame_ring(skb, adapter, tx_ring); +} + +/** + * txgbe_set_mac - Change the Ethernet Address of the NIC + * @netdev: network interface device structure + * @p: pointer to an address structure + * + * Returns 0 on success, negative on failure + **/ +static int txgbe_set_mac(struct net_device *netdev, void *p) +{ + struct txgbe_adapter *adapter = netdev_priv(netdev); + struct txgbe_hw *hw = &adapter->hw; + struct sockaddr *addr = p; + + if (!is_valid_ether_addr(addr->sa_data)) + return -EADDRNOTAVAIL; + + txgbe_del_mac_filter(adapter, hw->mac.addr, VMDQ_P(0)); + memcpy(netdev->dev_addr, addr->sa_data, netdev->addr_len); + memcpy(hw->mac.addr, addr->sa_data, netdev->addr_len); + + txgbe_mac_set_default_filter(adapter, hw->mac.addr); + + return 0; +} + +/** + * txgbe_add_sanmac_netdev - Add the SAN MAC address to the corresponding + * 
netdev->dev_addr_list + * @dev: network interface device structure + * + * Returns non-zero on failure + **/ +static int txgbe_add_sanmac_netdev(struct net_device *dev) +{ + int err = 0; + struct txgbe_adapter *adapter = netdev_priv(dev); + struct txgbe_hw *hw = &adapter->hw; + + if (is_valid_ether_addr(hw->mac.san_addr)) { + rtnl_lock(); + err = dev_addr_add(dev, hw->mac.san_addr, + NETDEV_HW_ADDR_T_SAN); + rtnl_unlock(); + + /* update SAN MAC vmdq pool selection */ + TCALL(hw, mac.ops.set_vmdq_san_mac, VMDQ_P(0)); + } + return err; +} + +/** + * txgbe_del_sanmac_netdev - Removes the SAN MAC address from the corresponding + * netdev->dev_addr_list + * @dev: network interface device structure + * + * Returns non-zero on failure + **/ +static int txgbe_del_sanmac_netdev(struct net_device *dev) +{ + int err = 0; + struct txgbe_adapter *adapter = netdev_priv(dev); + struct txgbe_mac_info *mac = &adapter->hw.mac; + + if (is_valid_ether_addr(mac->san_addr)) { + rtnl_lock(); + err = dev_addr_del(dev, mac->san_addr, NETDEV_HW_ADDR_T_SAN); + rtnl_unlock(); + } + return err; +} + +static int txgbe_mii_ioctl(struct net_device *netdev, struct ifreq *ifr, + int cmd) +{ + struct mii_ioctl_data *mii = (struct mii_ioctl_data *) &ifr->ifr_data; + int prtad, devad, ret; + struct txgbe_adapter *adapter = netdev_priv(netdev); + struct txgbe_hw *hw = &adapter->hw; + u16 value = 0; + + prtad = (mii->phy_id & MDIO_PHY_ID_PRTAD) >> 5; + devad = (mii->phy_id & MDIO_PHY_ID_DEVAD); + + if (cmd == SIOCGMIIREG) { + ret = txgbe_read_mdio(&hw->phy_dev, prtad, devad, mii->reg_num, + &value); + if (ret < 0) + return ret; + mii->val_out = value; + return 0; + } else { + return txgbe_write_mdio(&hw->phy_dev, prtad, devad, + mii->reg_num, mii->val_in); + } +} + +static int txgbe_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd) +{ + struct txgbe_adapter *adapter = netdev_priv(netdev); + + switch (cmd) { + case SIOCGHWTSTAMP: + return txgbe_ptp_get_ts_config(adapter, ifr); + case
SIOCSHWTSTAMP: + return txgbe_ptp_set_ts_config(adapter, ifr); + case SIOCGMIIREG: + case SIOCSMIIREG: + return txgbe_mii_ioctl(netdev, ifr, cmd); + default: + return -EOPNOTSUPP; + } +} + +/* txgbe_validate_rtr - verify 802.1Qp to Rx packet buffer mapping is valid. + * @adapter: pointer to txgbe_adapter + * @tc: number of traffic classes currently enabled + * + * Configure a valid 802.1Qp to Rx packet buffer mapping, i.e. confirm + * 802.1Q priority maps to a packet buffer that exists. + */ +static void txgbe_validate_rtr(struct txgbe_adapter *adapter, u8 tc) +{ + struct txgbe_hw *hw = &adapter->hw; + u32 reg, rsave; + + reg = rd32(hw, TXGBE_RDB_UP2TC); + rsave = reg; + if (reg != rsave) + wr32(hw, TXGBE_RDB_UP2TC, reg); + + return; +} + +/** + * txgbe_set_prio_tc_map - Configure netdev prio tc map + * @adapter: Pointer to adapter struct + * + * Populate the netdev user priority to tc map + */ +static void txgbe_set_prio_tc_map(struct txgbe_adapter __maybe_unused *adapter) +{ + UNREFERENCED_PARAMETER(adapter); +} + +/** + * txgbe_setup_tc - routine to configure net_device for multiple traffic + * classes. + * + * @dev: net device to configure + * @tc: number of traffic classes to enable + */ +int txgbe_setup_tc(struct net_device *dev, u8 tc) +{ + struct txgbe_adapter *adapter = netdev_priv(dev); + + if (tc && adapter->num_vmdqs > TXGBE_MAX_DCBMACVLANS) + return -EBUSY; + + /* Hardware has to reinitialize queues and interrupts to + * match packet buffer alignment. Unfortunately, the + * hardware is not flexible enough to do this dynamically.
+ */ + if (netif_running(dev)) + txgbe_close(dev); + else + txgbe_reset(adapter); + + txgbe_clear_interrupt_scheme(adapter); + + if (tc) { + netdev_set_num_tc(dev, tc); + txgbe_set_prio_tc_map(adapter); + } else { + netdev_reset_tc(dev); + } + + txgbe_validate_rtr(adapter, tc); + + txgbe_init_interrupt_scheme(adapter); + if (netif_running(dev)) + txgbe_open(dev); + + return 0; +} + +static int txgbe_setup_tc_mqprio(struct net_device *dev, + struct tc_mqprio_qopt *mqprio) +{ + mqprio->hw = TC_MQPRIO_HW_OFFLOAD_TCS; + return txgbe_setup_tc(dev, mqprio->num_tc); +} + +static int __txgbe_setup_tc(struct net_device *dev, enum tc_setup_type type, + void *type_data) +{ + switch (type) { + case TC_SETUP_QDISC_MQPRIO: + return txgbe_setup_tc_mqprio(dev, type_data); + default: + return -EOPNOTSUPP; + } +} + +void txgbe_do_reset(struct net_device *netdev) +{ + struct txgbe_adapter *adapter = netdev_priv(netdev); + + if (netif_running(netdev)) + txgbe_reinit_locked(adapter); + else + txgbe_reset(adapter); +} + +static netdev_features_t txgbe_fix_features(struct net_device *netdev, + netdev_features_t features) +{ + struct txgbe_adapter *adapter = netdev_priv(netdev); + + /* If Rx checksum is disabled, then RSC/LRO should also be disabled */ + if (!(features & NETIF_F_RXCSUM)) + features &= ~NETIF_F_LRO; + + /* Turn off LRO if not RSC capable */ + if (!(adapter->flags2 & TXGBE_FLAG2_RSC_CAPABLE)) + features &= ~NETIF_F_LRO; + + return features; +} + +static int txgbe_set_features(struct net_device *netdev, + netdev_features_t features) +{ + struct txgbe_adapter *adapter = netdev_priv(netdev); + bool need_reset = false; + + /* Make sure RSC matches LRO, reset if change */ + if (!(features & NETIF_F_LRO)) { + if (adapter->flags2 & TXGBE_FLAG2_RSC_ENABLED) + need_reset = true; + adapter->flags2 &= ~TXGBE_FLAG2_RSC_ENABLED; + } else if ((adapter->flags2 & TXGBE_FLAG2_RSC_CAPABLE) && + !(adapter->flags2 & TXGBE_FLAG2_RSC_ENABLED)) { + if (adapter->rx_itr_setting == 1 || + 
adapter->rx_itr_setting > TXGBE_MIN_RSC_ITR) { + adapter->flags2 |= TXGBE_FLAG2_RSC_ENABLED; + need_reset = true; + } else if ((netdev->features ^ features) & NETIF_F_LRO) { + + e_info(probe, "rx-usecs set too low, " + "disabling RSC\n"); + } + } + + /* + * Check if Flow Director n-tuple support was enabled or disabled. If + * the state changed, we need to reset. + */ + switch (features & NETIF_F_NTUPLE) { + case NETIF_F_NTUPLE: + /* turn off ATR, enable perfect filters and reset */ + if (!(adapter->flags & TXGBE_FLAG_FDIR_PERFECT_CAPABLE)) + need_reset = true; + + adapter->flags &= ~TXGBE_FLAG_FDIR_HASH_CAPABLE; + adapter->flags |= TXGBE_FLAG_FDIR_PERFECT_CAPABLE; + break; + default: + /* turn off perfect filters, enable ATR and reset */ + if (adapter->flags & TXGBE_FLAG_FDIR_PERFECT_CAPABLE) + need_reset = true; + + adapter->flags &= ~TXGBE_FLAG_FDIR_PERFECT_CAPABLE; + + /* We cannot enable ATR if VMDq is enabled */ + if (adapter->flags & TXGBE_FLAG_VMDQ_ENABLED) + break; + + /* We cannot enable ATR if we have 2 or more traffic classes */ + if (netdev_get_num_tc(netdev) > 1) + break; + + /* We cannot enable ATR if RSS is disabled */ + if (adapter->ring_feature[RING_F_RSS].limit <= 1) + break; + + /* A sample rate of 0 indicates ATR disabled */ + if (!adapter->atr_sample_rate) + break; + + adapter->flags |= TXGBE_FLAG_FDIR_HASH_CAPABLE; + break; + } + + if (features & NETIF_F_HW_VLAN_CTAG_RX) + txgbe_vlan_strip_enable(adapter); + else + txgbe_vlan_strip_disable(adapter); + + if (adapter->flags & TXGBE_FLAG_VXLAN_OFFLOAD_CAPABLE && + features & NETIF_F_RXCSUM) { + if (!need_reset) + adapter->flags2 |= TXGBE_FLAG2_VXLAN_REREG_NEEDED; + } else { + txgbe_clear_vxlan_port(adapter); + } + + if (features & NETIF_F_RXHASH) { + if (!(adapter->flags2 & TXGBE_FLAG2_RSS_ENABLED)) { + wr32m(&adapter->hw, TXGBE_RDB_RA_CTL, + TXGBE_RDB_RA_CTL_RSS_EN, TXGBE_RDB_RA_CTL_RSS_EN); + adapter->flags2 |= TXGBE_FLAG2_RSS_ENABLED; + } + } else { + if (adapter->flags2 & 
TXGBE_FLAG2_RSS_ENABLED) { + wr32m(&adapter->hw, TXGBE_RDB_RA_CTL, + TXGBE_RDB_RA_CTL_RSS_EN, ~TXGBE_RDB_RA_CTL_RSS_EN); + adapter->flags2 &= ~TXGBE_FLAG2_RSS_ENABLED; + } + } + + if (need_reset) + txgbe_do_reset(netdev); + + return 0; +} + +/** + * txgbe_add_udp_tunnel_port - Get notifications about adding UDP tunnel ports + * @dev: The port's netdev + * @ti: Tunnel endpoint information + **/ +static void txgbe_add_udp_tunnel_port(struct net_device *dev, + struct udp_tunnel_info *ti) +{ + struct txgbe_adapter *adapter = netdev_priv(dev); + struct txgbe_hw *hw = &adapter->hw; + __be16 port = ti->port; + + if (ti->sa_family != AF_INET) + return; + + switch (ti->type) { + case UDP_TUNNEL_TYPE_VXLAN: + if (!(adapter->flags & TXGBE_FLAG_VXLAN_OFFLOAD_CAPABLE)) + return; + + if (adapter->vxlan_port == port) + return; + + if (adapter->vxlan_port) { + netdev_info(dev, + "VXLAN port %d set, not adding port %d\n", + ntohs(adapter->vxlan_port), + ntohs(port)); + return; + } + + adapter->vxlan_port = port; + wr32(hw, TXGBE_CFG_VXLAN, port); + break; + case UDP_TUNNEL_TYPE_GENEVE: + if (adapter->geneve_port == port) + return; + + if (adapter->geneve_port) { + netdev_info(dev, + "GENEVE port %d set, not adding port %d\n", + ntohs(adapter->geneve_port), + ntohs(port)); + return; + } + + adapter->geneve_port = port; + wr32(hw, TXGBE_CFG_GENEVE, port); + break; + default: + return; + } +} + +/** + * txgbe_del_udp_tunnel_port - Get notifications about removing UDP tunnel ports + * @dev: The port's netdev + * @ti: Tunnel endpoint information + **/ +static void txgbe_del_udp_tunnel_port(struct net_device *dev, + struct udp_tunnel_info *ti) +{ + struct txgbe_adapter *adapter = netdev_priv(dev); + + if (ti->type != UDP_TUNNEL_TYPE_VXLAN && + ti->type != UDP_TUNNEL_TYPE_GENEVE) + return; + + if (ti->sa_family != AF_INET) + return; + + switch (ti->type) { + case UDP_TUNNEL_TYPE_VXLAN: + if (!(adapter->flags & TXGBE_FLAG_VXLAN_OFFLOAD_CAPABLE)) + return; + + if (adapter->vxlan_port != 
ti->port) { + netdev_info(dev, "VXLAN port %d not found\n", + ntohs(ti->port)); + return; + } + + txgbe_clear_vxlan_port(adapter); + adapter->flags2 |= TXGBE_FLAG2_VXLAN_REREG_NEEDED; + break; + case UDP_TUNNEL_TYPE_GENEVE: + if (adapter->geneve_port != ti->port) { + netdev_info(dev, "GENEVE port %d not found\n", + ntohs(ti->port)); + return; + } + + adapter->geneve_port = 0; + wr32(&adapter->hw, TXGBE_CFG_GENEVE, 0); + break; + default: + return; + } +} + +static int txgbe_ndo_fdb_add(struct ndmsg *ndm, struct nlattr *tb[], + struct net_device *dev, + const unsigned char *addr, + u16 vid, + u16 flags) +{ + /* guarantee we can provide a unique filter for the unicast address */ + if (is_unicast_ether_addr(addr) || is_link_local_ether_addr(addr)) { + if (TXGBE_MAX_PF_MACVLANS <= netdev_uc_count(dev)) + return -ENOMEM; + } + + return ndo_dflt_fdb_add(ndm, tb, dev, addr, vid, flags); +} + +static int txgbe_ndo_bridge_setlink(struct net_device *dev, + struct nlmsghdr *nlh, + __always_unused u16 flags) +{ + struct txgbe_adapter *adapter = netdev_priv(dev); + struct nlattr *attr, *br_spec; + int rem; + + if (!(adapter->flags & TXGBE_FLAG_SRIOV_ENABLED)) + return -EOPNOTSUPP; + + br_spec = nlmsg_find_attr(nlh, sizeof(struct ifinfomsg), IFLA_AF_SPEC); + if (!br_spec) + return -EINVAL; + + nla_for_each_nested(attr, br_spec, rem) { + __u16 mode; + + if (nla_type(attr) != IFLA_BRIDGE_MODE) + continue; + + mode = nla_get_u16(attr); + if (mode == BRIDGE_MODE_VEPA) { + adapter->flags |= TXGBE_FLAG_SRIOV_VEPA_BRIDGE_MODE; + } else if (mode == BRIDGE_MODE_VEB) { + adapter->flags &= ~TXGBE_FLAG_SRIOV_VEPA_BRIDGE_MODE; + } else { + return -EINVAL; + } + + adapter->bridge_mode = mode; + + /* re-configure settings related to bridge mode */ + txgbe_configure_bridge_mode(adapter); + + e_info(drv, "enabling bridge mode: %s\n", + mode == BRIDGE_MODE_VEPA ? 
"VEPA" : "VEB"); + } + + return 0; +} + +static int txgbe_ndo_bridge_getlink(struct sk_buff *skb, u32 pid, u32 seq, + struct net_device *dev, + u32 __maybe_unused filter_mask, + int nlflags) +{ + struct txgbe_adapter *adapter = netdev_priv(dev); + u16 mode; + + if (!(adapter->flags & TXGBE_FLAG_SRIOV_ENABLED)) + return 0; + + mode = adapter->bridge_mode; + return ndo_dflt_bridge_getlink(skb, pid, seq, dev, mode, 0, 0, nlflags, + filter_mask, NULL); +} + +#define TXGBE_MAX_TUNNEL_HDR_LEN 80 +static netdev_features_t +txgbe_features_check(struct sk_buff *skb, struct net_device *dev, + netdev_features_t features) +{ + u32 vlan_num = 0; + u16 vlan_depth = skb->mac_len; + __be16 type = skb->protocol; + struct vlan_hdr *vh; + + if (skb_vlan_tag_present(skb)) { + vlan_num++; + } + + if (vlan_depth) { + vlan_depth -= VLAN_HLEN; + } else { + vlan_depth = ETH_HLEN; + } + + while (type == htons(ETH_P_8021Q) || type == htons(ETH_P_8021AD)) { + vlan_num++; + vh = (struct vlan_hdr *)(skb->data + vlan_depth); + type = vh->h_vlan_encapsulated_proto; + vlan_depth += VLAN_HLEN; + + } + + if (vlan_num > 2) + features &= ~(NETIF_F_HW_VLAN_CTAG_TX | + NETIF_F_HW_VLAN_STAG_TX); + + if (skb->encapsulation) { + if (unlikely(skb_inner_mac_header(skb) - + skb_transport_header(skb) > + TXGBE_MAX_TUNNEL_HDR_LEN)) + return features & ~NETIF_F_CSUM_MASK; + } + return features; +} + +static const struct net_device_ops txgbe_netdev_ops = { + .ndo_open = txgbe_open, + .ndo_stop = txgbe_close, + .ndo_start_xmit = txgbe_xmit_frame, + .ndo_set_rx_mode = txgbe_set_rx_mode, + .ndo_validate_addr = eth_validate_addr, + .ndo_set_mac_address = txgbe_set_mac, + .ndo_change_mtu = txgbe_change_mtu, + .ndo_tx_timeout = txgbe_tx_timeout, + .ndo_vlan_rx_add_vid = txgbe_vlan_rx_add_vid, + .ndo_vlan_rx_kill_vid = txgbe_vlan_rx_kill_vid, + .ndo_do_ioctl = txgbe_ioctl, + .ndo_get_stats64 = txgbe_get_stats64, + .ndo_setup_tc = __txgbe_setup_tc, + .ndo_fdb_add = txgbe_ndo_fdb_add, + .ndo_bridge_setlink = 
txgbe_ndo_bridge_setlink, + .ndo_bridge_getlink = txgbe_ndo_bridge_getlink, + .ndo_udp_tunnel_add = txgbe_add_udp_tunnel_port, + .ndo_udp_tunnel_del = txgbe_del_udp_tunnel_port, + .ndo_features_check = txgbe_features_check, + .ndo_set_features = txgbe_set_features, + .ndo_fix_features = txgbe_fix_features, +}; + +void txgbe_assign_netdev_ops(struct net_device *dev) +{ + dev->netdev_ops = &txgbe_netdev_ops; + txgbe_set_ethtool_ops(dev); + dev->watchdog_timeo = 5 * HZ; +} + +/** + * txgbe_wol_supported - Check whether device supports WoL + * @adapter: the adapter private structure + * @device_id: the device ID + * @subdev_id: the subsystem device ID + * + * This function is used by probe and ethtool to determine + * which devices have WoL support + * + **/ +int txgbe_wol_supported(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + u16 wol_cap = adapter->eeprom_cap & TXGBE_DEVICE_CAPS_WOL_MASK; + + /* check eeprom to see if WOL is enabled */ + if ((wol_cap == TXGBE_DEVICE_CAPS_WOL_PORT0_1) || + ((wol_cap == TXGBE_DEVICE_CAPS_WOL_PORT0) && + (hw->bus.func == 0))) + return true; + else + return false; +} + +/** + * txgbe_probe - Device Initialization Routine + * @pdev: PCI device information struct + * @ent: entry in txgbe_pci_tbl + * + * Returns 0 on success, negative on failure + * + * txgbe_probe initializes an adapter identified by a pci_dev structure. + * The OS initialization, configuring of the adapter private structure, + * and a hardware reset occur. 
+ **/ +static int txgbe_probe(struct pci_dev *pdev, + const struct pci_device_id __always_unused *ent) +{ + struct net_device *netdev; + struct txgbe_adapter *adapter = NULL; + struct txgbe_hw *hw = NULL; + static int cards_found; + int err, pci_using_dac, expected_gts; + u16 offset = 0; + u16 eeprom_verh = 0, eeprom_verl = 0; + u16 eeprom_cfg_blkh = 0, eeprom_cfg_blkl = 0; + u32 etrack_id = 0; + u16 build = 0, major = 0, patch = 0; + char *info_string, *i_s_var; + u8 part_str[TXGBE_PBANUM_LENGTH]; + unsigned int indices = MAX_TX_QUEUES; + + bool disable_dev = false; +/* #ifndef NETIF_F_GSO_PARTIA */ + netdev_features_t hw_features; + + err = pci_enable_device_mem(pdev); + if (err) + return err; + + if (!dma_set_mask(pci_dev_to_dev(pdev), DMA_BIT_MASK(64)) && + !dma_set_coherent_mask(pci_dev_to_dev(pdev), DMA_BIT_MASK(64))) { + pci_using_dac = 1; + } else { + err = dma_set_mask(pci_dev_to_dev(pdev), DMA_BIT_MASK(32)); + if (err) { + err = dma_set_coherent_mask(pci_dev_to_dev(pdev), + DMA_BIT_MASK(32)); + if (err) { + dev_err(pci_dev_to_dev(pdev), "No usable DMA " + "configuration, aborting\n"); + goto err_dma; + } + } + pci_using_dac = 0; + } + + err = pci_request_selected_regions(pdev, pci_select_bars(pdev, + IORESOURCE_MEM), txgbe_driver_name); + if (err) { + dev_err(pci_dev_to_dev(pdev), + "pci_request_selected_regions failed 0x%x\n", err); + goto err_pci_reg; + } + + hw = vmalloc(sizeof(struct txgbe_hw)); + if (!hw) { + pr_info("Unable to allocate memory for early mac check\n"); + } else { + hw->vendor_id = pdev->vendor; + hw->device_id = pdev->device; + vfree(hw); + } + + pci_enable_pcie_error_reporting(pdev); + pci_set_master(pdev); + /* errata 16 */ + pcie_capability_clear_and_set_word(pdev, PCI_EXP_DEVCTL, + PCI_EXP_DEVCTL_READRQ, + 0x1000); + + netdev = alloc_etherdev_mq(sizeof(struct txgbe_adapter), indices); + if (!netdev) { + err = -ENOMEM; + goto err_alloc_etherdev; + } + + SET_NETDEV_DEV(netdev, pci_dev_to_dev(pdev)); + + adapter = 
netdev_priv(netdev); + adapter->netdev = netdev; + adapter->pdev = pdev; + hw = &adapter->hw; + hw->back = adapter; + adapter->msg_enable = (1 << DEFAULT_DEBUG_LEVEL_SHIFT) - 1; + + hw->hw_addr = ioremap(pci_resource_start(pdev, 0), + pci_resource_len(pdev, 0)); + adapter->io_addr = hw->hw_addr; + if (!hw->hw_addr) { + err = -EIO; + goto err_ioremap; + } + + txgbe_assign_netdev_ops(netdev); + strncpy(netdev->name, pci_name(pdev), sizeof(netdev->name) - 1); + adapter->bd_number = cards_found; + + /* setup the private structure */ + err = txgbe_sw_init(adapter); + if (err) + goto err_sw_init; + + /* + * check_options must be called before setup_link to set up + * hw->fc completely + */ + txgbe_check_options(adapter); + txgbe_bp_mode_setting(adapter); + TCALL(hw, mac.ops.set_lan_id); + + /* check if flash load is done after hw power up */ + err = txgbe_check_flash_load(hw, TXGBE_SPI_ILDR_STATUS_PERST); + if (err) + goto err_sw_init; + err = txgbe_check_flash_load(hw, TXGBE_SPI_ILDR_STATUS_PWRRST); + if (err) + goto err_sw_init; + + /* reset_hw fills in the perm_addr as well */ + hw->phy.reset_if_overtemp = true; + err = TCALL(hw, mac.ops.reset_hw); + hw->phy.reset_if_overtemp = false; + if (err == TXGBE_ERR_SFP_NOT_PRESENT) { + err = 0; + } else if (err == TXGBE_ERR_SFP_NOT_SUPPORTED) { + e_dev_err("failed to load because an unsupported SFP+ " + "module type was detected.\n"); + e_dev_err("Reload the driver after installing a supported " + "module.\n"); + goto err_sw_init; + } else if (err) { + e_dev_err("HW Init failed: %d\n", err); + goto err_sw_init; + } + + netdev->features |= NETIF_F_SG | + NETIF_F_IP_CSUM; + +#ifdef NETIF_F_IPV6_CSUM + netdev->features |= NETIF_F_IPV6_CSUM; +#endif + + netdev->features |= NETIF_F_HW_VLAN_CTAG_TX | + NETIF_F_HW_VLAN_CTAG_RX; + + netdev->features |= txgbe_tso_features(); + + if (adapter->flags2 & TXGBE_FLAG2_RSS_ENABLED) + netdev->features |= NETIF_F_RXHASH; + + netdev->features |= NETIF_F_RXCSUM; + + /* copy netdev features into 
list of user selectable features */ + hw_features = netdev->hw_features; + hw_features |= netdev->features; + + /* give us the option of enabling RSC/LRO later */ + if (adapter->flags2 & TXGBE_FLAG2_RSC_CAPABLE) + hw_features |= NETIF_F_LRO; + + /* set this bit last since it cannot be part of hw_features */ + netdev->features |= NETIF_F_HW_VLAN_CTAG_FILTER; + + netdev->features |= NETIF_F_NTUPLE; + + adapter->flags |= TXGBE_FLAG_FDIR_PERFECT_CAPABLE; + hw_features |= NETIF_F_NTUPLE; + netdev->hw_features = hw_features; + + netdev->vlan_features |= NETIF_F_SG | + NETIF_F_IP_CSUM | + NETIF_F_IPV6_CSUM | + NETIF_F_TSO | + NETIF_F_TSO6; + + netdev->hw_enc_features |= NETIF_F_SG | NETIF_F_IP_CSUM | + TXGBE_GSO_PARTIAL_FEATURES | NETIF_F_TSO; + if (netdev->features & NETIF_F_LRO) { + if ((adapter->flags2 & TXGBE_FLAG2_RSC_CAPABLE) && + ((adapter->rx_itr_setting == 1) || + (adapter->rx_itr_setting > TXGBE_MIN_RSC_ITR))) { + adapter->flags2 |= TXGBE_FLAG2_RSC_ENABLED; + } else if (adapter->flags2 & TXGBE_FLAG2_RSC_CAPABLE) { + e_dev_info("InterruptThrottleRate set too high, " + "disabling RSC\n"); + } + } + + netdev->priv_flags |= IFF_UNICAST_FLT; + netdev->priv_flags |= IFF_SUPP_NOFCS; + + netdev->min_mtu = ETH_MIN_MTU; + netdev->max_mtu = TXGBE_MAX_JUMBO_FRAME_SIZE - (ETH_HLEN + ETH_FCS_LEN); + + if (pci_using_dac) { + netdev->features |= NETIF_F_HIGHDMA; + netdev->vlan_features |= NETIF_F_HIGHDMA; + } + + /* make sure the EEPROM is good */ + if (TCALL(hw, eeprom.ops.validate_checksum, NULL)) { + e_dev_err("The EEPROM Checksum Is Not Valid\n"); + err = -EIO; + goto err_sw_init; + } + + memcpy(netdev->dev_addr, hw->mac.perm_addr, netdev->addr_len); + + if (!is_valid_ether_addr(netdev->dev_addr)) { + e_dev_err("invalid MAC address\n"); + err = -EIO; + goto err_sw_init; + } + + txgbe_mac_set_default_filter(adapter, hw->mac.perm_addr); + + timer_setup(&adapter->service_timer, txgbe_service_timer, 0); + + if (TXGBE_REMOVED(hw->hw_addr)) { + err = -EIO; + goto err_sw_init; + } 
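The probe path above bails out with -EIO when the MAC address read from the adapter's EEPROM fails `is_valid_ether_addr()`. A standalone sketch of what that kernel helper checks (the `sketch_` name and the test addresses are illustrative, not part of the driver):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Sketch of the check performed by the kernel's is_valid_ether_addr():
 * a station address is usable only if it is neither multicast nor all-zero. */
static bool sketch_is_valid_ether_addr(const uint8_t addr[6])
{
	static const uint8_t zero[6];

	/* multicast addresses have the I/G bit (LSB of the first octet) set */
	if (addr[0] & 0x01)
		return false;

	/* the all-zero address is reserved and never a valid station address */
	if (!memcmp(addr, zero, sizeof(zero)))
		return false;

	return true;
}
```

With such a check in place, a blank or mis-programmed EEPROM surfaces as a probe failure instead of a silently broken interface.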
+ INIT_WORK(&adapter->service_task, txgbe_service_task); + set_bit(__TXGBE_SERVICE_INITED, &adapter->state); + clear_bit(__TXGBE_SERVICE_SCHED, &adapter->state); + + err = txgbe_init_interrupt_scheme(adapter); + if (err) + goto err_sw_init; + + /* WOL not supported for all devices */ + adapter->wol = 0; + TCALL(hw, eeprom.ops.read, + hw->eeprom.sw_region_offset + TXGBE_DEVICE_CAPS, + &adapter->eeprom_cap); + + if ((hw->subsystem_device_id & TXGBE_WOL_MASK) == TXGBE_WOL_SUP && + hw->bus.lan_id == 0) { + adapter->wol = TXGBE_PSR_WKUP_CTL_MAG; + wr32(hw, TXGBE_PSR_WKUP_CTL, adapter->wol); + } + hw->wol_enabled = !!(adapter->wol); + + device_set_wakeup_enable(pci_dev_to_dev(adapter->pdev), adapter->wol); + + /* + * Save off EEPROM version number and Option Rom version which + * together make a unique identify for the eeprom + */ + TCALL(hw, eeprom.ops.read, + hw->eeprom.sw_region_offset + TXGBE_EEPROM_VERSION_H, + &eeprom_verh); + TCALL(hw, eeprom.ops.read, + hw->eeprom.sw_region_offset + TXGBE_EEPROM_VERSION_L, + &eeprom_verl); + etrack_id = (eeprom_verh << 16) | eeprom_verl; + + TCALL(hw, eeprom.ops.read, + hw->eeprom.sw_region_offset + TXGBE_ISCSI_BOOT_CONFIG, &offset); + + /* Make sure offset to SCSI block is valid */ + if (!(offset == 0x0) && !(offset == 0xffff)) { + TCALL(hw, eeprom.ops.read, offset + 0x84, &eeprom_cfg_blkh); + TCALL(hw, eeprom.ops.read, offset + 0x83, &eeprom_cfg_blkl); + + /* Only display Option Rom if exist */ + if (eeprom_cfg_blkl && eeprom_cfg_blkh) { + major = eeprom_cfg_blkl >> 8; + build = (eeprom_cfg_blkl << 8) | (eeprom_cfg_blkh >> 8); + patch = eeprom_cfg_blkh & 0x00ff; + + snprintf(adapter->eeprom_id, sizeof(adapter->eeprom_id), + "0x%08x, %d.%d.%d", etrack_id, major, build, + patch); + } else { + snprintf(adapter->eeprom_id, sizeof(adapter->eeprom_id), + "0x%08x", etrack_id); + } + } else { + snprintf(adapter->eeprom_id, sizeof(adapter->eeprom_id), + "0x%08x", etrack_id); + } + + /* reset the hardware with the new settings */ + err = 
TCALL(hw, mac.ops.start_hw); + if (err == TXGBE_ERR_EEPROM_VERSION) { + /* We are running on a pre-production device, log a warning */ + e_dev_warn("This device is a pre-production adapter/LOM. " + "Please be aware there may be issues associated " + "with your hardware. If you are experiencing " + "problems please contact your hardware " + "representative who provided you with this " + "hardware.\n"); + } else if (err) { + e_dev_err("HW init failed\n"); + goto err_register; + } + + /* pick up the PCI bus settings for reporting later */ + TCALL(hw, mac.ops.get_bus_info); + + strcpy(netdev->name, "eth%d"); + err = register_netdev(netdev); + if (err) + goto err_register; + + pci_set_drvdata(pdev, adapter); + adapter->netdev_registered = true; + + if (!((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP)) + /* power down the optics for SFP+ fiber */ + TCALL(hw, mac.ops.disable_tx_laser); + + /* carrier off reporting is important to ethtool even BEFORE open */ + netif_carrier_off(netdev); + /* keep stopping all the transmit queues for older kernels */ + netif_tx_stop_all_queues(netdev); + + /* print all messages at the end so that we use our eth%d name */ + + /* calculate the expected PCIe bandwidth required for optimal + * performance. Note that some older parts will never have enough + * bandwidth due to being older generation PCIe parts. We clamp these + * parts to ensure that no warning is displayed, as this could confuse + * users otherwise. 
*/ + + expected_gts = txgbe_enumerate_functions(adapter) * 10; + + /* don't check link if we failed to enumerate functions */ + if (expected_gts > 0) + txgbe_check_minimum_link(adapter, expected_gts); + + if ((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP) + e_info(probe, "NCSI : support"); + else + e_info(probe, "NCSI : unsupported"); + + /* First try to read PBA as a string */ + err = txgbe_read_pba_string(hw, part_str, TXGBE_PBANUM_LENGTH); + if (err) + + strncpy(part_str, "Unknown", TXGBE_PBANUM_LENGTH); + if (txgbe_is_sfp(hw) && hw->phy.sfp_type != txgbe_sfp_type_not_present) + e_info(probe, "PHY: %d, SFP+: %d, PBA No: %s\n", + hw->phy.type, hw->phy.sfp_type, part_str); + else + e_info(probe, "PHY: %d, PBA No: %s\n", + hw->phy.type, part_str); + + e_dev_info("%02x:%02x:%02x:%02x:%02x:%02x\n", + netdev->dev_addr[0], netdev->dev_addr[1], + netdev->dev_addr[2], netdev->dev_addr[3], + netdev->dev_addr[4], netdev->dev_addr[5]); + +#define INFO_STRING_LEN 255 + info_string = kzalloc(INFO_STRING_LEN, GFP_KERNEL); + if (!info_string) { + e_err(probe, "allocation for info string failed\n"); + goto no_info_string; + } + i_s_var = info_string; + i_s_var += sprintf(info_string, "Enabled Features: "); + i_s_var += sprintf(i_s_var, "RxQ: %d TxQ: %d ", + adapter->num_rx_queues, adapter->num_tx_queues); + if (adapter->flags & TXGBE_FLAG_FDIR_HASH_CAPABLE) + i_s_var += sprintf(i_s_var, "FdirHash "); + if (adapter->flags & TXGBE_FLAG_DCB_ENABLED) + i_s_var += sprintf(i_s_var, "DCB "); + if (adapter->flags & TXGBE_FLAG_TPH_ENABLED) + i_s_var += sprintf(i_s_var, "TPH "); + if (adapter->flags2 & TXGBE_FLAG2_RSC_ENABLED) + i_s_var += sprintf(i_s_var, "RSC "); + if (adapter->flags & TXGBE_FLAG_VXLAN_OFFLOAD_ENABLE) + i_s_var += sprintf(i_s_var, "vxlan_rx "); + + BUG_ON(i_s_var > (info_string + INFO_STRING_LEN)); + /* end features printing */ + e_info(probe, "%s\n", info_string); + kfree(info_string); +no_info_string: + /* firmware requires blank driver version */ + 
TCALL(hw, mac.ops.set_fw_drv_ver, 0xFF, 0xFF, 0xFF, 0xFF); + + /* add san mac addr to netdev */ + txgbe_add_sanmac_netdev(netdev); + + e_info(probe, "WangXun(R) 10 Gigabit Network Connection\n"); + cards_found++; + + /* setup link for SFP devices with MNG FW, else wait for TXGBE_UP */ + if (txgbe_mng_present(hw) && txgbe_is_sfp(hw)) + TCALL(hw, mac.ops.setup_link, + TXGBE_LINK_SPEED_10GB_FULL | TXGBE_LINK_SPEED_1GB_FULL, + true); + + TCALL(hw, mac.ops.setup_eee, + (adapter->flags2 & TXGBE_FLAG2_EEE_CAPABLE) && + (adapter->flags2 & TXGBE_FLAG2_EEE_ENABLED)); + + return 0; + +err_register: + txgbe_clear_interrupt_scheme(adapter); + txgbe_release_hw_control(adapter); +err_sw_init: + adapter->flags2 &= ~TXGBE_FLAG2_SEARCH_FOR_SFP; + kfree(adapter->mac_table); + iounmap(adapter->io_addr); +err_ioremap: + disable_dev = !test_and_set_bit(__TXGBE_DISABLED, &adapter->state); + free_netdev(netdev); +err_alloc_etherdev: + pci_release_selected_regions(pdev, + pci_select_bars(pdev, IORESOURCE_MEM)); +err_pci_reg: +err_dma: + if (!adapter || disable_dev) + pci_disable_device(pdev); + return err; +} + +/** + * txgbe_remove - Device Removal Routine + * @pdev: PCI device information struct + * + * txgbe_remove is called by the PCI subsystem to alert the driver + * that it should release a PCI device. The could be caused by a + * Hot-Plug event, or because the driver is going to be removed from + * memory. 
+ **/ +static void txgbe_remove(struct pci_dev *pdev) +{ + struct txgbe_adapter *adapter = pci_get_drvdata(pdev); + struct net_device *netdev; + bool disable_dev; + + /* if !adapter then we already cleaned up in probe */ + if (!adapter) + return; + + netdev = adapter->netdev; + set_bit(__TXGBE_REMOVING, &adapter->state); + cancel_work_sync(&adapter->service_task); + + /* remove the added san mac */ + txgbe_del_sanmac_netdev(netdev); + + if (adapter->netdev_registered) { + unregister_netdev(netdev); + adapter->netdev_registered = false; + } + + txgbe_clear_interrupt_scheme(adapter); + txgbe_release_hw_control(adapter); + + iounmap(adapter->io_addr); + pci_release_selected_regions(pdev, + pci_select_bars(pdev, IORESOURCE_MEM)); + + kfree(adapter->mac_table); + disable_dev = !test_and_set_bit(__TXGBE_DISABLED, &adapter->state); + free_netdev(netdev); + + pci_disable_pcie_error_reporting(pdev); + + if (disable_dev) + pci_disable_device(pdev); +} + +static bool txgbe_check_cfg_remove(struct txgbe_hw *hw, struct pci_dev *pdev) +{ + u16 value; + + pci_read_config_word(pdev, PCI_VENDOR_ID, &value); + if (value == TXGBE_FAILED_READ_CFG_WORD) { + txgbe_remove_adapter(hw); + return true; + } + return false; +} + +u16 txgbe_read_pci_cfg_word(struct txgbe_hw *hw, u32 reg) +{ + struct txgbe_adapter *adapter = hw->back; + u16 value; + + if (TXGBE_REMOVED(hw->hw_addr)) + return TXGBE_FAILED_READ_CFG_WORD; + pci_read_config_word(adapter->pdev, reg, &value); + if (value == TXGBE_FAILED_READ_CFG_WORD && + txgbe_check_cfg_remove(hw, adapter->pdev)) + return TXGBE_FAILED_READ_CFG_WORD; + return value; +} + +void txgbe_write_pci_cfg_word(struct txgbe_hw *hw, u32 reg, u16 value) +{ + struct txgbe_adapter *adapter = hw->back; + + if (TXGBE_REMOVED(hw->hw_addr)) + return; + pci_write_config_word(adapter->pdev, reg, value); +} + +static struct pci_driver txgbe_driver = { + .name = txgbe_driver_name, + .id_table = txgbe_pci_tbl, + .probe = txgbe_probe, + .remove = txgbe_remove, +#ifdef 
CONFIG_PM + .suspend = txgbe_suspend, + .resume = txgbe_resume, +#endif + .shutdown = txgbe_shutdown, +}; + +/** + * txgbe_init_module - Driver Registration Routine + * + * txgbe_init_module is the first routine called when the driver is + * loaded. All it does is register with the PCI subsystem. + **/ +static int __init txgbe_init_module(void) +{ + int ret; + pr_info("%s - version %s\n", txgbe_driver_string, txgbe_driver_version); + pr_info("%s\n", txgbe_copyright); + + txgbe_wq = create_singlethread_workqueue(txgbe_driver_name); + if (!txgbe_wq) { + pr_err("%s: Failed to create workqueue\n", txgbe_driver_name); + return -ENOMEM; + } + + ret = pci_register_driver(&txgbe_driver); + return ret; +} + +module_init(txgbe_init_module); + +/** + * txgbe_exit_module - Driver Exit Cleanup Routine + * + * txgbe_exit_module is called just before the driver is removed + * from memory. + **/ +static void __exit txgbe_exit_module(void) +{ + pci_unregister_driver(&txgbe_driver); + if (txgbe_wq) { + destroy_workqueue(txgbe_wq); + } +} + +module_exit(txgbe_exit_module); + +/* txgbe_main.c */ diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe_mbx.c b/drivers/net/ethernet/netswift/txgbe/txgbe_mbx.c new file mode 100644 index 000000000000..08c67fdccc16 --- /dev/null +++ b/drivers/net/ethernet/netswift/txgbe/txgbe_mbx.c @@ -0,0 +1,399 @@ +/* + * WangXun 10 Gigabit PCI Express Linux driver + * Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. 
+ * + * The full GNU General Public License is included in this distribution in + * the file called "COPYING". + * + * based on ixgbe_mbx.c, Copyright(c) 1999 - 2017 Intel Corporation. + * Contact Information: + * Linux NICS linux.nics@intel.com + * e1000-devel Mailing List e1000-devel@lists.sourceforge.net + * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497 + */ + + +#include "txgbe.h" +#include "txgbe_mbx.h" + +/** + * txgbe_read_mbx - Reads a message from the mailbox + * @hw: pointer to the HW structure + * @msg: The message buffer + * @size: Length of buffer + * @mbx_id: id of mailbox to read + * + * returns SUCCESS if it successfully read message from buffer + **/ +int txgbe_read_mbx(struct txgbe_hw *hw, u32 *msg, u16 size, u16 mbx_id) +{ + struct txgbe_mbx_info *mbx = &hw->mbx; + int err = TXGBE_ERR_MBX; + + /* limit read to size of mailbox */ + if (size > mbx->size) + size = mbx->size; + + err = TCALL(hw, mbx.ops.read, msg, size, mbx_id); + + return err; +} + +/** + * txgbe_write_mbx - Write a message to the mailbox + * @hw: pointer to the HW structure + * @msg: The message buffer + * @size: Length of buffer + * @mbx_id: id of mailbox to write + * + * returns SUCCESS if it successfully copied message into the buffer + **/ +int txgbe_write_mbx(struct txgbe_hw *hw, u32 *msg, u16 size, u16 mbx_id) +{ + struct txgbe_mbx_info *mbx = &hw->mbx; + int err = 0; + + if (size > mbx->size) { + err = TXGBE_ERR_MBX; + ERROR_REPORT2(TXGBE_ERROR_ARGUMENT, + "Invalid mailbox message size %d", size); + } else { + err = TCALL(hw, mbx.ops.write, msg, size, mbx_id); + } + + return err; +} + +/** + * txgbe_check_for_msg - checks to see if someone sent us mail + * @hw: pointer to the HW structure + * @mbx_id: id of mailbox to check + * + * returns SUCCESS if the Status bit was found or else ERR_MBX + **/ +int txgbe_check_for_msg(struct txgbe_hw *hw, u16 mbx_id) +{ + int err = TXGBE_ERR_MBX; + + err = TCALL(hw, mbx.ops.check_for_msg, mbx_id); + + return err; +} + 
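Note the asymmetry between the two entry points above: txgbe_read_mbx() silently clamps an oversized read to the mailbox size, while txgbe_write_mbx() rejects an oversized message outright. A minimal sketch of that policy (the `sketch_` names, the 16-dword size, and the error value are illustrative assumptions, not the driver's):

```c
#include <assert.h>
#include <stddef.h>

#define SKETCH_MBX_SIZE	16	/* illustrative mailbox size, in dwords */
#define SKETCH_ERR_MBX	(-100)	/* illustrative error code */

/* Reads clamp: pulling fewer dwords than requested is harmless,
 * since the mailbox simply cannot hold more than its size. */
static size_t sketch_mbx_read_len(size_t requested)
{
	return requested > SKETCH_MBX_SIZE ? SKETCH_MBX_SIZE : requested;
}

/* Writes reject: truncating an outgoing message would corrupt it,
 * so an oversized write is treated as a caller error instead. */
static int sketch_mbx_write_check(size_t requested)
{
	return requested > SKETCH_MBX_SIZE ? SKETCH_ERR_MBX : 0;
}
```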
+/** + * txgbe_check_for_ack - checks to see if someone sent us ACK + * @hw: pointer to the HW structure + * @mbx_id: id of mailbox to check + * + * returns SUCCESS if the Status bit was found or else ERR_MBX + **/ +int txgbe_check_for_ack(struct txgbe_hw *hw, u16 mbx_id) +{ + int err = TXGBE_ERR_MBX; + + err = TCALL(hw, mbx.ops.check_for_ack, mbx_id); + + return err; +} + +/** + * txgbe_check_for_rst - checks to see if other side has reset + * @hw: pointer to the HW structure + * @mbx_id: id of mailbox to check + * + * returns SUCCESS if the Status bit was found or else ERR_MBX + **/ +int txgbe_check_for_rst(struct txgbe_hw *hw, u16 mbx_id) +{ + struct txgbe_mbx_info *mbx = &hw->mbx; + int err = TXGBE_ERR_MBX; + + if (mbx->ops.check_for_rst) + err = mbx->ops.check_for_rst(hw, mbx_id); + + return err; +} + +/** + * txgbe_poll_for_msg - Wait for message notification + * @hw: pointer to the HW structure + * @mbx_id: id of mailbox to write + * + * returns SUCCESS if it successfully received a message notification + **/ +int txgbe_poll_for_msg(struct txgbe_hw *hw, u16 mbx_id) +{ + struct txgbe_mbx_info *mbx = &hw->mbx; + int countdown = mbx->timeout; + + if (!countdown || !mbx->ops.check_for_msg) + goto out; + + while (countdown && TCALL(hw, mbx.ops.check_for_msg, mbx_id)) { + countdown--; + if (!countdown) + break; + udelay(mbx->udelay); + } + + if (countdown == 0) + ERROR_REPORT2(TXGBE_ERROR_POLLING, + "Polling for VF%d mailbox message timed out", mbx_id); + +out: + return countdown ? 
0 : TXGBE_ERR_MBX; +} + +/** + * txgbe_poll_for_ack - Wait for message acknowledgement + * @hw: pointer to the HW structure + * @mbx_id: id of mailbox to write + * + * returns SUCCESS if it successfully received a message acknowledgement + **/ +int txgbe_poll_for_ack(struct txgbe_hw *hw, u16 mbx_id) +{ + struct txgbe_mbx_info *mbx = &hw->mbx; + int countdown = mbx->timeout; + + if (!countdown || !mbx->ops.check_for_ack) + goto out; + + while (countdown && TCALL(hw, mbx.ops.check_for_ack, mbx_id)) { + countdown--; + if (!countdown) + break; + udelay(mbx->udelay); + } + + if (countdown == 0) + ERROR_REPORT2(TXGBE_ERROR_POLLING, + "Polling for VF%d mailbox ack timed out", mbx_id); + +out: + return countdown ? 0 : TXGBE_ERR_MBX; +} + +int txgbe_check_for_bit_pf(struct txgbe_hw *hw, u32 mask, int index) +{ + u32 mbvficr = rd32(hw, TXGBE_MBVFICR(index)); + int err = TXGBE_ERR_MBX; + + if (mbvficr & mask) { + err = 0; + wr32(hw, TXGBE_MBVFICR(index), mask); + } + + return err; +} + +/** + * txgbe_check_for_msg_pf - checks to see if the VF has sent mail + * @hw: pointer to the HW structure + * @vf: the VF index + * + * returns SUCCESS if the VF has set the Status bit or else ERR_MBX + **/ +int txgbe_check_for_msg_pf(struct txgbe_hw *hw, u16 vf) +{ + int err = TXGBE_ERR_MBX; + int index = TXGBE_MBVFICR_INDEX(vf); + u32 vf_bit = vf % 16; + + if (!txgbe_check_for_bit_pf(hw, TXGBE_MBVFICR_VFREQ_VF1 << vf_bit, + index)) { + err = 0; + hw->mbx.stats.reqs++; + } + + return err; +} + +/** + * txgbe_check_for_ack_pf - checks to see if the VF has ACKed + * @hw: pointer to the HW structure + * @vf: the VF index + * + * returns SUCCESS if the VF has set the Status bit or else ERR_MBX + **/ +int txgbe_check_for_ack_pf(struct txgbe_hw *hw, u16 vf) +{ + int err = TXGBE_ERR_MBX; + int index = TXGBE_MBVFICR_INDEX(vf); + u32 vf_bit = vf % 16; + + if (!txgbe_check_for_bit_pf(hw, TXGBE_MBVFICR_VFACK_VF1 << vf_bit, + index)) { + err = 0; + hw->mbx.stats.acks++; + } + + return err; +} + +/** + * 
txgbe_check_for_rst_pf - checks to see if the VF has reset + * @hw: pointer to the HW structure + * @vf: the VF index + * + * returns SUCCESS if the VF has set the Status bit or else ERR_MBX + **/ +int txgbe_check_for_rst_pf(struct txgbe_hw *hw, u16 vf) +{ + u32 reg_offset = (vf < 32) ? 0 : 1; + u32 vf_shift = vf % 32; + u32 vflre = 0; + int err = TXGBE_ERR_MBX; + + vflre = rd32(hw, TXGBE_VFLRE(reg_offset)); + + if (vflre & (1 << vf_shift)) { + err = 0; + wr32(hw, TXGBE_VFLREC(reg_offset), (1 << vf_shift)); + hw->mbx.stats.rsts++; + } + + return err; +} + +/** + * txgbe_obtain_mbx_lock_pf - obtain mailbox lock + * @hw: pointer to the HW structure + * @vf: the VF index + * + * return SUCCESS if we obtained the mailbox lock + **/ +int txgbe_obtain_mbx_lock_pf(struct txgbe_hw *hw, u16 vf) +{ + int err = TXGBE_ERR_MBX; + u32 mailbox; + + /* Take ownership of the buffer */ + wr32(hw, TXGBE_PXMAILBOX(vf), TXGBE_PXMAILBOX_PFU); + + /* reserve mailbox for vf use */ + mailbox = rd32(hw, TXGBE_PXMAILBOX(vf)); + if (mailbox & TXGBE_PXMAILBOX_PFU) + err = 0; + else + ERROR_REPORT2(TXGBE_ERROR_POLLING, + "Failed to obtain mailbox lock for PF%d", vf); + + + return err; +} + +/** + * txgbe_write_mbx_pf - Places a message in the mailbox + * @hw: pointer to the HW structure + * @msg: The message buffer + * @size: Length of buffer + * @vf: the VF index + * + * returns SUCCESS if it successfully copied message into the buffer + **/ +int txgbe_write_mbx_pf(struct txgbe_hw *hw, u32 *msg, u16 size, + u16 vf) +{ + int err; + u16 i; + + /* lock the mailbox to prevent pf/vf race condition */ + err = txgbe_obtain_mbx_lock_pf(hw, vf); + if (err) + goto out_no_write; + + /* flush msg and acks as we are overwriting the message buffer */ + txgbe_check_for_msg_pf(hw, vf); + txgbe_check_for_ack_pf(hw, vf); + + /* copy the caller specified message to the mailbox memory buffer */ + for (i = 0; i < size; i++) + wr32a(hw, TXGBE_PXMBMEM(vf), i, msg[i]); + + /* Interrupt VF to tell it a message has 
been sent and release buffer*/ + /* set mirrored mailbox flags */ + wr32a(hw, TXGBE_PXMBMEM(vf), TXGBE_VXMAILBOX_SIZE, TXGBE_PXMAILBOX_STS); + wr32(hw, TXGBE_PXMAILBOX(vf), TXGBE_PXMAILBOX_STS); + + /* update stats */ + hw->mbx.stats.msgs_tx++; + +out_no_write: + return err; + +} + +/** + * txgbe_read_mbx_pf - Read a message from the mailbox + * @hw: pointer to the HW structure + * @msg: The message buffer + * @size: Length of buffer + * @vf: the VF index + * + * This function copies a message from the mailbox buffer to the caller's + * memory buffer. The presumption is that the caller knows that there was + * a message due to a VF request so no polling for message is needed. + **/ +int txgbe_read_mbx_pf(struct txgbe_hw *hw, u32 *msg, u16 size, + u16 vf) +{ + int err; + u16 i; + + /* lock the mailbox to prevent pf/vf race condition */ + err = txgbe_obtain_mbx_lock_pf(hw, vf); + if (err) + goto out_no_read; + + /* copy the message to the mailbox memory buffer */ + for (i = 0; i < size; i++) + msg[i] = rd32a(hw, TXGBE_PXMBMEM(vf), i); + + /* Acknowledge the message and release buffer */ + /* set mirrored mailbox flags */ + wr32a(hw, TXGBE_PXMBMEM(vf), TXGBE_VXMAILBOX_SIZE, TXGBE_PXMAILBOX_ACK); + wr32(hw, TXGBE_PXMAILBOX(vf), TXGBE_PXMAILBOX_ACK); + + /* update stats */ + hw->mbx.stats.msgs_rx++; + +out_no_read: + return err; +} + +/** + * txgbe_init_mbx_params_pf - set initial values for pf mailbox + * @hw: pointer to the HW structure + * + * Initializes the hw->mbx struct to correct values for pf mailbox + */ +void txgbe_init_mbx_params_pf(struct txgbe_hw *hw) +{ + struct txgbe_mbx_info *mbx = &hw->mbx; + + mbx->timeout = 0; + mbx->udelay = 0; + + mbx->size = TXGBE_VXMAILBOX_SIZE; + + mbx->ops.read = txgbe_read_mbx_pf; + mbx->ops.write = txgbe_write_mbx_pf; + mbx->ops.check_for_msg = txgbe_check_for_msg_pf; + mbx->ops.check_for_ack = txgbe_check_for_ack_pf; + mbx->ops.check_for_rst = txgbe_check_for_rst_pf; + + mbx->stats.msgs_tx = 0; + mbx->stats.msgs_rx = 0; + 
mbx->stats.reqs = 0; + mbx->stats.acks = 0; + mbx->stats.rsts = 0; +} diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe_mbx.h b/drivers/net/ethernet/netswift/txgbe/txgbe_mbx.h new file mode 100644 index 000000000000..e412a5e546e1 --- /dev/null +++ b/drivers/net/ethernet/netswift/txgbe/txgbe_mbx.h @@ -0,0 +1,171 @@ +/* + * WangXun 10 Gigabit PCI Express Linux driver + * Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * The full GNU General Public License is included in this distribution in + * the file called "COPYING". + * + * based on ixgbe_mbx.h, Copyright(c) 1999 - 2017 Intel Corporation. + * Contact Information: + * Linux NICS linux.nics@intel.com + * e1000-devel Mailing List e1000-devel@lists.sourceforge.net + * Intel Corporation, 5200 N.E. 
Elam Young Parkway, Hillsboro, OR 97124-6497 + */ + +#ifndef _TXGBE_MBX_H_ +#define _TXGBE_MBX_H_ + +#define TXGBE_VXMAILBOX_SIZE (16 - 1) + +/** + * VF Registers + **/ +#define TXGBE_VXMAILBOX 0x00600 +#define TXGBE_VXMAILBOX_REQ ((0x1) << 0) /* Request for PF Ready bit */ +#define TXGBE_VXMAILBOX_ACK ((0x1) << 1) /* Ack PF message received */ +#define TXGBE_VXMAILBOX_VFU ((0x1) << 2) /* VF owns the mailbox buffer */ +#define TXGBE_VXMAILBOX_PFU ((0x1) << 3) /* PF owns the mailbox buffer */ +#define TXGBE_VXMAILBOX_PFSTS ((0x1) << 4) /* PF wrote a message in the MB */ +#define TXGBE_VXMAILBOX_PFACK ((0x1) << 5) /* PF ack the previous VF msg */ +#define TXGBE_VXMAILBOX_RSTI ((0x1) << 6) /* PF has reset indication */ +#define TXGBE_VXMAILBOX_RSTD ((0x1) << 7) /* PF has indicated reset done */ +#define TXGBE_VXMAILBOX_R2C_BITS (TXGBE_VXMAILBOX_RSTD | \ + TXGBE_VXMAILBOX_PFSTS | TXGBE_VXMAILBOX_PFACK) + +#define TXGBE_VXMBMEM 0x00C00 /* 16*4B */ + +/** + * PF Registers + **/ +#define TXGBE_PXMAILBOX(i) (0x00600 + (4 * (i))) /* i=[0,63] */ +#define TXGBE_PXMAILBOX_STS ((0x1) << 0) /* Initiate message send to VF */ +#define TXGBE_PXMAILBOX_ACK ((0x1) << 1) /* Ack message recv'd from VF */ +#define TXGBE_PXMAILBOX_VFU ((0x1) << 2) /* VF owns the mailbox buffer */ +#define TXGBE_PXMAILBOX_PFU ((0x1) << 3) /* PF owns the mailbox buffer */ +#define TXGBE_PXMAILBOX_RVFU ((0x1) << 4) /* Reset VFU - used when VF stuck*/ + +#define TXGBE_PXMBMEM(i) (0x5000 + (64 * (i))) /* i=[0,63] */ + +#define TXGBE_VFLRP(i) (0x00490 + (4 * (i))) /* i=[0,1] */ +#define TXGBE_VFLRE(i) (0x004A0 + (4 * (i))) /* i=[0,1] */ +#define TXGBE_VFLREC(i) (0x004A8 + (4 * (i))) /* i=[0,1] */ + +/* SR-IOV specific macros */ +#define TXGBE_MBVFICR(i) (0x00480 + (4 * (i))) /* i=[0,3] */ +#define TXGBE_MBVFICR_INDEX(vf) ((vf) >> 4) +#define TXGBE_MBVFICR_VFREQ_MASK (0x0000FFFF) /* bits for VF messages */ +#define TXGBE_MBVFICR_VFREQ_VF1 (0x00000001) /* bit for VF 1 message */ +#define TXGBE_MBVFICR_VFACK_MASK 
(0xFFFF0000) /* bits for VF acks */ +#define TXGBE_MBVFICR_VFACK_VF1 (0x00010000) /* bit for VF 1 ack */ + +/** + * Messages + **/ +/* If it's a TXGBE_VF_* msg then it originates in the VF and is sent to the + * PF. The reverse is true if it is TXGBE_PF_*. + * Message ACK's are the value or'd with 0xF0000000 + */ +#define TXGBE_VT_MSGTYPE_ACK 0x80000000 /* Messages below or'd with + * this are the ACK */ +#define TXGBE_VT_MSGTYPE_NACK 0x40000000 /* Messages below or'd with + * this are the NACK */ +#define TXGBE_VT_MSGTYPE_CTS 0x20000000 /* Indicates that VF is still + * clear to send requests */ +#define TXGBE_VT_MSGINFO_SHIFT 16 +/* bits 23:16 are used for extra info for certain messages */ +#define TXGBE_VT_MSGINFO_MASK (0xFF << TXGBE_VT_MSGINFO_SHIFT) + +/* definitions to support mailbox API version negotiation */ + +/* + * each element denotes a version of the API; existing numbers may not + * change; any additions must go at the end + */ +enum txgbe_pfvf_api_rev { + txgbe_mbox_api_null, + txgbe_mbox_api_10, /* API version 1.0, linux/freebsd VF driver */ + txgbe_mbox_api_11, /* API version 1.1, linux/freebsd VF driver */ + txgbe_mbox_api_12, /* API version 1.2, linux/freebsd VF driver */ + txgbe_mbox_api_13, /* API version 1.3, linux/freebsd VF driver */ + txgbe_mbox_api_20, /* API version 2.0, solaris Phase1 VF driver */ + txgbe_mbox_api_unknown, /* indicates that API version is not known */ +}; + +/* mailbox API, legacy requests */ +#define TXGBE_VF_RESET 0x01 /* VF requests reset */ +#define TXGBE_VF_SET_MAC_ADDR 0x02 /* VF requests PF to set MAC addr */ +#define TXGBE_VF_SET_MULTICAST 0x03 /* VF requests PF to set MC addr */ +#define TXGBE_VF_SET_VLAN 0x04 /* VF requests PF to set VLAN */ + +/* mailbox API, version 1.0 VF requests */ +#define TXGBE_VF_SET_LPE 0x05 /* VF requests PF to set VMOLR.LPE */ +#define TXGBE_VF_SET_MACVLAN 0x06 /* VF requests PF for unicast filter */ +#define TXGBE_VF_API_NEGOTIATE 0x08 /* negotiate API version */ + +/* mailbox 
API, version 1.1 VF requests */ +#define TXGBE_VF_GET_QUEUES 0x09 /* get queue configuration */ + +/* mailbox API, version 1.2 VF requests */ +#define TXGBE_VF_GET_RETA 0x0a /* VF request for RETA */ +#define TXGBE_VF_GET_RSS_KEY 0x0b /* get RSS key */ +#define TXGBE_VF_UPDATE_XCAST_MODE 0x0c +#define TXGBE_VF_BACKUP 0x8001 /* VF requests backup */ + +/* mode choices for IXGBE_VF_UPDATE_XCAST_MODE */ +enum txgbevf_xcast_modes { + TXGBEVF_XCAST_MODE_NONE = 0, + TXGBEVF_XCAST_MODE_MULTI, + TXGBEVF_XCAST_MODE_ALLMULTI, + TXGBEVF_XCAST_MODE_PROMISC, +}; + +/* GET_QUEUES return data indices within the mailbox */ +#define TXGBE_VF_TX_QUEUES 1 /* number of Tx queues supported */ +#define TXGBE_VF_RX_QUEUES 2 /* number of Rx queues supported */ +#define TXGBE_VF_TRANS_VLAN 3 /* Indication of port vlan */ +#define TXGBE_VF_DEF_QUEUE 4 /* Default queue offset */ + +/* length of permanent address message returned from PF */ +#define TXGBE_VF_PERMADDR_MSG_LEN 4 +/* word in permanent address message with the current multicast type */ +#define TXGBE_VF_MC_TYPE_WORD 3 + +#define TXGBE_PF_CONTROL_MSG 0x0100 /* PF control message */ + +/* mailbox API, version 2.0 VF requests */ +#define TXGBE_VF_API_NEGOTIATE 0x08 /* negotiate API version */ +#define TXGBE_VF_GET_QUEUES 0x09 /* get queue configuration */ +#define TXGBE_VF_ENABLE_MACADDR 0x0A /* enable MAC address */ +#define TXGBE_VF_DISABLE_MACADDR 0x0B /* disable MAC address */ +#define TXGBE_VF_GET_MACADDRS 0x0C /* get all configured MAC addrs */ +#define TXGBE_VF_SET_MCAST_PROMISC 0x0D /* enable multicast promiscuous */ +#define TXGBE_VF_GET_MTU 0x0E /* get bounds on MTU */ +#define TXGBE_VF_SET_MTU 0x0F /* set a specific MTU */ + +/* mailbox API, version 2.0 PF requests */ +#define TXGBE_PF_TRANSPARENT_VLAN 0x0101 /* enable transparent vlan */ + +#define TXGBE_VF_MBX_INIT_TIMEOUT 2000 /* number of retries on mailbox */ +#define TXGBE_VF_MBX_INIT_DELAY 500 /* microseconds between retries */ + +int txgbe_read_mbx(struct txgbe_hw 
*, u32 *, u16, u16); +int txgbe_write_mbx(struct txgbe_hw *, u32 *, u16, u16); +int txgbe_read_posted_mbx(struct txgbe_hw *, u32 *, u16, u16); +int txgbe_write_posted_mbx(struct txgbe_hw *, u32 *, u16, u16); +int txgbe_check_for_msg(struct txgbe_hw *, u16); +int txgbe_check_for_ack(struct txgbe_hw *, u16); +int txgbe_check_for_rst(struct txgbe_hw *, u16); +void txgbe_init_mbx_ops(struct txgbe_hw *hw); +void txgbe_init_mbx_params_vf(struct txgbe_hw *); +void txgbe_init_mbx_params_pf(struct txgbe_hw *); + +#endif /* _TXGBE_MBX_H_ */ diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe_mtd.c b/drivers/net/ethernet/netswift/txgbe/txgbe_mtd.c new file mode 100644 index 000000000000..5c29a28af075 --- /dev/null +++ b/drivers/net/ethernet/netswift/txgbe/txgbe_mtd.c @@ -0,0 +1,1366 @@ +/* + * WangXun 10 Gigabit PCI Express Linux driver + * Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * The full GNU General Public License is included in this distribution in + * the file called "COPYING". 
+ */ + + +#include "txgbe.h" + +MTD_STATUS mtdHwXmdioWrite( + IN MTD_DEV_PTR devPtr, + IN MTD_U16 port, + IN MTD_U16 dev, + IN MTD_U16 reg, + IN MTD_U16 value) +{ + MTD_STATUS result = MTD_OK; + + if (devPtr->fmtdWriteMdio != NULL) { + if (devPtr->fmtdWriteMdio(devPtr, port, dev, reg, value) == MTD_FAIL) { + result = MTD_FAIL; + MTD_DBG_INFO("fmtdWriteMdio 0x%04X failed to port=%d, dev=%d, reg=0x%04X\n", + (unsigned)(value), (unsigned)port, (unsigned)dev, (unsigned)reg); + } + } else + result = MTD_FAIL; + + return result; +} + +MTD_STATUS mtdHwXmdioRead( + IN MTD_DEV_PTR devPtr, + IN MTD_U16 port, + IN MTD_U16 dev, + IN MTD_U16 reg, + OUT MTD_U16 * data) +{ + MTD_STATUS result = MTD_OK; + + if (devPtr->fmtdReadMdio != NULL) { + if (devPtr->fmtdReadMdio(devPtr, port, dev, reg, data) == MTD_FAIL) { + result = MTD_FAIL; + MTD_DBG_INFO("fmtdReadMdio failed from port=%d, dev=%d, reg=0x%04X\n", + (unsigned)port, (unsigned)dev, (unsigned)reg); + } + } else + result = MTD_FAIL; + + return result; +} + +/* + This macro calculates the mask for partial read/write of register's data. 
+*/ +#define MTD_CALC_MASK(fieldOffset, fieldLen, mask) do {\ + if ((fieldLen + fieldOffset) >= 16) \ + mask = (0 - (1 << fieldOffset)); \ + else \ + mask = (((1 << (fieldLen + fieldOffset))) - (1 << fieldOffset));\ + } while (0) + +MTD_STATUS mtdHwGetPhyRegField( + IN MTD_DEV_PTR devPtr, + IN MTD_U16 port, + IN MTD_U16 dev, + IN MTD_U16 regAddr, + IN MTD_U8 fieldOffset, + IN MTD_U8 fieldLength, + OUT MTD_U16 * data) +{ + MTD_U16 tmpData; + MTD_STATUS retVal; + + retVal = mtdHwXmdioRead(devPtr, port, dev, regAddr, &tmpData); + + if (retVal != MTD_OK) { + MTD_DBG_ERROR("Failed to read register \n"); + return MTD_FAIL; + } + + mtdHwGetRegFieldFromWord(tmpData, fieldOffset, fieldLength, data); + + MTD_DBG_INFO("fOff %d, fLen %d, data 0x%04X.\n", (int)fieldOffset, + (int)fieldLength, (int)*data); + + return MTD_OK; +} + +MTD_STATUS mtdHwSetPhyRegField( + IN MTD_DEV_PTR devPtr, + IN MTD_U16 port, + IN MTD_U16 dev, + IN MTD_U16 regAddr, + IN MTD_U8 fieldOffset, + IN MTD_U8 fieldLength, + IN MTD_U16 data) +{ + MTD_U16 tmpData, newData; + MTD_STATUS retVal; + + retVal = mtdHwXmdioRead(devPtr, port, dev, regAddr, &tmpData); + if (retVal != MTD_OK) { + MTD_DBG_ERROR("Failed to read register \n"); + return MTD_FAIL; + } + + mtdHwSetRegFieldToWord(tmpData, data, fieldOffset, fieldLength, &newData); + + retVal = mtdHwXmdioWrite(devPtr, port, dev, regAddr, newData); + + if (retVal != MTD_OK) { + MTD_DBG_ERROR("Failed to write register \n"); + return MTD_FAIL; + } + + MTD_DBG_INFO("fieldOff %d, fieldLen %d, data 0x%x.\n", fieldOffset, + fieldLength, data); + + return MTD_OK; +} + +MTD_STATUS mtdHwGetRegFieldFromWord( + IN MTD_U16 regData, + IN MTD_U8 fieldOffset, + IN MTD_U8 fieldLength, + OUT MTD_U16 *data) +{ + /* Bits mask to be read */ + MTD_U16 mask; + + MTD_CALC_MASK(fieldOffset, fieldLength, mask); + + *data = (regData & mask) >> fieldOffset; + + return MTD_OK; +} + +MTD_STATUS mtdHwSetRegFieldToWord( + IN MTD_U16 regData, + IN MTD_U16 bitFieldData, + IN MTD_U8 
fieldOffset, + IN MTD_U8 fieldLength, + OUT MTD_U16 *data) +{ + /* Bits mask to be read */ + MTD_U16 mask; + + MTD_CALC_MASK(fieldOffset, fieldLength, mask); + + /* Set the desired bits to 0. */ + regData &= ~mask; + /* Set the given data into the above reset bits.*/ + regData |= ((bitFieldData << fieldOffset) & mask); + + *data = regData; + + return MTD_OK; +} + +MTD_STATUS mtdWait(IN MTD_UINT x) +{ + msleep(x); + return MTD_OK; +} + +/* internal device registers */ +MTD_STATUS mtdCheckDeviceCapabilities( + IN MTD_DEV_PTR devPtr, + IN MTD_U16 port, + OUT MTD_BOOL * phyHasMacsec, + OUT MTD_BOOL * phyHasCopperInterface, + OUT MTD_BOOL * isE20X0Device) +{ + MTD_U8 major, minor, inc, test; + MTD_U16 abilities; + + *phyHasMacsec = MTD_TRUE; + *phyHasCopperInterface = MTD_TRUE; + *isE20X0Device = MTD_FALSE; + + if (mtdGetFirmwareVersion(devPtr, port, &major, &minor, &inc, &test) == MTD_FAIL) { + /* firmware not running will produce this case */ + major = minor = inc = test = 0; + } + + if (major == 0 && minor == 0 && inc == 0 && test == 0) { + /* no code loaded into internal processor */ + /* have to read it from the device itself the hard way */ + MTD_U16 reg2, reg3; + MTD_U16 index, index2; + MTD_U16 temp; + MTD_U16 bit16thru23[8]; + + /* save these registers */ + /* ATTEMPT(mtdHwXmdioRead(devPtr,port,MTD_REG_CCCR9,&reg1)); some revs can't read this register reliably */ + ATTEMPT(mtdHwXmdioRead(devPtr, port, 31, 0xF0F0, &reg2)); + ATTEMPT(mtdHwXmdioRead(devPtr, port, 31, 0xF0F5, &reg3)); + + /* clear these bit indications */ + for (index = 0; index < 8; index++) { + bit16thru23[index] = 0; + } + + ATTEMPT(mtdHwXmdioWrite(devPtr, port, 31, 0xF05E, 0x0300)); /* force clock on */ + mtdWait(1); + ATTEMPT(mtdHwXmdioWrite(devPtr, port, 31, 0xF0F0, 0x0102)); /* set access */ + mtdWait(1); + + ATTEMPT(mtdHwXmdioWrite(devPtr, port, 31, 0xF0F5, 0x06D3)); /* sequence needed */ + mtdWait(1); + ATTEMPT(mtdHwXmdioWrite(devPtr, port, 31, 0xF0F5, 0x0593)); + mtdWait(1); + 
ATTEMPT(mtdHwXmdioWrite(devPtr, port, 31, 0xF0F5, 0x0513)); + mtdWait(1); + + index = 0; + index2 = 0; + while (index < 24) { + ATTEMPT(mtdHwXmdioWrite(devPtr, port, 31, 0xF0F5, 0x0413)); + mtdWait(1); + ATTEMPT(mtdHwXmdioWrite(devPtr, port, 31, 0xF0F5, 0x0513)); + mtdWait(1); + + if (index >= 16) { + ATTEMPT(mtdHwXmdioRead(devPtr, port, 31, 0xF0F5, &bit16thru23[index2++])); + } else { + ATTEMPT(mtdHwXmdioRead(devPtr, port, 31, 0xF0F5, &temp)); + } + mtdWait(1); + index++; + } + + if (((bit16thru23[0] >> 11) & 1) | ((bit16thru23[1] >> 11) & 1)) { + *phyHasMacsec = MTD_FALSE; + } + if (((bit16thru23[4] >> 11) & 1) | ((bit16thru23[5] >> 11) & 1)) { + *phyHasCopperInterface = MTD_FALSE; + } + + if (((bit16thru23[6] >> 11) & 1) | ((bit16thru23[7] >> 11) & 1)) { + *isE20X0Device = MTD_TRUE; + } + + ATTEMPT(mtdHwXmdioWrite(devPtr, port, 31, 0xF0F5, 0x0413)); + mtdWait(1); + ATTEMPT(mtdHwXmdioWrite(devPtr, port, 31, 0xF0F5, 0x0493)); + mtdWait(1); + ATTEMPT(mtdHwXmdioWrite(devPtr, port, 31, 0xF0F5, 0x0413)); + mtdWait(1); + ATTEMPT(mtdHwXmdioWrite(devPtr, port, 31, 0xF0F5, 0x0513)); + mtdWait(1); + + /* restore the registers */ + /* ATTEMPT(mtdHwXmdioWrite(devPtr,port,MTD_REG_CCCR9,reg1)); Some revs can't read this register reliably */ + ATTEMPT(mtdHwXmdioWrite(devPtr, port, 31, 0xF05E, 0x5440)); /* set back to reset value */ + ATTEMPT(mtdHwXmdioWrite(devPtr, port, 31, 0xF0F0, reg2)); + ATTEMPT(mtdHwXmdioWrite(devPtr, port, 31, 0xF0F5, reg3)); + + } else { + /* should just read it from the firmware status register */ + ATTEMPT(mtdHwXmdioRead(devPtr, port, MTD_T_UNIT_PMA_PMD, MTD_TUNIT_XG_EXT_STATUS, &abilities)); + if (abilities & (1 << 12)) { + *phyHasMacsec = MTD_FALSE; + } + + if (abilities & (1 << 13)) { + *phyHasCopperInterface = MTD_FALSE; + } + + if (abilities & (1 << 14)) { + *isE20X0Device = MTD_TRUE; + } + + } + + return MTD_OK; +} + +MTD_STATUS mtdIsPhyReadyAfterReset( + IN MTD_DEV_PTR devPtr, + IN MTD_U16 port, + OUT MTD_BOOL * phyReady) +{ + MTD_U16 val; + + 
*phyReady = MTD_FALSE; + + ATTEMPT(mtdHwGetPhyRegField(devPtr, port, MTD_T_UNIT_PMA_PMD, MTD_TUNIT_IEEE_PMA_CTRL1, 15, 1, &val)); + + if (val) { + /* if still in reset return '0' (could be coming up, or disabled by download mode) */ + *phyReady = MTD_FALSE; + } else { + /* if Phy is in normal operation */ + *phyReady = MTD_TRUE; + } + + return MTD_OK; +} + +MTD_STATUS mtdSoftwareReset( + IN MTD_DEV_PTR devPtr, + IN MTD_U16 port, + IN MTD_U16 timeoutMs) +{ + MTD_U16 counter; + MTD_BOOL phyReady; + /* bit self clears when done */ + ATTEMPT(mtdHwSetPhyRegField(devPtr, port, MTD_T_UNIT_PMA_PMD, MTD_TUNIT_IEEE_PMA_CTRL1, 15, 1, 1)); + + if (timeoutMs) { + counter = 0; + ATTEMPT(mtdIsPhyReadyAfterReset(devPtr, port, &phyReady)); + while (phyReady == MTD_FALSE && counter <= timeoutMs) { + ATTEMPT(mtdWait(1)); + ATTEMPT(mtdIsPhyReadyAfterReset(devPtr, port, &phyReady)); + counter++; + } + + if (counter < timeoutMs) { + return MTD_OK; + } else { + /* timed out without becoming ready */ + return MTD_FAIL; + } + } else { + return MTD_OK; + } +} + +MTD_STATUS mtdIsPhyReadyAfterHardwareReset( + IN MTD_DEV_PTR devPtr, + IN MTD_U16 port, + OUT MTD_BOOL *phyReady) +{ + MTD_U16 val; + + *phyReady = MTD_FALSE; + + ATTEMPT(mtdHwGetPhyRegField(devPtr, port, MTD_C_UNIT_GENERAL, MTD_CUNIT_PORT_CTRL, 14, 1, &val)); + + if (val) { + /* if still in reset return '0' (could be coming up, or disabled by download mode) */ + *phyReady = MTD_FALSE; + } else { + /* if Phy is in normal operation */ + *phyReady = MTD_TRUE; + } + return MTD_OK; +} + +MTD_STATUS mtdHardwareReset( + IN MTD_DEV_PTR devPtr, + IN MTD_U16 port, + IN MTD_U16 timeoutMs) +{ + MTD_U16 counter; + MTD_BOOL phyReady; + + /* bit self clears when done */ + ATTEMPT(mtdHwSetPhyRegField(devPtr, port, MTD_C_UNIT_GENERAL, MTD_CUNIT_PORT_CTRL, 14, 1, 1)); + + if (timeoutMs) { + counter = 0; + ATTEMPT(mtdIsPhyReadyAfterHardwareReset(devPtr, port, &phyReady)); + while (phyReady == MTD_FALSE && counter <= timeoutMs) { + 
ATTEMPT(mtdWait(1)); + ATTEMPT(mtdIsPhyReadyAfterHardwareReset(devPtr, port, &phyReady)); + counter++; + } + if (counter < timeoutMs) + return MTD_OK; + else + return MTD_FAIL; /* timed out without becoming ready */ + } else { + return MTD_OK; + } +} + +/****************************************************************************/ + +/****************************************************************************/ +/******************************************************************* + 802.3 Clause 28 and Clause 45 + Autoneg Related Control & Status + *******************************************************************/ +/******************************************************************* + Enabling speeds for autonegotiation + Reading speeds enabled for autonegotation + Set/get pause advertisement for autonegotiation + Other Autoneg-related Control and Status (restart,disable/enable, + force master/slave/auto, checking for autoneg resolution, etc.) + *******************************************************************/ + +#define MTD_7_0010_SPEED_BIT_LENGTH 4 +#define MTD_7_0010_SPEED_BIT_POS 5 +#define MTD_7_8000_SPEED_BIT_LENGTH 2 +#define MTD_7_8000_SPEED_BIT_POS 8 +#define MTD_7_0020_SPEED_BIT_LENGTH 1 /* for 88X32X0 family and 88X33X0 family */ +#define MTD_7_0020_SPEED_BIT_POS 12 +#define MTD_7_0020_SPEED_BIT_LENGTH2 2 /* for 88X33X0 family A0 revision 2.5/5G */ +#define MTD_7_0020_SPEED_BIT_POS2 7 + +/* Bit defines for speed bits */ +#define MTD_FORCED_SPEEDS_BIT_MASK (MTD_SPEED_10M_HD_AN_DIS | MTD_SPEED_10M_FD_AN_DIS | \ + MTD_SPEED_100M_HD_AN_DIS | MTD_SPEED_100M_FD_AN_DIS) +#define MTD_LOWER_BITS_MASK 0x000F /* bits in base page */ +#define MTD_GIG_SPEED_POS 4 +#define MTD_XGIG_SPEED_POS 6 +#define MTD_2P5G_SPEED_POS 11 +#define MTD_5G_SPEED_POS 12 +#define MTD_GET_1000BT_BITS(_speedBits) ((_speedBits & (MTD_SPEED_1GIG_HD | MTD_SPEED_1GIG_FD)) \ + >> MTD_GIG_SPEED_POS) /* 1000BT bits */ +#define MTD_GET_10GBT_BIT(_speedBits) ((_speedBits & MTD_SPEED_10GIG_FD) 
\ + >> MTD_XGIG_SPEED_POS) /* 10GBT bit setting */ +#define MTD_GET_2P5GBT_BIT(_speedBits) ((_speedBits & MTD_SPEED_2P5GIG_FD) \ + >> MTD_2P5G_SPEED_POS) /* 2.5GBT bit setting */ +#define MTD_GET_5GBT_BIT(_speedBits) ((_speedBits & MTD_SPEED_5GIG_FD) \ + >> MTD_5G_SPEED_POS) /* 5GBT bit setting */ + +MTD_STATUS mtdEnableSpeeds( + IN MTD_DEV_PTR devPtr, + IN MTD_U16 port, + IN MTD_U16 speed_bits, + IN MTD_BOOL anRestart) +{ + MTD_BOOL speedForced; + MTD_U16 dummy; + MTD_U16 tempRegValue; + + if (speed_bits & MTD_FORCED_SPEEDS_BIT_MASK) { + /* tried to force the speed, this function is for autonegotiation control */ + return MTD_FAIL; + } + + if (MTD_IS_X32X0_BASE(devPtr->deviceId) && ((speed_bits & MTD_SPEED_2P5GIG_FD) || + (speed_bits & MTD_SPEED_5GIG_FD))) { + return MTD_FAIL; /* tried to advertise 2.5G/5G on a 88X32X0 chipset */ + } + + if (MTD_IS_X33X0_BASE(devPtr->deviceId)) { + const MTD_U16 chipRev = (devPtr->deviceId & 0xf); /* get the chip revision */ + + if (chipRev == 9 || chipRev == 5 || chipRev == 1 || /* Z2 chip revisions */ + chipRev == 8 || chipRev == 4 || chipRev == 0) /* Z1 chip revisions */ { + /* this is an X33X0 or E20X0 Z2/Z1 device and not supported (not compatible with A0) */ + return MTD_FAIL; + } + } + + /* Enable AN and set speed back to power-on default in case previously forced + Only do it if forced, to avoid an extra/unnecessary soft reset */ + ATTEMPT(mtdGetForcedSpeed(devPtr, port, &speedForced, &dummy)); + if (speedForced) { + ATTEMPT(mtdUndoForcedSpeed(devPtr, port, MTD_FALSE)); + } + + if (speed_bits == MTD_ADV_NONE) { + /* Set all speeds to be disabled + Take care of bits in 7.0010 (advertisement register, 10BT and 100BT bits) */ + ATTEMPT(mtdHwSetPhyRegField(devPtr, port, 7, 0x0010,\ + MTD_7_0010_SPEED_BIT_POS, MTD_7_0010_SPEED_BIT_LENGTH, \ + 0)); + + /* Take care of speed bits in 7.8000 (1000BASE-T speed bits) */ + ATTEMPT(mtdHwSetPhyRegField(devPtr, port, 7, 0x8000,\ + MTD_7_8000_SPEED_BIT_POS, MTD_7_8000_SPEED_BIT_LENGTH, \ 
+ 0)); + + /* Now take care of bit in 7.0020 (10GBASE-T) */ + ATTEMPT(mtdHwSetPhyRegField(devPtr, port, 7, 0x0020,\ + MTD_7_0020_SPEED_BIT_POS, MTD_7_0020_SPEED_BIT_LENGTH, 0)); + + if (MTD_IS_X33X0_BASE(devPtr->deviceId)) { + /* Now take care of bits in 7.0020 (2.5G, 5G speed bits) */ + ATTEMPT(mtdHwSetPhyRegField(devPtr, port, 7, 0x0020,\ + MTD_7_0020_SPEED_BIT_POS2, MTD_7_0020_SPEED_BIT_LENGTH2, 0)); + } + } else { + /* Take care of bits in 7.0010 (advertisement register, 10BT and 100BT bits) */ + ATTEMPT(mtdHwSetPhyRegField(devPtr, port, 7, 0x0010,\ + MTD_7_0010_SPEED_BIT_POS, MTD_7_0010_SPEED_BIT_LENGTH, \ + (speed_bits & MTD_LOWER_BITS_MASK))); + + /* Take care of speed bits in 7.8000 (1000BASE-T speed bits) */ + ATTEMPT(mtdHwSetPhyRegField(devPtr, port, 7, 0x8000,\ + MTD_7_8000_SPEED_BIT_POS, MTD_7_8000_SPEED_BIT_LENGTH, \ + MTD_GET_1000BT_BITS(speed_bits))); + + + /* Now take care of bits in 7.0020 (10GBASE-T first) */ + ATTEMPT(mtdHwXmdioRead(devPtr, port, 7, 0x0020, &tempRegValue)); + ATTEMPT(mtdHwSetRegFieldToWord(tempRegValue, MTD_GET_10GBT_BIT(speed_bits),\ + MTD_7_0020_SPEED_BIT_POS, MTD_7_0020_SPEED_BIT_LENGTH, \ + &tempRegValue)); + + if (MTD_IS_X33X0_BASE(devPtr->deviceId)) { + /* Now take care of 2.5G bit in 7.0020 */ + ATTEMPT(mtdHwSetRegFieldToWord(tempRegValue, MTD_GET_2P5GBT_BIT(speed_bits),\ + 7, 1, \ + &tempRegValue)); + + /* Now take care of 5G bit in 7.0020 */ + ATTEMPT(mtdHwSetRegFieldToWord(tempRegValue, MTD_GET_5GBT_BIT(speed_bits),\ + 8, 1, \ + &tempRegValue)); + } + + /* Now write result back to 7.0020 */ + ATTEMPT(mtdHwXmdioWrite(devPtr, port, 7, 0x0020, tempRegValue)); + + if (MTD_GET_10GBT_BIT(speed_bits) || + MTD_GET_2P5GBT_BIT(speed_bits) || + MTD_GET_5GBT_BIT(speed_bits)) { + /* Set XNP on if any bit that required it was set */ + ATTEMPT(mtdHwSetPhyRegField(devPtr, port, 7, 0, 13, 1, 1)); + } + } + + if (anRestart) { + return ((MTD_STATUS)(mtdAutonegEnable(devPtr, port) || + mtdAutonegRestart(devPtr, port))); + } + + return 
MTD_OK; +} + +MTD_STATUS mtdUndoForcedSpeed( + IN MTD_DEV_PTR devPtr, + IN MTD_U16 port, + IN MTD_BOOL anRestart) +{ + + ATTEMPT(mtdHwSetPhyRegField(devPtr, port, MTD_T_UNIT_PMA_PMD, MTD_TUNIT_IEEE_PMA_CTRL1, 13, 1, 1)); + ATTEMPT(mtdHwSetPhyRegField(devPtr, port, MTD_T_UNIT_PMA_PMD, MTD_TUNIT_IEEE_PMA_CTRL1, 6, 1, 1)); + + /* when speed bits are changed, T unit sw reset is required, wait until phy is ready */ + ATTEMPT(mtdSoftwareReset(devPtr, port, 1000)); + + if (anRestart) { + return ((MTD_STATUS)(mtdAutonegEnable(devPtr, port) || + mtdAutonegRestart(devPtr, port))); + } + + return MTD_OK; +} + + +MTD_STATUS mtdGetForcedSpeed( + IN MTD_DEV_PTR devPtr, + IN MTD_U16 port, + OUT MTD_BOOL *speedIsForced, + OUT MTD_U16 *forcedSpeed) +{ + MTD_U16 val, bit0, bit1, forcedSpeedBits, duplexBit; + MTD_BOOL anDisabled; + + *speedIsForced = MTD_FALSE; + *forcedSpeed = MTD_ADV_NONE; + + /* check if 7.0.12 is 0 or 1 (disabled or enabled) */ + ATTEMPT(mtdHwGetPhyRegField(devPtr, port, 7, 0, 12, 1, &val)); + + (val) ? 
(anDisabled = MTD_FALSE) : (anDisabled = MTD_TRUE); + + if (anDisabled) { + /* autoneg is disabled, see if it's forced to one of the speeds that work without AN */ + ATTEMPT(mtdHwGetPhyRegField(devPtr, port, MTD_T_UNIT_PMA_PMD, MTD_TUNIT_IEEE_PMA_CTRL1, 6, 1, &bit0)); + ATTEMPT(mtdHwGetPhyRegField(devPtr, port, MTD_T_UNIT_PMA_PMD, MTD_TUNIT_IEEE_PMA_CTRL1, 13, 1, &bit1)); + + /* now read the duplex bit setting */ + ATTEMPT(mtdHwGetPhyRegField(devPtr, port, 7, 0x8000, 4, 1, &duplexBit)); + + forcedSpeedBits = 0; + forcedSpeedBits = bit0 | (bit1 << 1); + + if (forcedSpeedBits == 0) { + /* it's set to 10BT */ + if (duplexBit) { + *speedIsForced = MTD_TRUE; + *forcedSpeed = MTD_SPEED_10M_FD_AN_DIS; + } else { + *speedIsForced = MTD_TRUE; + *forcedSpeed = MTD_SPEED_10M_HD_AN_DIS; + } + } else if (forcedSpeedBits == 2) { + /* it's set to 100BT */ + if (duplexBit) { + *speedIsForced = MTD_TRUE; + *forcedSpeed = MTD_SPEED_100M_FD_AN_DIS; + } else { + *speedIsForced = MTD_TRUE; + *forcedSpeed = MTD_SPEED_100M_HD_AN_DIS; + } + } + /* else it's set to 1000BT or 10GBT which require AN to work */ + } + + return MTD_OK; +} + +MTD_STATUS mtdAutonegRestart( + IN MTD_DEV_PTR devPtr, + IN MTD_U16 port) +{ + /* set 7.0.9, restart AN */ + return (mtdHwSetPhyRegField(devPtr, port, 7, 0, + 9, 1, 1)); +} + + +MTD_STATUS mtdAutonegEnable( + IN MTD_DEV_PTR devPtr, + IN MTD_U16 port) +{ + /* set 7.0.12=1, enable AN */ + return (mtdHwSetPhyRegField(devPtr, port, 7, 0, + 12, 1, 1)); +} + +/****************************************************************************** + MTD_STATUS mtdAutonegIsSpeedDuplexResolutionDone + ( + IN MTD_DEV_PTR devPtr, + IN MTD_U16 port, + OUT MTD_BOOL *anSpeedResolutionDone + ); + + Inputs: + devPtr - pointer to MTD_DEV initialized by mtdLoadDriver() call + port - MDIO port address, 0-31 + + Outputs: + anSpeedResolutionDone - one of the following + MTD_TRUE if speed/duplex is resolved + MTD_FALSE if speed/duplex is not resolved + + Returns: + MTD_OK or MTD_FAIL, if 
query was successful or not + + Description: + Queries register 3.8008.11 Speed/Duplex resolved to see if autonegotiation + is resolved or in progress. See note below. This function is only to be + called if autonegotiation is enabled and speed is not forced. + + anSpeedResolutionDone being MTD_TRUE only indicates if AN has determined + the speed and duplex bits in 3.8008, which will indicate what registers + to read later for AN resolution after AN has completed. + + Side effects: + None + + Notes/Warnings: + If autonegotiation is disabled or speed is forced, this function returns + MTD_TRUE. + +******************************************************************************/ +MTD_STATUS mtdAutonegIsSpeedDuplexResolutionDone( + IN MTD_DEV_PTR devPtr, + IN MTD_U16 port, + OUT MTD_BOOL *anSpeedResolutionDone) +{ + MTD_U16 val; + + /* read speed/duplex resolution done bit in 3.8008 bit 11 */ + if (mtdHwGetPhyRegField(devPtr, port, + 3, 0x8008, 11, 1, &val) == MTD_FAIL) { + *anSpeedResolutionDone = MTD_FALSE; + return MTD_FAIL; + } + + (val) ? 
(*anSpeedResolutionDone = MTD_TRUE) : (*anSpeedResolutionDone = MTD_FALSE); + + return MTD_OK; +} + + +MTD_STATUS mtdGetAutonegSpeedDuplexResolution( + IN MTD_DEV_PTR devPtr, + IN MTD_U16 port, + OUT MTD_U16 *speedResolution) +{ + MTD_U16 val, speed, speed2, duplex; + MTD_BOOL resDone; + + *speedResolution = MTD_ADV_NONE; + + /* check if AN is enabled */ + ATTEMPT(mtdHwGetPhyRegField(devPtr, port, \ + 7, 0, 12, 1, &val)); + + if (val) { + /* an is enabled, check if speed is resolved */ + ATTEMPT(mtdAutonegIsSpeedDuplexResolutionDone(devPtr, port, &resDone)); + + if (resDone) { + ATTEMPT(mtdHwGetPhyRegField(devPtr, port, \ + 3, 0x8008, 14, 2, &speed)); + + ATTEMPT(mtdHwGetPhyRegField(devPtr, port, \ + 3, 0x8008, 13, 1, &duplex)); + + switch (speed) { + case MTD_CU_SPEED_10_MBPS: + if (duplex) { + *speedResolution = MTD_SPEED_10M_FD; + } else { + *speedResolution = MTD_SPEED_10M_HD; + } + break; + case MTD_CU_SPEED_100_MBPS: + if (duplex) { + *speedResolution = MTD_SPEED_100M_FD; + } else { + *speedResolution = MTD_SPEED_100M_HD; + } + break; + case MTD_CU_SPEED_1000_MBPS: + if (duplex) { + *speedResolution = MTD_SPEED_1GIG_FD; + } else { + *speedResolution = MTD_SPEED_1GIG_HD; + } + break; + case MTD_CU_SPEED_10_GBPS: /* also MTD_CU_SPEED_NBT */ + if (MTD_IS_X32X0_BASE(devPtr->deviceId)) { + *speedResolution = MTD_SPEED_10GIG_FD; /* 10G has only full duplex, ignore duplex bit */ + } else { + ATTEMPT(mtdHwGetPhyRegField(devPtr, port, \ + 3, 0x8008, 2, 2, &speed2)); + + switch (speed2) { + case MTD_CU_SPEED_NBT_10G: + *speedResolution = MTD_SPEED_10GIG_FD; + break; + + case MTD_CU_SPEED_NBT_5G: + *speedResolution = MTD_SPEED_5GIG_FD; + break; + + case MTD_CU_SPEED_NBT_2P5G: + *speedResolution = MTD_SPEED_2P5GIG_FD; + break; + + default: + /* this is an error */ + return MTD_FAIL; + break; + } + } + break; + default: + /* this is an error */ + return MTD_FAIL; + break; + } + + } + + } + + return MTD_OK; +} + +MTD_STATUS mtdSetPauseAdvertisement( + IN MTD_DEV_PTR 
devPtr, + IN MTD_U16 port, + IN MTD_U32 pauseType, + IN MTD_BOOL anRestart) +{ + /* sets/clears bits 11, 10 (A6,A5 in the tech bit field of 7.16) */ + ATTEMPT(mtdHwSetPhyRegField(devPtr, port, 7, 0x0010, \ + 10, 2, (MTD_U16)pauseType)); + + if (anRestart) { + return ((MTD_STATUS)(mtdAutonegEnable(devPtr, port) || + mtdAutonegRestart(devPtr, port))); + } + + return MTD_OK; +} + + +/****************************************************************************** + MTD_STATUS mtdAutonegIsCompleted + ( + IN MTD_DEV_PTR devPtr, + IN MTD_U16 port, + OUT MTD_BOOL *anStatusReady + ); + + Inputs: + devPtr - pointer to MTD_DEV initialized by mtdLoadDriver() call + port - MDIO port address, 0-31 + + Outputs: + anStatusReady - one of the following + MTD_TRUE if AN status registers are available to be read (7.1, 7.33, 7.32769, etc.) + MTD_FALSE if AN is not completed and AN status registers may contain old data + + Returns: + MTD_OK or MTD_FAIL, if query was successful or not + + Description: + Checks 7.1.5 for 1. If 1, returns MTD_TRUE. If not, returns MTD_FALSE. Many + autonegotiation status registers are not valid unless AN has completed + meaning 7.1.5 = 1. + + Side effects: + None + + Notes/Warnings: + Call this function before reading 7.33 or 7.32769 to check for master/slave + resolution or other negotiated parameters which are negotiated during + autonegotiation like fast retrain, fast retrain type, etc. + +******************************************************************************/ +MTD_STATUS mtdAutonegIsCompleted( + IN MTD_DEV_PTR devPtr, + IN MTD_U16 port, + OUT MTD_BOOL *anStatusReady) +{ + MTD_U16 val; + + /* read an completed, 7.1.5 bit */ + if (mtdHwGetPhyRegField(devPtr, port, + 7, 1, 5, 1, &val) == MTD_FAIL) { + *anStatusReady = MTD_FALSE; + return MTD_FAIL; + } + + (val) ? 
(*anStatusReady = MTD_TRUE) : (*anStatusReady = MTD_FALSE);
+
+        return MTD_OK;
+}
+
+
+MTD_STATUS mtdGetLPAdvertisedPause(
+        IN MTD_DEV_PTR devPtr,
+        IN MTD_U16 port,
+        OUT MTD_U8 *pauseBits)
+{
+        MTD_U16 val;
+        MTD_BOOL anStatusReady;
+
+        /* Make sure AN is complete */
+        ATTEMPT(mtdAutonegIsCompleted(devPtr, port, &anStatusReady));
+
+        if (anStatusReady == MTD_FALSE) {
+                *pauseBits = MTD_CLEAR_PAUSE;
+                return MTD_FAIL;
+        }
+
+        /* get bits 11, 10 (A6,A5 in the tech bit field of 7.19) */
+        if (mtdHwGetPhyRegField(devPtr, port, 7, 19,
+                10, 2, &val) == MTD_FAIL) {
+                *pauseBits = MTD_CLEAR_PAUSE;
+                return MTD_FAIL;
+        }
+
+        *pauseBits = (MTD_U8)val;
+
+        return MTD_OK;
+}
+
+/*******************************************************************
+  Firmware Version
+ *******************************************************************/
+/****************************************************************************/
+MTD_STATUS mtdGetFirmwareVersion(
+        IN MTD_DEV_PTR devPtr,
+        IN MTD_U16 port,
+        OUT MTD_U8 *major,
+        OUT MTD_U8 *minor,
+        OUT MTD_U8 *inc,
+        OUT MTD_U8 *test)
+{
+        MTD_U16 reg_49169, reg_49170;
+
+        ATTEMPT(mtdHwXmdioRead(devPtr, port, 1, 49169, &reg_49169));
+
+        *major = (reg_49169 & 0xFF00) >> 8;
+        *minor = (reg_49169 & 0x00FF);
+
+        ATTEMPT(mtdHwXmdioRead(devPtr, port, 1, 49170, &reg_49170));
+
+        *inc = (reg_49170 & 0xFF00) >> 8;
+        *test = (reg_49170 & 0x00FF);
+
+        /* firmware is not running if all 0's */
+        if (!(*major || *minor || *inc || *test)) {
+                return MTD_FAIL;
+        }
+        return MTD_OK;
+}
+
+
+MTD_STATUS mtdGetPhyRevision(
+        IN MTD_DEV_PTR devPtr,
+        IN MTD_U16 port,
+        OUT MTD_DEVICE_ID * phyRev,
+        OUT MTD_U8 *numPorts,
+        OUT MTD_U8 *thisPort)
+{
+        MTD_U16 temp = 0, tryCounter, temp2, baseType, reportedHwRev;
+        MTD_U16 revision = 0, numports, thisport, readyBit, fwNumports, fwThisport;
+        MTD_BOOL registerExists, regReady, hasMacsec, hasCopper, isE20X0Device;
+        MTD_U8 major, minor, inc, test;
+
+        *phyRev = MTD_REV_UNKNOWN; /* in case we have any failed ATTEMPT below,
will return unknown */ + *numPorts = 0; + *thisPort = 0; + + /* first check base type of device, get reported rev and port info */ + ATTEMPT(mtdHwXmdioRead(devPtr, port, 3, 0xD00D, &temp)); + baseType = ((temp & 0xFC00) >> 6); + reportedHwRev = (temp & 0x000F); + numports = ((temp & 0x0380) >> 7) + 1; + thisport = ((temp & 0x0070) >> 4); + + /* find out if device has macsec/ptp, copper unit or is an E20X0-type device */ + ATTEMPT(mtdCheckDeviceCapabilities(devPtr, port, &hasMacsec, &hasCopper, &isE20X0Device)); + + /* check if internal processor firmware is up and running, and if so, easier to get info */ + if (mtdGetFirmwareVersion(devPtr, port, &major, &minor, &inc, &test) == MTD_FAIL) { + major = minor = inc = test = 0; /* this is expected if firmware is not loaded/running */ + } + + if (major == 0 && minor == 0 && inc == 0 && test == 0) { + /* no firmware running, have to verify device revision */ + if (MTD_IS_X32X0_BASE(baseType)) { + /* A0 and Z2 report the same revision, need to check which is which */ + if (reportedHwRev == 1) { + /* need to figure out if it's A0 or Z2 */ + /* remove internal reset */ + ATTEMPT(mtdHwSetPhyRegField(devPtr, port, 3, 0xD801, 5, 1, 1)); + + /* wait until it's ready */ + regReady = MTD_FALSE; + tryCounter = 0; + while (regReady == MTD_FALSE && tryCounter++ < 10) { + ATTEMPT(mtdWait(1)); /* timeout is set to 10 ms */ + ATTEMPT(mtdHwGetPhyRegField(devPtr, port, 3, 0xD007, 6, 1, &readyBit)); + if (readyBit == 1) { + regReady = MTD_TRUE; + } + } + + if (regReady == MTD_FALSE) { + /* timed out, can't tell for sure what rev this is */ + *numPorts = 0; + *thisPort = 0; + *phyRev = MTD_REV_UNKNOWN; + return MTD_FAIL; + } + + /* perform test */ + registerExists = MTD_FALSE; + ATTEMPT(mtdHwXmdioRead(devPtr, port, 3, 0x8EC6, &temp)); + ATTEMPT(mtdHwXmdioWrite(devPtr, port, 3, 0x8EC6, 0xA5A5)); + ATTEMPT(mtdHwXmdioRead(devPtr, port, 3, 0x8EC6, &temp2)); + + /* put back internal reset */ + ATTEMPT(mtdHwSetPhyRegField(devPtr, port, 3, 0xD801, 
5, 1, 0)); + + if (temp == 0 && temp2 == 0xA5A5) { + registerExists = MTD_TRUE; + } + + if (registerExists == MTD_TRUE) { + revision = 2; /* this is actually QA0 */ + } else { + revision = reportedHwRev; /* this is a QZ2 */ + } + + } else { + /* it's not A0 or Z2, use what's reported by the hardware */ + revision = reportedHwRev; + } + } else if (MTD_IS_X33X0_BASE(baseType)) { + /* all 33X0 devices report correct revision */ + revision = reportedHwRev; + } + + /* have to use what's reported by the hardware */ + *numPorts = (MTD_U8)numports; + *thisPort = (MTD_U8)thisport; + } else { + /* there is firmware loaded/running in internal processor */ + /* can get device revision reported by firmware */ + ATTEMPT(mtdHwXmdioRead(devPtr, port, MTD_T_UNIT_PMA_PMD, MTD_TUNIT_PHY_REV_INFO_REG, &temp)); + ATTEMPT(mtdHwGetRegFieldFromWord(temp, 0, 4, &revision)); + ATTEMPT(mtdHwGetRegFieldFromWord(temp, 4, 3, &fwNumports)); + ATTEMPT(mtdHwGetRegFieldFromWord(temp, 7, 3, &fwThisport)); + if (fwNumports == numports && fwThisport == thisport) { + *numPorts = (MTD_U8)numports; + *thisPort = (MTD_U8)thisport; + } else { + *phyRev = MTD_REV_UNKNOWN; + *numPorts = 0; + *thisPort = 0; + return MTD_FAIL; /* firmware and hardware are reporting different values */ + } + } + + /* now have correct information to build up the MTD_DEVICE_ID */ + if (MTD_IS_X32X0_BASE(baseType)) { + temp = MTD_X32X0_BASE; + } else if (MTD_IS_X33X0_BASE(baseType)) { + temp = MTD_X33X0_BASE; + } else { + *phyRev = MTD_REV_UNKNOWN; + *numPorts = 0; + *thisPort = 0; + return MTD_FAIL; + } + + if (hasMacsec) { + temp |= MTD_MACSEC_CAPABLE; + } + + if (hasCopper) { + temp |= MTD_COPPER_CAPABLE; + } + + if (MTD_IS_X33X0_BASE(baseType) && isE20X0Device) { + temp |= MTD_E20X0_DEVICE; + } + + temp |= (revision & 0xF); + + *phyRev = (MTD_DEVICE_ID)temp; + + /* make sure we got a good one */ + if (mtdIsPhyRevisionValid(*phyRev) == MTD_OK) { + return MTD_OK; + } else { + return MTD_FAIL; /* unknown or unsupported, if 
recognized but unsupported, value is still valid */ + } +} + +MTD_STATUS mtdIsPhyRevisionValid(IN MTD_DEVICE_ID phyRev) +{ + switch (phyRev) { + /* list must match MTD_DEVICE_ID */ + case MTD_REV_3240P_Z2: + case MTD_REV_3240P_A0: + case MTD_REV_3240P_A1: + case MTD_REV_3220P_Z2: + case MTD_REV_3220P_A0: + + case MTD_REV_3240_Z2: + case MTD_REV_3240_A0: + case MTD_REV_3240_A1: + case MTD_REV_3220_Z2: + case MTD_REV_3220_A0: + + case MTD_REV_3310P_A0: + case MTD_REV_3320P_A0: + case MTD_REV_3340P_A0: + case MTD_REV_3310_A0: + case MTD_REV_3320_A0: + case MTD_REV_3340_A0: + + case MTD_REV_E2010P_A0: + case MTD_REV_E2020P_A0: + case MTD_REV_E2040P_A0: + case MTD_REV_E2010_A0: + case MTD_REV_E2020_A0: + case MTD_REV_E2040_A0: + + case MTD_REV_2340P_A1: + case MTD_REV_2320P_A0: + case MTD_REV_2340_A1: + case MTD_REV_2320_A0: + return MTD_OK; + break; + + /* unsupported PHYs */ + case MTD_REV_3310P_Z1: + case MTD_REV_3320P_Z1: + case MTD_REV_3340P_Z1: + case MTD_REV_3310_Z1: + case MTD_REV_3320_Z1: + case MTD_REV_3340_Z1: + + case MTD_REV_3310P_Z2: + case MTD_REV_3320P_Z2: + case MTD_REV_3340P_Z2: + case MTD_REV_3310_Z2: + case MTD_REV_3320_Z2: + case MTD_REV_3340_Z2: + + + case MTD_REV_E2010P_Z2: + case MTD_REV_E2020P_Z2: + case MTD_REV_E2040P_Z2: + case MTD_REV_E2010_Z2: + case MTD_REV_E2020_Z2: + case MTD_REV_E2040_Z2: + default: + return MTD_FAIL; /* is either MTD_REV_UNKNOWN or not in the above list */ + break; + } +} + +/* mtdCunit.c */ +MTD_STATUS mtdCunitSwReset( + IN MTD_DEV_PTR devPtr, + IN MTD_U16 port) +{ + return mtdHwSetPhyRegField(devPtr, port, MTD_C_UNIT_GENERAL, MTD_CUNIT_PORT_CTRL, 15, 1, 1); +} + +/* mtdHxunit.c */ +MTD_STATUS mtdRerunSerdesAutoInitializationUseAutoMode( + IN MTD_DEV_PTR devPtr, + IN MTD_U16 port) +{ + MTD_U16 temp, temp2, temp3; + MTD_U16 waitCounter; + + ATTEMPT(mtdHwXmdioRead(devPtr, port, MTD_T_UNIT_AN, MTD_SERDES_CTRL_STATUS, &temp)); + + ATTEMPT(mtdHwSetRegFieldToWord(temp, 3, 14, 2, &temp2)); /* execute bits and disable bits set 
*/
+
+        ATTEMPT(mtdHwXmdioWrite(devPtr, port, MTD_T_UNIT_AN, MTD_SERDES_CTRL_STATUS, temp2));
+
+        /* wait for it to be done */
+        waitCounter = 0;
+        ATTEMPT(mtdHwXmdioRead(devPtr, port, MTD_T_UNIT_AN, MTD_SERDES_CTRL_STATUS, &temp3));
+        while ((temp3 & 0x8000) && (waitCounter < 100)) {
+                ATTEMPT(mtdWait(1));
+                ATTEMPT(mtdHwXmdioRead(devPtr, port, MTD_T_UNIT_AN, MTD_SERDES_CTRL_STATUS, &temp3));
+                waitCounter++;
+        }
+
+        /* if speed changed, let it stay. that's the speed the serdes ended up being initialized to */
+        if (waitCounter >= 100) {
+                return MTD_FAIL; /* execute timed out */
+        }
+
+        return MTD_OK;
+}
+
+
+/* mtdHunit.c */
+/******************************************************************************
+ Mac Interface functions
+******************************************************************************/
+
+MTD_STATUS mtdSetMacInterfaceControl(
+        IN MTD_DEV_PTR devPtr,
+        IN MTD_U16 port,
+        IN MTD_U16 macType,
+        IN MTD_BOOL macIfPowerDown,
+        IN MTD_U16 macIfSnoopSel,
+        IN MTD_U16 macIfActiveLaneSelect,
+        IN MTD_U16 macLinkDownSpeed,
+        IN MTD_U16 macMaxIfSpeed, /* 33X0/E20X0 devices only */
+        IN MTD_BOOL doSwReset,
+        IN MTD_BOOL rerunSerdesInitialization)
+{
+        MTD_U16 cunitPortCtrl, cunitModeConfig;
+
+        /* do range checking on parameters */
+        if ((macType > MTD_MAC_LEAVE_UNCHANGED)) {
+                return MTD_FAIL;
+        }
+
+        if ((macIfSnoopSel > MTD_MAC_SNOOP_LEAVE_UNCHANGED) ||
+            (macIfSnoopSel == 1)) {
+                return MTD_FAIL;
+        }
+
+        if (macIfActiveLaneSelect > 1) {
+                return MTD_FAIL;
+        }
+
+        if (macLinkDownSpeed > MTD_MAC_SPEED_LEAVE_UNCHANGED) {
+                return MTD_FAIL;
+        }
+
+        if (!(macMaxIfSpeed == MTD_MAX_MAC_SPEED_10G ||
+              macMaxIfSpeed == MTD_MAX_MAC_SPEED_5G ||
+              macMaxIfSpeed == MTD_MAX_MAC_SPEED_2P5G ||
+              macMaxIfSpeed == MTD_MAX_MAC_SPEED_LEAVE_UNCHANGED ||
+              macMaxIfSpeed == MTD_MAX_MAC_SPEED_NOT_APPLICABLE)) {
+                return MTD_FAIL;
+        }
+
+
+        ATTEMPT(mtdHwXmdioRead(devPtr, port, MTD_C_UNIT_GENERAL, MTD_CUNIT_PORT_CTRL, &cunitPortCtrl));
+
ATTEMPT(mtdHwXmdioRead(devPtr, port, MTD_C_UNIT_GENERAL, MTD_CUNIT_MODE_CONFIG, &cunitModeConfig)); + + /* Because writes of some of these bits don't show up in the register on a read + * until after the software reset, we can't do repeated read-modify-writes + * to the same register or we will lose those changes. + + * This approach also cuts down on IO and speeds up the code + */ + + if (macType < MTD_MAC_LEAVE_UNCHANGED) { + ATTEMPT(mtdHwSetRegFieldToWord(cunitPortCtrl, macType, 0, 3, &cunitPortCtrl)); + } + + ATTEMPT(mtdHwSetRegFieldToWord(cunitModeConfig, (MTD_U16)macIfPowerDown, 3, 1, &cunitModeConfig)); + + if (macIfSnoopSel < MTD_MAC_SNOOP_LEAVE_UNCHANGED) { + ATTEMPT(mtdHwSetRegFieldToWord(cunitModeConfig, macIfSnoopSel, 8, 2, &cunitModeConfig)); + } + + ATTEMPT(mtdHwSetRegFieldToWord(cunitModeConfig, macIfActiveLaneSelect, 10, 1, &cunitModeConfig)); + + if (macLinkDownSpeed < MTD_MAC_SPEED_LEAVE_UNCHANGED) { + ATTEMPT(mtdHwSetRegFieldToWord(cunitModeConfig, macLinkDownSpeed, 6, 2, &cunitModeConfig)); + } + + /* Now write changed values */ + ATTEMPT(mtdHwXmdioWrite(devPtr, port, MTD_C_UNIT_GENERAL, MTD_CUNIT_PORT_CTRL, cunitPortCtrl)); + ATTEMPT(mtdHwXmdioWrite(devPtr, port, MTD_C_UNIT_GENERAL, MTD_CUNIT_MODE_CONFIG, cunitModeConfig)); + + if (MTD_IS_X33X0_BASE(devPtr->deviceId)) { + if (macMaxIfSpeed != MTD_MAX_MAC_SPEED_LEAVE_UNCHANGED) { + ATTEMPT(mtdHwSetPhyRegField(devPtr, port, 31, 0xF0A8, 0, 2, macMaxIfSpeed)); + } + } + + if (doSwReset == MTD_TRUE) { + ATTEMPT(mtdCunitSwReset(devPtr, port)); + + if (macLinkDownSpeed < MTD_MAC_SPEED_LEAVE_UNCHANGED) { + ATTEMPT(mtdCunitSwReset(devPtr, port)); /* need 2x for changes to macLinkDownSpeed */ + } + + if (rerunSerdesInitialization == MTD_TRUE) { + ATTEMPT(mtdRerunSerdesAutoInitializationUseAutoMode(devPtr, port)); + } + } + + return MTD_OK; +} + + +/******************************************************************************* +* mtdSemCreate +* +* DESCRIPTION: +* Create semaphore. 
+* +* INPUTS: +* state - beginning state of the semaphore, either MTD_SEM_EMPTY or MTD_SEM_FULL +* +* OUTPUTS: +* None +* +* RETURNS: +* MTD_SEM if success. Otherwise, NULL +* +* COMMENTS: +* None +* +*******************************************************************************/ +MTD_SEM mtdSemCreate( + IN MTD_DEV * dev, + IN MTD_SEM_BEGIN_STATE state) +{ + if (dev->semCreate) + return dev->semCreate(state); + + return 1; /* should return any value other than 0 to let it keep going */ +} + +MTD_STATUS mtdLoadDriver( + IN FMTD_READ_MDIO readMdio, + IN FMTD_WRITE_MDIO writeMdio, + IN MTD_BOOL macsecIndirectAccess, + IN FMTD_SEM_CREATE semCreate, + IN FMTD_SEM_DELETE semDelete, + IN FMTD_SEM_TAKE semTake, + IN FMTD_SEM_GIVE semGive, + IN MTD_U16 anyPort, + OUT MTD_DEV * dev) +{ + MTD_U16 data; + + MTD_DBG_INFO("mtdLoadDriver Called.\n"); + + /* Check for parameters validity */ + if (dev == NULL) { + MTD_DBG_ERROR("MTD_DEV pointer is NULL.\n"); + return MTD_API_ERR_DEV; + } + + /* The initialization was already done. */ + if (dev->devEnabled) { + MTD_DBG_ERROR("Device Driver already loaded.\n"); + return MTD_API_ERR_DEV_ALREADY_EXIST; + } + + /* Make sure mtdWait() was implemented */ + if (mtdWait(1) == MTD_FAIL) { + MTD_DBG_ERROR("mtdWait() not implemented.\n"); + return MTD_FAIL; + } + + dev->fmtdReadMdio = readMdio; + dev->fmtdWriteMdio = writeMdio; + + dev->semCreate = semCreate; + dev->semDelete = semDelete; + dev->semTake = semTake; + dev->semGive = semGive; + dev->macsecIndirectAccess = macsecIndirectAccess; /* 88X33X0 and later force direct access */ + + /* try to read 1.0 */ + if ((mtdHwXmdioRead(dev, anyPort, 1, 0, &data)) != MTD_OK) { + MTD_DBG_ERROR("Reading to reg %x failed.\n", 0); + return MTD_API_FAIL_READ_REG; + } + + MTD_DBG_INFO("mtdLoadDriver successful.\n"); + + /* Initialize the MACsec Register Access semaphore. 
*/ + dev->multiAddrSem = mtdSemCreate(dev, MTD_SEM_FULL); + if (dev->multiAddrSem == 0) { + MTD_DBG_ERROR("semCreate Failed.\n"); + return MTD_API_FAIL_SEM_CREATE; + } + + if (dev->msec_ctrl.msec_rev == MTD_MSEC_REV_FPGA) { + dev->deviceId = MTD_REV_3310P_Z2; /* verification: change if needed */ + dev->numPorts = 1; /* verification: change if needed */ + dev->thisPort = 0; + } else { + /* After everything else is done, can fill in the device id */ + if ((mtdGetPhyRevision(dev, anyPort, + &(dev->deviceId), + &(dev->numPorts), + &(dev->thisPort))) != MTD_OK) { + MTD_DBG_ERROR("mtdGetPhyRevision Failed.\n"); + return MTD_FAIL; + } + } + + if (MTD_IS_X33X0_BASE(dev->deviceId)) { + dev->macsecIndirectAccess = MTD_FALSE; /* bug was fixed in 88X33X0 and later revisions, go direct */ + } + + dev->devEnabled = MTD_TRUE; + + MTD_DBG_INFO("mtdLoadDriver successful !!!.\n"); + + return MTD_OK; +} diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe_mtd.h b/drivers/net/ethernet/netswift/txgbe/txgbe_mtd.h new file mode 100644 index 000000000000..1c5daae94a54 --- /dev/null +++ b/drivers/net/ethernet/netswift/txgbe/txgbe_mtd.h @@ -0,0 +1,1540 @@ +/* + * WangXun 10 Gigabit PCI Express Linux driver + * Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * The full GNU General Public License is included in this distribution in + * the file called "COPYING". 
+ */ + +#ifndef _TXGBE_MTD_H_ +#define _TXGBE_MTD_H_ + +#define C_LINKAGE 1 /* set to 1 if C compile/linkage on C files is desired with C++ */ + +#if C_LINKAGE +#if defined __cplusplus + extern "C" { +#endif +#endif + +/* general */ + +#undef IN +#define IN +#undef OUT +#define OUT +#undef INOUT +#define INOUT + +#ifndef NULL +#define NULL ((void *)0) +#endif + +typedef void MTD_VOID; +typedef char MTD_8; +typedef short MTD_16; +typedef long MTD_32; +typedef long long MTD_64; + +typedef unsigned char MTD_U8; +typedef unsigned short MTD_U16; +typedef unsigned long MTD_U32; +typedef unsigned int MTD_UINT; +typedef int MTD_INT; +typedef signed short MTD_S16; + +typedef unsigned long long MTD_U64; + +typedef enum { + MTD_FALSE = 0, + MTD_TRUE = 1 +} MTD_BOOL; + +#define MTD_CONVERT_BOOL_TO_UINT(boolVar, uintVar) \ + {(boolVar) ? (uintVar = 1) : (uintVar = 0); } +#define MTD_CONVERT_UINT_TO_BOOL(uintVar, boolVar) \ + {(uintVar) ? (boolVar = MTD_TRUE) : (boolVar = MTD_FALSE); } +#define MTD_GET_BOOL_AS_BIT(boolVar) ((boolVar) ? 1 : 0) +#define MTD_GET_BIT_AS_BOOL(uintVar) ((uintVar) ? 
MTD_TRUE : MTD_FALSE) + +typedef void (*MTD_VOIDFUNCPTR) (void); /* ptr to function returning void */ +typedef MTD_U32 (*MTD_INTFUNCPTR) (void); /* ptr to function returning int */ + +typedef MTD_U32 MTD_STATUS; + +/* Defines for semaphore support */ +typedef MTD_U32 MTD_SEM; + +typedef enum { + MTD_SEM_EMPTY, + MTD_SEM_FULL +} MTD_SEM_BEGIN_STATE; + +typedef MTD_SEM (*FMTD_SEM_CREATE)(MTD_SEM_BEGIN_STATE state); +typedef MTD_STATUS (*FMTD_SEM_DELETE)(MTD_SEM semId); +typedef MTD_STATUS (*FMTD_SEM_TAKE)(MTD_SEM semId, MTD_U32 timOut); +typedef MTD_STATUS (*FMTD_SEM_GIVE)(MTD_SEM semId); + +/* Defines for mtdLoadDriver() mtdUnloadDriver() and all API functions which need MTD_DEV */ +typedef struct _MTD_DEV MTD_DEV; +typedef MTD_DEV * MTD_DEV_PTR; + +typedef MTD_STATUS (*FMTD_READ_MDIO)( + MTD_DEV *dev, + MTD_U16 port, + MTD_U16 mmd, + MTD_U16 reg, + MTD_U16 *value); +typedef MTD_STATUS (*FMTD_WRITE_MDIO)( + MTD_DEV *dev, + MTD_U16 port, + MTD_U16 mmd, + MTD_U16 reg, + MTD_U16 value); + +/* MTD_DEVICE_ID format: */ +/* Bits 15:13 reserved */ +/* Bit 12: 1-> E20X0 device with max speed of 5G and no fiber interface */ +/* Bit 11: 1-> Macsec Capable (Macsec/PTP module included */ +/* Bit 10: 1-> Copper Capable (T unit interface included) */ +/* Bits 9:4 0x18 -> X32X0 base, 0x1A 0x33X0 base */ +/* Bits 3:0 revision/number of ports indication, see list */ +/* Following defines are for building MTD_DEVICE_ID */ +#define MTD_E20X0_DEVICE (1<<12) /* whether this is an E20X0 device group */ +#define MTD_MACSEC_CAPABLE (1<<11) /* whether the device has a Macsec/PTP module */ +#define MTD_COPPER_CAPABLE (1<<10) /* whether the device has a copper (T unit) module */ +#define MTD_X32X0_BASE (0x18<<4) /* whether the device uses X32X0 firmware base */ +#define MTD_X33X0_BASE (0x1A<<4) /* whether the device uses X33X0 firmware base */ + +/* Following macros are to test MTD_DEVICE_ID for various features */ +#define MTD_IS_E20X0_DEVICE(mTdrevId) ((MTD_BOOL)(mTdrevId & 
MTD_E20X0_DEVICE)) +#define MTD_IS_MACSEC_CAPABLE(mTdrevId) ((MTD_BOOL)(mTdrevId & MTD_MACSEC_CAPABLE)) +#define MTD_IS_COPPER_CAPABLE(mTdrevId) ((MTD_BOOL)(mTdrevId & MTD_COPPER_CAPABLE)) +#define MTD_IS_X32X0_BASE(mTdrevId) ((MTD_BOOL)((mTdrevId & (0x3F<<4)) == MTD_X32X0_BASE)) +#define MTD_IS_X33X0_BASE(mTdrevId) ((MTD_BOOL)((mTdrevId & (0x3F<<4)) == MTD_X33X0_BASE)) + +#define MTD_X33X0BASE_SINGLE_PORTA0 0xA +#define MTD_X33X0BASE_DUAL_PORTA0 0x6 +#define MTD_X33X0BASE_QUAD_PORTA0 0x2 + +/* WARNING: If you add/modify this list, you must also modify mtdIsPhyRevisionValid() */ +typedef enum { + MTD_REV_UNKNOWN = 0, + MTD_REV_3240P_Z2 = (MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X32X0_BASE | 0x1), + MTD_REV_3240P_A0 = (MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X32X0_BASE | 0x2), + MTD_REV_3240P_A1 = (MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X32X0_BASE | 0x3), + MTD_REV_3220P_Z2 = (MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X32X0_BASE | 0x4), + MTD_REV_3220P_A0 = (MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X32X0_BASE | 0x5), + MTD_REV_3240_Z2 = (MTD_COPPER_CAPABLE | MTD_X32X0_BASE | 0x1), + MTD_REV_3240_A0 = (MTD_COPPER_CAPABLE | MTD_X32X0_BASE | 0x2), + MTD_REV_3240_A1 = (MTD_COPPER_CAPABLE | MTD_X32X0_BASE | 0x3), + MTD_REV_3220_Z2 = (MTD_COPPER_CAPABLE | MTD_X32X0_BASE | 0x4), + MTD_REV_3220_A0 = (MTD_COPPER_CAPABLE | MTD_X32X0_BASE | 0x5), + + MTD_REV_3310P_Z1 = (MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | 0x8), /* 88X33X0 Z1 not supported starting with version 1.2 of API */ + MTD_REV_3320P_Z1 = (MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | 0x4), + MTD_REV_3340P_Z1 = (MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | 0x0), + MTD_REV_3310_Z1 = (MTD_COPPER_CAPABLE | MTD_X33X0_BASE | 0x8), + MTD_REV_3320_Z1 = (MTD_COPPER_CAPABLE | MTD_X33X0_BASE | 0x4), + MTD_REV_3340_Z1 = (MTD_COPPER_CAPABLE | MTD_X33X0_BASE | 0x0), + + MTD_REV_3310P_Z2 = (MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | 0x9), /* 
88X33X0 Z2 not supported starting with version 1.2 of API */ + MTD_REV_3320P_Z2 = (MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | 0x5), + MTD_REV_3340P_Z2 = (MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | 0x1), + MTD_REV_3310_Z2 = (MTD_COPPER_CAPABLE | MTD_X33X0_BASE | 0x9), + MTD_REV_3320_Z2 = (MTD_COPPER_CAPABLE | MTD_X33X0_BASE | 0x5), + MTD_REV_3340_Z2 = (MTD_COPPER_CAPABLE | MTD_X33X0_BASE | 0x1), + + MTD_REV_E2010P_Z2 = (MTD_E20X0_DEVICE | MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | 0x9), /* E20X0 Z2 not supported starting with version 1.2 of API */ + MTD_REV_E2020P_Z2 = (MTD_E20X0_DEVICE | MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | 0x5), + MTD_REV_E2040P_Z2 = (MTD_E20X0_DEVICE | MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | 0x1), + MTD_REV_E2010_Z2 = (MTD_E20X0_DEVICE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | 0x9), + MTD_REV_E2020_Z2 = (MTD_E20X0_DEVICE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | 0x5), + MTD_REV_E2040_Z2 = (MTD_E20X0_DEVICE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | 0x1), + + + MTD_REV_3310P_A0 = (MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | MTD_X33X0BASE_SINGLE_PORTA0), + MTD_REV_3320P_A0 = (MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | MTD_X33X0BASE_DUAL_PORTA0), + MTD_REV_3340P_A0 = (MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | MTD_X33X0BASE_QUAD_PORTA0), + MTD_REV_3310_A0 = (MTD_COPPER_CAPABLE | MTD_X33X0_BASE | MTD_X33X0BASE_SINGLE_PORTA0), + MTD_REV_3320_A0 = (MTD_COPPER_CAPABLE | MTD_X33X0_BASE | MTD_X33X0BASE_DUAL_PORTA0), + MTD_REV_3340_A0 = (MTD_COPPER_CAPABLE | MTD_X33X0_BASE | MTD_X33X0BASE_QUAD_PORTA0), + + MTD_REV_E2010P_A0 = (MTD_E20X0_DEVICE | MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | MTD_X33X0BASE_SINGLE_PORTA0), + MTD_REV_E2020P_A0 = (MTD_E20X0_DEVICE | MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | MTD_X33X0BASE_DUAL_PORTA0), + MTD_REV_E2040P_A0 = (MTD_E20X0_DEVICE | MTD_MACSEC_CAPABLE | 
MTD_COPPER_CAPABLE | MTD_X33X0_BASE | MTD_X33X0BASE_QUAD_PORTA0),
+        MTD_REV_E2010_A0 = (MTD_E20X0_DEVICE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | MTD_X33X0BASE_SINGLE_PORTA0),
+        MTD_REV_E2020_A0 = (MTD_E20X0_DEVICE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | MTD_X33X0BASE_DUAL_PORTA0),
+        MTD_REV_E2040_A0 = (MTD_E20X0_DEVICE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | MTD_X33X0BASE_QUAD_PORTA0),
+
+        MTD_REV_2340P_A1 = (MTD_MACSEC_CAPABLE | MTD_X32X0_BASE | 0x3),
+        MTD_REV_2320P_A0 = (MTD_MACSEC_CAPABLE | MTD_X32X0_BASE | 0x5),
+        MTD_REV_2340_A1 = (MTD_X32X0_BASE | 0x3),
+        MTD_REV_2320_A0 = (MTD_X32X0_BASE | 0x5)
+} MTD_DEVICE_ID;
+
+typedef enum {
+        MTD_MSEC_REV_Z0A,
+        MTD_MSEC_REV_Y0A,
+        MTD_MSEC_REV_A0B,
+        MTD_MSEC_REV_FPGA,
+        MTD_MSEC_REV_UNKNOWN = -1
+} MTD_MSEC_REV;
+
+/* compatible for USB test */
+typedef struct _MTD_MSEC_CTRL {
+        MTD_32 dev_num; /* indicates the device number (0 if only one) when multiple devices are present on SVB.*/
+        MTD_32 port_num; /* Indicates which port (0 to 4) is requesting CPU */
+        MTD_U16 prev_addr; /* < Prev write address */
+        MTD_U16 prev_dataL; /* < Prev dataL value */
+        MTD_MSEC_REV msec_rev; /* revision */
+} MTD_MSEC_CTRL;
+
+struct _MTD_DEV {
+        MTD_DEVICE_ID deviceId; /* type of device and capabilities */
+        MTD_BOOL devEnabled; /* whether mtdLoadDriver() called successfully */
+        MTD_U8 numPorts; /* number of ports per device */
+        MTD_U8 thisPort; /* relative port number on this device starting with 0 (not MDIO address) */
+        MTD_SEM multiAddrSem;
+
+        FMTD_READ_MDIO fmtdReadMdio;
+        FMTD_WRITE_MDIO fmtdWriteMdio;
+
+        FMTD_SEM_CREATE semCreate; /* create semaphore */
+        FMTD_SEM_DELETE semDelete; /* delete the semaphore */
+        FMTD_SEM_TAKE semTake; /* try to get a semaphore */
+        FMTD_SEM_GIVE semGive; /* return semaphore */
+
+        MTD_U8 macsecIndirectAccess; /* if MTD_TRUE use internal processor to access Macsec */
+        MTD_MSEC_CTRL msec_ctrl; /* structure used for internal verification */
+
+        void *appData; /* application specific data, anything
the host wants to pass to the low layer */ +}; + +#define MTD_OK 0 /* Operation succeeded */ +#define MTD_FAIL 1 /* Operation failed */ +#define MTD_PENDING 2 /* Pending */ + +/* bit definition */ +#define MTD_BIT_0 0x0001 +#define MTD_BIT_1 0x0002 +#define MTD_BIT_2 0x0004 +#define MTD_BIT_3 0x0008 +#define MTD_BIT_4 0x0010 +#define MTD_BIT_5 0x0020 +#define MTD_BIT_6 0x0040 +#define MTD_BIT_7 0x0080 +#define MTD_BIT_8 0x0100 +#define MTD_BIT_9 0x0200 +#define MTD_BIT_10 0x0400 +#define MTD_BIT_11 0x0800 +#define MTD_BIT_12 0x1000 +#define MTD_BIT_13 0x2000 +#define MTD_BIT_14 0x4000 +#define MTD_BIT_15 0x8000 + +#define MTD_DBG_ERROR(...) +#define MTD_DBG_INFO(...) +#define MTD_DBG_CRITIC_INFO(...) + + +#define MTD_API_MAJOR_VERSION 2 +#define MTD_API_MINOR_VERSION 0 + +/* This macro is handy for calling a function when you want to test the + return value and return MTD_FAIL, if the function returned MTD_FAIL, + otherwise continue */ +#define ATTEMPT(xFuncToTry) do {if (xFuncToTry == MTD_FAIL) { return MTD_FAIL; } } while (0) + +/* These defines are used for some registers which represent the copper + speed as a 2-bit binary number */ +#define MTD_CU_SPEED_10_MBPS 0 /* copper is 10BASE-T */ +#define MTD_CU_SPEED_100_MBPS 1 /* copper is 100BASE-TX */ +#define MTD_CU_SPEED_1000_MBPS 2 /* copper is 1000BASE-T */ +#define MTD_CU_SPEED_10_GBPS 3 /* copper is 10GBASE-T */ + +/* for 88X33X0 family: */ +#define MTD_CU_SPEED_NBT 3 /* copper is NBASE-T */ +#define MTD_CU_SPEED_NBT_10G 0 /* copper is 10GBASE-T */ +#define MTD_CU_SPEED_NBT_5G 2 /* copper is 5GBASE-T */ +#define MTD_CU_SPEED_NBT_2P5G 1 /* copper is 2.5GBASE-T */ + +#define MTD_ADV_NONE 0x0000 /* No speeds to be advertised */ +#define MTD_SPEED_10M_HD 0x0001 /* 10BT half-duplex */ +#define MTD_SPEED_10M_FD 0x0002 /* 10BT full-duplex */ +#define MTD_SPEED_100M_HD 0x0004 /* 100BASE-TX half-duplex */ +#define MTD_SPEED_100M_FD 0x0008 /* 100BASE-TX full-duplex */ +#define MTD_SPEED_1GIG_HD 0x0010 /* 1000BASE-T 
half-duplex */ +#define MTD_SPEED_1GIG_FD 0x0020 /* 1000BASE-T full-duplex */ +#define MTD_SPEED_10GIG_FD 0x0040 /* 10GBASE-T full-duplex */ +#define MTD_SPEED_2P5GIG_FD 0x0800 /* 2.5GBASE-T full-duplex, 88X33X0/88E20X0 family only */ +#define MTD_SPEED_5GIG_FD 0x1000 /* 5GBASE-T full-duplex, 88X33X0/88E20X0 family only */ +#define MTD_SPEED_ALL (MTD_SPEED_10M_HD | \ + MTD_SPEED_10M_FD | \ + MTD_SPEED_100M_HD | \ + MTD_SPEED_100M_FD | \ + MTD_SPEED_1GIG_HD | \ + MTD_SPEED_1GIG_FD | \ + MTD_SPEED_10GIG_FD) +#define MTD_SPEED_ALL_33X0 (MTD_SPEED_10M_HD | \ + MTD_SPEED_10M_FD | \ + MTD_SPEED_100M_HD | \ + MTD_SPEED_100M_FD | \ + MTD_SPEED_1GIG_HD | \ + MTD_SPEED_1GIG_FD | \ + MTD_SPEED_10GIG_FD | \ + MTD_SPEED_2P5GIG_FD |\ + MTD_SPEED_5GIG_FD) + +/* these bits are for forcing the speed and disabling autonegotiation */ +#define MTD_SPEED_10M_HD_AN_DIS 0x0080 /* Speed forced to 10BT half-duplex */ +#define MTD_SPEED_10M_FD_AN_DIS 0x0100 /* Speed forced to 10BT full-duplex */ +#define MTD_SPEED_100M_HD_AN_DIS 0x0200 /* Speed forced to 100BT half-duplex */ +#define MTD_SPEED_100M_FD_AN_DIS 0x0400 /* Speed forced to 100BT full-duplex */ + +/* this value is returned for the speed when the link status is checked and the speed has been */ +/* forced to one speed but the link is up at a different speed. it indicates an error. 
*/ +#define MTD_SPEED_MISMATCH 0x8000 /* Speed is forced to one speed, but status indicates another */ + + +/* for macType */ +#define MTD_MAC_TYPE_RXAUI_SGMII_AN_EN (0x0) /* X32X0/X33x0, but not E20x0 */ +#define MTD_MAC_TYPE_RXAUI_SGMII_AN_DIS (0x1) /* X32x0/X3340/X3320, but not X3310/E20x0 */ +#define MTD_MAC_TYPE_XAUI_RATE_ADAPT (0x1) /* X3310,E2010 only */ +#define MTD_MAC_TYPE_RXAUI_RATE_ADAPT (0x2) +#define MTD_MAC_TYPE_XAUI (0x3) /* X3310,E2010 only */ +#define MTD_MAC_TYPE_XFI_SGMII_AN_EN (0x4) /* XFI at 10G, X33x0/E20x0 also use 5GBASE-R/2500BASE-X */ +#define MTD_MAC_TYPE_XFI_SGMII_AN_DIS (0x5) /* XFI at 10G, X33x0/E20x0 also use 5GBASE-R/2500BASE-X */ +#define MTD_MAC_TYPE_XFI_RATE_ADAPT (0x6) +#define MTD_MAC_TYPE_USXGMII (0x7) /* X33x0 only */ +#define MTD_MAC_LEAVE_UNCHANGED (0x8) /* use this option to not touch these bits */ + +/* for macIfSnoopSel */ +#define MTD_MAC_SNOOP_FROM_NETWORK (0x2) +#define MTD_MAC_SNOOP_FROM_HOST (0x3) +#define MTD_MAC_SNOOP_OFF (0x0) +#define MTD_MAC_SNOOP_LEAVE_UNCHANGED (0x4) /* use this option to not touch these bits */ +/* for macLinkDownSpeed */ +#define MTD_MAC_SPEED_10_MBPS MTD_CU_SPEED_10_MBPS +#define MTD_MAC_SPEED_100_MBPS MTD_CU_SPEED_100_MBPS +#define MTD_MAC_SPEED_1000_MBPS MTD_CU_SPEED_1000_MBPS +#define MTD_MAC_SPEED_10_GBPS MTD_CU_SPEED_10_GBPS +#define MTD_MAC_SPEED_LEAVE_UNCHANGED (0x4) +/* X33X0/E20X0 devices only for macMaxIfSpeed */ +#define MTD_MAX_MAC_SPEED_10G (0) +#define MTD_MAX_MAC_SPEED_5G (2) +#define MTD_MAX_MAC_SPEED_2P5G (3) +#define MTD_MAX_MAC_SPEED_LEAVE_UNCHANGED (4) +#define MTD_MAX_MAC_SPEED_NOT_APPLICABLE (4) /* 32X0 devices can pass this */ + +/* 88X3240/3220 Device Number Definitions */ +#define MTD_T_UNIT_PMA_PMD 1 +#define MTD_T_UNIT_PCS_CU 3 +#define MTD_X_UNIT 3 +#define MTD_H_UNIT 4 +#define MTD_T_UNIT_AN 7 +#define MTD_XFI_DSP 30 +#define MTD_C_UNIT_GENERAL 31 +#define MTD_M_UNIT 31 + +/* 88X3240/3220 Device Number Definitions Host Redundant Mode */ +#define 
MTD_BASER_LANE_0 MTD_H_UNIT
+#define MTD_BASER_LANE_1 MTD_X_UNIT
+
+/* 88X3240/3220 T Unit Registers MMD 1 */
+#define MTD_TUNIT_IEEE_PMA_CTRL1 0x0000 /* do not enclose in parentheses */
+#define MTD_TUNIT_XG_EXT_STATUS 0xC001 /* do not enclose in parentheses */
+#define MTD_TUNIT_PHY_REV_INFO_REG 0xC04E /* do not enclose in parentheses */
+
+/* control/status for serdes initialization */
+#define MTD_SERDES_CTRL_STATUS 0x800F /* do not enclose in parentheses */
+/* 88X3240/3220 C Unit Registers MMD 31 */
+#define MTD_CUNIT_MODE_CONFIG 0xF000 /* do not enclose in parentheses */
+#define MTD_CUNIT_PORT_CTRL 0xF001 /* do not enclose in parentheses */
+
+#define MTD_API_FAIL_SEM_CREATE (0x18<<24) /*semCreate Failed. */
+#define MTD_API_FAIL_SEM_DELETE (0x19<<24) /*semDelete Failed. */
+#define MTD_API_FAIL_READ_REG (0x16<<16) /*Reading from phy reg failed. */
+#define MTD_API_ERR_DEV (0x3c<<16) /*driver structure is NULL. */
+#define MTD_API_ERR_DEV_ALREADY_EXIST (0x3e<<16) /*Device Driver already loaded. */
+
+
+#define MTD_CLEAR_PAUSE 0 /* clears both pause bits */
+#define MTD_SYM_PAUSE 1 /* for symmetric pause only */
+#define MTD_ASYM_PAUSE 2 /* for asymmetric pause only */
+#define MTD_SYM_ASYM_PAUSE 3 /* for both */
+
+
+/*******************************************************************************
+ mtdLoadDriver
+
+ DESCRIPTION:
+ Marvell X32X0 Driver Initialization Routine.
+ This is the first routine that needs to be called by system software.
+ It takes parameters from system software, and returns a pointer (*dev)
+ to a data structure which includes information related to this Marvell Phy
+ device. This pointer (*dev) is then used for all the API functions.
+ The following is the job performed by this routine:
+ 1. store MDIO read/write function into the given MTD_DEV structure
+ 2. run any device specific initialization routine
+ 3. create semaphore if required
+ 4. Initialize the deviceId
+
+
+ INPUTS:
+ readMdio - pointer to host's function to do MDIO read
+ writeMdio - pointer to host's function to do MDIO write
+ macsecIndirectAccess - MTD_TRUE to access MacSec through T-unit processor
+ MTD_FALSE to do direct register access
+ semCreate - pointer to host's function to create a semaphore, NULL
+ if not used
+ semDelete - pointer to host's function to delete a semaphore, NULL
+ if not used
+ semTake - pointer to host's function to take a semaphore, NULL
+ if not used
+ semGive - pointer to host's function to give a semaphore, NULL
+ if not used
+ anyPort - port address of any port for this device
+
+ OUTPUTS:
+ dev - pointer to a structure that holds device information to be used for each API call.
+
+ RETURNS:
+ MTD_OK - on success
+ MTD_FAIL - on error
+
+ COMMENTS:
+ mtdUnloadDriver is also provided to do driver cleanup.
+
+ An MTD_DEV is required for each type of X32X0 device in the system. For
+ example, if there are 16 ports of X3240 and 4 ports of X3220,
+ two MTD_DEV are required, and one call to mtdLoadDriver() must
+ be made with one of the X3240 ports, and one with one of the X3220
+ ports.
+*******************************************************************************/ +MTD_STATUS mtdLoadDriver +( + IN FMTD_READ_MDIO readMdio, + IN FMTD_WRITE_MDIO writeMdio, + IN MTD_BOOL macsecIndirectAccess, + IN FMTD_SEM_CREATE semCreate, + IN FMTD_SEM_DELETE semDelete, + IN FMTD_SEM_TAKE semTake, + IN FMTD_SEM_GIVE semGive, + IN MTD_U16 anyPort, + OUT MTD_DEV * dev +); + +/****************************************************************************** +MTD_STATUS mtdHwXmdioWrite +( + IN MTD_DEV_PTR devPtr, + IN MTD_U16 port, + IN MTD_U16 dev, + IN MTD_U16 reg, + IN MTD_U16 value +); + + Inputs: + devPtr - pointer to MTD_DEV initialized by mtdLoadDriver() call + port - MDIO port address, 0-31 + dev - MMD device address, 0-31 + reg - MMD register address + value - data to write + + Outputs: + None + + Returns: + MTD_OK - wrote successfully + MTD_FAIL - an error occurred + + Description: + Writes a 16-bit word to the MDIO + Address is in format X.Y.Z, where X selects the MDIO port (0-31), Y selects + the MMD/Device (0-31), and Z selects the register. 
+ + Side effects: + None + + Notes/Warnings: + None + +******************************************************************************/ +MTD_STATUS mtdHwXmdioWrite +( + IN MTD_DEV_PTR devPtr, + IN MTD_U16 port, + IN MTD_U16 dev, + IN MTD_U16 reg, + IN MTD_U16 value +); + +/****************************************************************************** + MTD_STATUS mtdHwXmdioRead + ( + IN MTD_DEV_PTR devPtr, + IN MTD_U16 port, + IN MTD_U16 dev, + IN MTD_U16 reg, + OUT MTD_U16 *data + ); + + Inputs: + devPtr - pointer to MTD_DEV initialized by mtdLoadDriver() call + port - MDIO port address, 0-31 + dev - MMD device address, 0-31 + reg - MMD register address + + Outputs: + data - Returns 16 bit word from the MDIO + + Returns: + MTD_OK - read successful + MTD_FAIL - read was unsuccessful + + Description: + Reads a 16-bit word from the MDIO + Address is in format X.Y.Z, where X selects the MDIO port (0-31), Y selects the + MMD/Device (0-31), and Z selects the register. + + Side effects: + None + + Notes/Warnings: + None + +******************************************************************************/ +MTD_STATUS mtdHwXmdioRead +( + IN MTD_DEV_PTR devPtr, + IN MTD_U16 port, + IN MTD_U16 dev, + IN MTD_U16 reg, + OUT MTD_U16 *data +); + + +/******************************************************************************* + MTD_STATUS mtdHwGetPhyRegField + ( + IN MTD_DEV_PTR devPtr, + IN MTD_U16 port, + IN MTD_U16 dev, + IN MTD_U16 regAddr, + IN MTD_U8 fieldOffset, + IN MTD_U8 fieldLength, + OUT MTD_U16 *data + ); + + Inputs: + devPtr - pointer to MTD_DEV initialized by mtdLoadDriver() call + port - The port number, 0-31 + dev - The MMD device, 0-31 + regAddr - The register's address + fieldOffset - The field start bit index. (0 - 15) + fieldLength - Number of bits to read + + Outputs: + data - The read register field + + Returns: + MTD_OK on success, or + MTD_FAIL - on error + + Description: + This function reads a specified field from a port's phy register. 
+ It first reads the register, then returns the specified bit
+ field from what was read.
+
+ Side effects:
+ None
+
+ Notes/Warnings:
+ The sum of the fieldOffset and fieldLength parameters must be less
+ than or equal to 16.
+
+ Reading a register with latched bits may clear the latched bits.
+ Use with caution for registers with latched bits.
+
+ To operate on several bits within a register which has latched bits
+ before reading the register again, first read the register with
+ mtdHwXmdioRead() to get the register value, then operate on the
+ register data repeatedly using mtdHwGetRegFieldFromWord() to
+ take apart the bit fields without re-reading the register.
+
+ This approach should also be used to reduce IO to the PHY when reading
+ multiple bit fields (do a single read, then grab different fields
+ from the register by using mtdHwGetRegFieldFromWord() repeatedly).
+
+*******************************************************************************/
+MTD_STATUS mtdHwGetPhyRegField
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U16 dev,
+ IN MTD_U16 regAddr,
+ IN MTD_U8 fieldOffset,
+ IN MTD_U8 fieldLength,
+ OUT MTD_U16 *data
+);
+
+/*******************************************************************************
+ MTD_STATUS mtdHwSetPhyRegField
+ (
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U16 dev,
+ IN MTD_U16 regAddr,
+ IN MTD_U8 fieldOffset,
+ IN MTD_U8 fieldLength,
+ IN MTD_U16 data
+ );
+
+ Inputs:
+ devPtr - pointer to MTD_DEV initialized by mtdLoadDriver() call
+ port - The port number, 0-31
+ dev - The MMD device, 0-31
+ regAddr - The register's address
+ fieldOffset - The field start bit index. (0 - 15)
+ fieldLength - Number of bits to write
+ data - Data to be written.
+
+ Outputs:
+ None.
+
+ Returns:
+ MTD_OK on success, or
+ MTD_FAIL on error
+
+ Description:
+ This function writes to a specified field in a port's phy register.
+
+ Side effects:
+ None
+
+ Notes/Warnings:
+ The sum of the fieldOffset and fieldLength parameters must be less
+ than or equal to 16.
+
+*******************************************************************************/
+MTD_STATUS mtdHwSetPhyRegField
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U16 dev,
+ IN MTD_U16 regAddr,
+ IN MTD_U8 fieldOffset,
+ IN MTD_U8 fieldLength,
+ IN MTD_U16 data
+);
+
+/*******************************************************************************
+ MTD_STATUS mtdHwGetRegFieldFromWord
+ (
+ IN MTD_U16 regData,
+ IN MTD_U8 fieldOffset,
+ IN MTD_U8 fieldLength,
+ OUT MTD_U16 *data
+ );
+
+ Inputs:
+ regData - The data previously read from the register
+ fieldOffset - The field start bit index. (0 - 15)
+ fieldLength - Number of bits to read
+
+ Outputs:
+ data - The data from the associated bit field
+
+ Returns:
+ MTD_OK always
+
+ Description:
+ This function grabs a value from a bitfield within a word. It could
+ be used to get the value of a bitfield within a word which was previously
+ read from the PHY.
+
+ Side effects:
+ None
+
+ Notes/Warnings:
+ The sum of the fieldOffset and fieldLength parameters must be less
+ than or equal to 16.
+
+ This function acts on data passed in. It does no hardware access.
+
+ This function is useful if you want to do 1 register access and then
+ get different bit fields without doing another register access, either
+ because there are latched bits in the register (to avoid another read),
+ or to keep hardware IO down to improve performance/throughput.
+
+ Example:
+
+ MTD_U16 aword, nibble1, nibble2;
+
+ mtdHwXmdioRead(devPtr,0,MTD_TUNIT_IEEE_PCS_CTRL1,&aword); // Read 3.0 from port 0
+ mtdHwGetRegFieldFromWord(aword,0,4,&nibble1); // grab first nibble
+ mtdHwGetRegFieldFromWord(aword,4,4,&nibble2); // grab second nibble
+
+*******************************************************************************/
+MTD_STATUS mtdHwGetRegFieldFromWord
+(
+ IN MTD_U16 regData,
+ IN MTD_U8 fieldOffset,
+ IN MTD_U8 fieldLength,
+ OUT MTD_U16 *data
+);
+
+/*******************************************************************************
+ MTD_STATUS mtdHwSetRegFieldToWord
+ (
+ IN MTD_U16 regData,
+ IN MTD_U16 bitFieldData,
+ IN MTD_U8 fieldOffset,
+ IN MTD_U8 fieldLength,
+ OUT MTD_U16 *data
+ );
+
+ Inputs:
+ regData - original word to modify
+ bitFieldData - The data to set the register field to
+ (must be <= largest value for that bit field,
+ no range checking is done by this function)
+ fieldOffset - The field start bit index. (0 - 15)
+ fieldLength - Number of bits to write to regData
+
+ Outputs:
+ data - The new/modified regData with the bitfield changed
+
+ Returns:
+ MTD_OK always
+
+ Description:
+ This function writes a value to a bitfield within a word.
+
+ Side effects:
+ None
+
+ Notes/Warnings:
+ The sum of the fieldOffset and fieldLength parameters must be less
+ than or equal to 16.
+
+ This function acts on data passed in. It does no hardware access.
+
+ This function is useful to reduce IO if several bit fields of a register
+ that has been read are to be changed before writing it back.
+
+ Example:
+
+ MTD_U16 aword;
+
+ mtdHwXmdioRead(devPtr,0,MTD_TUNIT_IEEE_PCS_CTRL1,&aword); // Read 3.0 from port 0
+ mtdHwSetRegFieldToWord(aword,2,0,4,&aword); // Change first nibble to 2
+ mtdHwSetRegFieldToWord(aword,3,4,4,&aword); // Change second nibble to 3
+
+*******************************************************************************/
+MTD_STATUS mtdHwSetRegFieldToWord
+(
+ IN MTD_U16 regData,
+ IN MTD_U16 bitFieldData,
+ IN MTD_U8 fieldOffset,
+ IN MTD_U8 fieldLength,
+ OUT MTD_U16 *data
+);
+
+
+/******************************************************************************
+MTD_STATUS mtdWait
+(
+ IN MTD_UINT x
+);
+
+ Inputs:
+ x - number of milliseconds to wait
+
+ Outputs:
+ None
+
+ Returns:
+ MTD_OK if wait was successful, MTD_FAIL otherwise
+
+ Description:
+ Waits x milliseconds
+
+ Side effects:
+ None
+
+ Notes/Warnings:
+ None
+
+******************************************************************************/
+MTD_STATUS mtdWait
+(
+ IN MTD_UINT x
+);
+
+/******************************************************************************
+MTD_STATUS mtdSoftwareReset
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U16 timeoutMs
+);
+
+ Inputs:
+ devPtr - pointer to MTD_DEV initialized by mtdLoadDriver() call
+ port - MDIO port address, 0-31
+ timeoutMs - 0 will not wait for reset to complete, otherwise
+ waits 'timeoutMs' milliseconds for reset to complete
+
+ Outputs:
+ None
+
+ Returns:
+ MTD_OK, or MTD_FAIL on an IO error or a timeout
+
+ Description:
+ Issues a software reset (1.0.15 <= 1) command. Resets firmware and
+ hardware state machines, returns non-retain bits to their hardware
+ reset values, while retain bits keep their values through the reset.
+
+ If timeoutMs is 0, returns immediately. If timeoutMs is non-zero,
+ waits up to 'timeoutMs' milliseconds looking for the reset to complete
+ before returning. Returns MTD_FAIL if it times out.
+
+ Side effects:
+ All "retain" bits keep their values through this reset. Non-"retain"-type
+ bits are returned to their hardware reset values following this reset.
+ See the Datasheet for a list of retain bits.
+
+ Notes/Warnings:
+ Use mtdIsPhyReadyAfterReset() to see if the software reset is complete
+ before issuing any other MDIO commands following this reset, or pass
+ in a non-zero timeoutMs to have this function do it for you.
+
+ This is a T unit software reset only. It may only be issued if the T
+ unit is ready (1.0.15 is 0) and the T unit is not in low power mode.
+
+******************************************************************************/
+MTD_STATUS mtdSoftwareReset
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U16 timeoutMs
+);
+
+MTD_STATUS mtdHardwareReset
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U16 timeoutMs
+);
+
+/******************************************************************************
+ MTD_STATUS mtdSetMacInterfaceControl
+ (
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U16 macType,
+ IN MTD_BOOL macIfPowerDown,
+ IN MTD_U16 macIfSnoopSel,
+ IN MTD_U16 macIfActiveLaneSelect,
+ IN MTD_U16 macLinkDownSpeed,
+ IN MTD_U16 macMaxIfSpeed, /* 33X0/E20X0 devices only */
+ IN MTD_BOOL doSwReset,
+ IN MTD_BOOL rerunSerdesInitialization
+ );
+
+ Inputs:
+ devPtr - pointer to MTD_DEV initialized by mtdLoadDriver() call
+ port - port number, 0-31
+ macType - the type of MAC interface being used (the hardware interface).
One of the following: + MTD_MAC_TYPE_RXAUI_SGMII_AN_EN - selects RXAUI with SGMII AN enabled + MTD_MAC_TYPE_RXAUI_SGMII_AN_DIS - selects RXAUI with SGMII AN disabled (not valid on X3310) + MTD_MAC_TYPE_XAUI_RATE_ADAPT - selects XAUI with rate matching (only valid on X3310) + MTD_MAC_TYPE_RXAUI_RATE_ADAPT - selects RXAUI with rate matching + MTD_MAC_TYPE_XAUI - selects XAUI (only valid on X3310) + MTD_MAC_TYPE_XFI_SGMII_AN_EN - selects XFI with SGMII AN enabled + MTD_MAC_TYPE_XFI_SGMII_AN_DIS - selects XFI with SGMII AN disabled + MTD_MAC_TYPE_XFI_RATE_ADAPT - selects XFI with rate matching + MTD_MAC_TYPE_USXGMII - selects USXGMII + MTD_MAC_LEAVE_UNCHANGED - option to leave this parameter unchanged/as it is + macIfPowerDown - MTD_TRUE if the host interface is always to be powered up + MTD_FALSE if the host interface can be powered down under + certain circumstances (see datasheet) + macIfSnoopSel - If snooping is requested on the other lane, selects the source + MTD_MAC_SNOOP_FROM_NETWORK - source of snooped data is to come from the network + MTD_MAC_SNOOP_FROM_HOST - source of snooped data is to come from the host + MTD_MAC_SNOOP_OFF - snooping is to be turned off + MTD_MAC_SNOOP_LEAVE_UNCHANGED - option to leave this parameter unchanged/as it is + macIfActiveLaneSelect - For redundant host mode, this selects the active lane. 0 or 1 + only. 0 selects 0 as the active lane and 1 as the standby. 1 selects the other way. + macLinkDownSpeed - The speed the mac interface should run when the media side is + link down. One of the following: + MTD_MAC_SPEED_10_MBPS + MTD_MAC_SPEED_100_MBPS + MTD_MAC_SPEED_1000_MBPS + MTD_MAC_SPEED_10_GBPS + MTD_MAC_SPEED_LEAVE_UNCHANGED + macMaxIfSpeed - For X33X0/E20X0 devices only. 
Can be used to limit the Mac interface speed
+ MTD_MAX_MAC_SPEED_10G
+ MTD_MAX_MAC_SPEED_5G
+ MTD_MAX_MAC_SPEED_2P5G
+ MTD_MAX_MAC_SPEED_LEAVE_UNCHANGED
+ MTD_MAX_MAC_SPEED_NOT_APPLICABLE (for 32X0 devices pass this)
+ doSwReset - MTD_TRUE if a software reset (31.F001.15) should be done after these changes
+ have been made, or MTD_FALSE otherwise. See note below.
+ rerunSerdesInitialization - MTD_TRUE if any parameter that is likely to change the speed
+ of the serdes interface was changed, like macLinkDownSpeed or macType; this will attempt
+ to reset the H unit serdes (this needs to be done AFTER the soft reset, so if doSwReset
+ is passed as MTD_FALSE, the host must later call
+ mtdRerunSerdesAutoInitializationUseAutoMode() to re-init the serdes).
+
+
+ Outputs:
+ None
+
+ Returns:
+ MTD_OK, or MTD_FAIL if a bad parameter was passed or an IO error occurs.
+
+ Description:
+ Changes the above parameters as indicated in 31.F000 and 31.F001 and
+ optionally does a software reset afterwards for those bits which require a
+ software reset to take effect.
+
+ Side effects:
+ None
+
+ Notes/Warnings:
+ These bits are actually in the C unit, but pertain to the host interface
+ control, so the API call was placed here.
+
+ Changes to the MAC type (31.F001.2:0) do not take effect until a software
+ reset is performed on the port.
+
+ Changes to macLinkDownSpeed (31.F001.7:6) require 2 software resets to
+ take effect. This function will do 2 resets if doSwReset is MTD_TRUE
+ and macLinkDownSpeed is being changed.
+
+ IMPORTANT: the readback reads back the last written value following
+ a software reset. Writes followed by reads without an intervening
+ software reset will read back the old bit value for all those bits
+ requiring a software reset.
+
+ Because of this, read-modify-writes to different bitfields must have an
+ intervening software reset to pick up the latest value before doing
+ another read-modify-write to the register, otherwise the bitfield
+ may lose the value.
+
+ Suggest always setting doSwReset to MTD_TRUE to avoid problems of
+ possibly losing changes.
+
+******************************************************************************/
+MTD_STATUS mtdSetMacInterfaceControl
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U16 macType,
+ IN MTD_BOOL macIfPowerDown,
+ IN MTD_U16 macIfSnoopSel,
+ IN MTD_U16 macIfActiveLaneSelect,
+ IN MTD_U16 macLinkDownSpeed,
+ IN MTD_U16 macMaxIfSpeed, /* 33X0/E20X0 devices only */
+ IN MTD_BOOL doSwReset,
+ IN MTD_BOOL rerunSerdesInitialization
+);
+
+/******************************************************************************
+ MTD_STATUS mtdEnableSpeeds
+ (
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U16 speed_bits,
+ IN MTD_BOOL anRestart
+ );
+
+ Inputs:
+ devPtr - pointer to MTD_DEV initialized by mtdLoadDriver() call
+ port - MDIO port address, 0-31
+ speed_bits - speeds to be advertised during auto-negotiation. One or more
+ of the following (bits logically OR together):
+ MTD_ADV_NONE (no bits set)
+ MTD_SPEED_10M_HD
+ MTD_SPEED_10M_FD
+ MTD_SPEED_100M_HD
+ MTD_SPEED_100M_FD
+ MTD_SPEED_1GIG_HD
+ MTD_SPEED_1GIG_FD
+ MTD_SPEED_10GIG_FD
+ MTD_SPEED_2P5GIG_FD (88X33X0/88E20X0 family only)
+ MTD_SPEED_5GIG_FD (88X33X0/88E20X0 family only)
+ MTD_SPEED_ALL
+ MTD_SPEED_ALL_33X0 (88X33X0/88E20X0 family only)
+
+ anRestart - this takes the value of MTD_TRUE or MTD_FALSE and indicates
+ if auto-negotiation should be restarted following the speed
+ enable change. If this is MTD_FALSE, the change will not
+ take effect until AN is restarted in some other way (link
+ drop, toggle low power, toggle AN enable, toggle soft reset).
+
+ If this is MTD_TRUE and AN has been disabled, it will be
+ enabled before being restarted.
+
+ Outputs:
+ None
+
+ Returns:
+ MTD_OK if the action was successfully taken, MTD_FAIL if not. Also returns
+ MTD_FAIL on an attempt to force the speed, or to advertise a speed not
+ supported on this PHY.
+
+ Description:
+ This function allows the user to select the speeds to be advertised to the
+ link partner during auto-negotiation.
+
+ First, this function enables auto-negotiation and XNPs by calling
+ mtdUndoForcedSpeed().
+
+ The function takes in a 16 bit value and sets the appropriate bits in MMD
+ 7 to have those speeds advertised.
+
+ The function also checks if the input parameter is MTD_ADV_NONE, in which case
+ all speeds are disabled, effectively disabling the phy from training
+ (but not disabling auto-negotiation).
+
+ If anRestart is MTD_TRUE, an auto-negotiation restart is issued, making the change
+ immediate. If anRestart is MTD_FALSE, the change will not take effect until the
+ next time auto-negotiation restarts.
+
+ Side effects:
+ Setting the speed in 1.0 to 10GBASE-T has the effect of enabling XNPs in 7.0 and
+ enabling auto-negotiation in 7.0.
+
+ Notes/Warnings:
+
+ Example:
+ To train the highest speed matching the far end among
+ either 1000BASE-T Full-duplex or 10GBASE-T:
+ mtdEnableSpeeds(devPtr,port,MTD_SPEED_1GIG_FD | MTD_SPEED_10GIG_FD, MTD_TRUE);
+
+ To allow only 10GBASE-T to train:
+ mtdEnableSpeeds(devPtr,port,MTD_SPEED_10GIG_FD, MTD_TRUE);
+
+ To disable all speeds (but AN will still be running, just advertising no
+ speeds):
+ mtdEnableSpeeds(devPtr,port,MTD_ADV_NONE, MTD_TRUE);
+
+ This function is not to be used to disable auto-negotiation and force the speed
+ to 10BASE-T or 100BASE-TX. Use mtdForceSpeed() for that.
+
+ 88X33X0 Z1/Z2 and E20X0 Z2 are not supported starting with API version 1.2.
+ Version 1.2 and later require the A0 revision of these devices.
+
+******************************************************************************/
+MTD_STATUS mtdEnableSpeeds
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U16 speed_bits,
+ IN MTD_BOOL anRestart
+);
+
+MTD_STATUS mtdGetAutonegSpeedDuplexResolution
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ OUT MTD_U16 *speedResolution
+);
+
+MTD_STATUS mtdAutonegIsSpeedDuplexResolutionDone
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ OUT MTD_BOOL *anSpeedResolutionDone
+);
+
+/****************************************************************************/
+/*******************************************************************
+ Firmware Version
+ *******************************************************************/
+/****************************************************************************/
+
+/******************************************************************************
+MTD_STATUS mtdGetFirmwareVersion
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ OUT MTD_U8 *major,
+ OUT MTD_U8 *minor,
+ OUT MTD_U8 *inc,
+ OUT MTD_U8 *test
+);
+
+ Inputs:
+ devPtr - pointer to MTD_DEV initialized by mtdLoadDriver() call
+ port - MDIO port address, 0-31
+
+ Outputs:
+ major - major version, X.Y.Z.W, the X
+ minor - minor version, X.Y.Z.W, the Y
+ inc - incremental version, X.Y.Z.W, the Z
+ test - test version, X.Y.Z.W, the W; should be 0 for released code,
+ non-zero indicates non-released code
+
+ Returns:
+ MTD_FAIL if the version can't be queried or the firmware is in download mode
+ (meaning all version numbers are 0), MTD_OK otherwise
+
+ Description:
+ This function reads the firmware version number and stores it in the
+ pointers passed in by the user.
+
+ Side effects:
+ None
+
+ Notes/Warnings:
+ This function returns all 0's if the phy is in download mode. The phy
+ application code must have started and be ready before issuing this
+ command.
+
+******************************************************************************/
+MTD_STATUS mtdGetFirmwareVersion
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ OUT MTD_U8 *major,
+ OUT MTD_U8 *minor,
+ OUT MTD_U8 *inc,
+ OUT MTD_U8 *test
+);
+
+/******************************************************************************
+MTD_STATUS mtdSetPauseAdvertisement
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U32 pauseType,
+ IN MTD_BOOL anRestart
+);
+
+ Inputs:
+ devPtr - pointer to MTD_DEV initialized by mtdLoadDriver() call
+ port - MDIO port address, 0-31
+ pauseType - one of the following:
+ MTD_SYM_PAUSE,
+ MTD_ASYM_PAUSE,
+ MTD_SYM_ASYM_PAUSE or
+ MTD_CLEAR_PAUSE.
+ anRestart - this takes the value of MTD_TRUE or MTD_FALSE and indicates
+ if auto-negotiation should be restarted following the speed
+ enable change. If this is MTD_FALSE, the change will not
+ take effect until AN is restarted in some other way (link
+ drop, toggle low power, toggle AN enable, toggle soft reset).
+
+ If this is MTD_TRUE and AN has been disabled, it will be
+ enabled before being restarted.
+
+ Outputs:
+ None
+
+ Returns:
+ MTD_OK or MTD_FAIL, if the action was successful or failed
+
+ Description:
+ This function sets the asymmetric and symmetric pause bits in the technology
+ ability field in the AN Advertisement register and optionally restarts
+ auto-negotiation to use the new values. This selects what type of pause
+ is to be advertised to the far end MAC during auto-negotiation. If
+ auto-negotiation is restarted, it is enabled first.
+
+ Sets the entire 2-bit field to the value passed in pauseType.
+
+ To clear both bits, pass in MTD_CLEAR_PAUSE.
+
+ Side effects:
+ None
+
+ Notes/Warnings:
+ This function will not take effect unless auto-negotiation is restarted.
+
+******************************************************************************/
+MTD_STATUS mtdSetPauseAdvertisement
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U32 pauseType,
+ IN MTD_BOOL anRestart
+);
+
+
+/******************************************************************************
+MTD_STATUS mtdGetLPAdvertisedPause
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ OUT MTD_U8 *pauseBits
+);
+
+ Inputs:
+ devPtr - pointer to MTD_DEV initialized by mtdLoadDriver() call
+ port - MDIO port address, 0-31
+
+ Outputs:
+ pauseBits - setting of the link partner's pause bits based on the bit
+ definitions above in mtdSetPauseAdvertisement()
+
+ Returns:
+ MTD_OK or MTD_FAIL, based on whether the query succeeded or failed. Returns
+ MTD_FAIL and MTD_CLEAR_PAUSE if AN is not complete.
+
+ Description:
+ This function reads 7.19 (LP Base page ability) and returns the advertised
+ pause setting that was received from the link partner.
+
+ Side effects:
+ None
+
+ Notes/Warnings:
+ The user must make sure auto-negotiation has completed by calling
+ mtdAutonegIsCompleted() prior to calling this function.
+
+******************************************************************************/
+MTD_STATUS mtdGetLPAdvertisedPause
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ OUT MTD_U8 *pauseBits
+);
+
+
+
+/******************************************************************************
+MTD_STATUS mtdGetPhyRevision
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ OUT MTD_DEVICE_ID *phyRev,
+ OUT MTD_U8 *numPorts,
+ OUT MTD_U8 *thisPort
+);
+
+ Inputs:
+ devPtr - pointer to MTD_DEV initialized by mtdLoadDriver() call
+ port - MDIO port address, 0-31
+
+ Outputs:
+ phyRev - revision of this chip, see MTD_DEVICE_ID definition for
+ a list of chip revisions with different options
+ numPorts - number of ports on this chip (see note below)
+ thisPort - this port number 0-1, or 0-4
+
+ Returns:
+ MTD_OK if query was successful, MTD_FAIL if not.
+
+ Will return MTD_FAIL on an unsupported PHY (but will attempt to
+ return the correct version). See below for a list of unsupported PHYs.
+
+ Description:
+ Determines the PHY revision and returns the value in phyRev.
+ See the definition of MTD_DEVICE_ID for a list of available
+ devices and capabilities.
+
+ Side effects:
+ None.
+
+ Notes/Warnings:
+ The phyRev can be used to determine the PHY revision,
+ the number of ports, which port this is from the PHY's perspective
+ (0-based indexing 0...3 or 0..2), and what capabilities
+ the PHY has.
+
+ If phyRev is MTD_REV_UNKNOWN, numPorts and thisPort will be returned
+ as 0 and the function will return MTD_FAIL.
+
+ If the T-unit is in download mode, thisPort will be returned as 0.
+
+ 88X33X0 Z1/Z2 is not supported starting with version 1.2 of the API.
+ E20X0 Z2 is not supported starting with version 1.2 of the API.
+
+******************************************************************************/
+MTD_STATUS mtdGetPhyRevision
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ OUT MTD_DEVICE_ID *phyRev,
+ OUT MTD_U8 *numPorts,
+ OUT MTD_U8 *thisPort
+);
+
+
+
+/*****************************************************************************
+MTD_STATUS mtdGetForcedSpeed
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ OUT MTD_BOOL *speedIsForced,
+ OUT MTD_U16 *forcedSpeed
+);
+
+ Inputs:
+ devPtr - pointer to MTD_DEV initialized by mtdLoadDriver() call
+ port - MDIO port address, 0-31
+
+ Outputs:
+ speedIsForced - MTD_TRUE if AN is disabled (1.0.12 == 0) AND
+ the speed in 1.0.13/6 is set to 10BT or 100BT (speeds which do
+ not require AN to train).
+ forcedSpeed - one of the following if speedIsForced is MTD_TRUE
+ MTD_SPEED_10M_HD_AN_DIS - speed forced to 10BT half-duplex
+ MTD_SPEED_10M_FD_AN_DIS - speed forced to 10BT full-duplex
+ MTD_SPEED_100M_HD_AN_DIS - speed forced to 100BT half-duplex
+ MTD_SPEED_100M_FD_AN_DIS - speed forced to 100BT full-duplex
+
+ Returns:
+ MTD_OK if the query was successful, or MTD_FAIL if not
+
+ Description:
+ Checks if AN is disabled (7.0.12=0) and if the speed select in
+ registers 1.0.13 and 1.0.6 is set to either 10BT or 100BT speeds. If
+ all of this is true, returns MTD_TRUE in speedIsForced along with
+ the speed/duplex setting in forcedSpeed. If any of this is
+ false (AN is enabled, or the speed is set to 1000BT or 10GBT), then
+ speedIsForced is returned MTD_FALSE and the forcedSpeed value
+ is invalid.
+
+ Notes/Warnings:
+ None.
+
+******************************************************************************/
+MTD_STATUS mtdGetForcedSpeed
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ OUT MTD_BOOL *speedIsForced,
+ OUT MTD_U16 *forcedSpeed
+);
+
+
+/*****************************************************************************
+MTD_STATUS mtdUndoForcedSpeed
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_BOOL anRestart
+);
+
+ Inputs:
+ devPtr - pointer to MTD_DEV initialized by mtdLoadDriver() call
+ port - MDIO port address, 0-31
+ anRestart - this takes the value of MTD_TRUE or MTD_FALSE and indicates
+ if auto-negotiation should be restarted following the speed
+ enable change. If this is MTD_FALSE, the change will not
+ take effect until AN is restarted in some other way (link
+ drop, toggle low power, toggle AN enable, toggle soft reset).
+
+ If this is MTD_TRUE and AN has been disabled, it will be
+ enabled before being restarted.
+
+ Outputs:
+ None
+
+ Returns:
+ MTD_OK if the change was successful, or MTD_FAIL if not
+
+ Description:
+ Sets the speed bits in 1.0 back to the power-on default of 11b
+ (10GBASE-T). Enables auto-negotiation.
+
+ Does a software reset of the T unit and waits until it is complete before
+ enabling AN and returning.
+
+ Notes/Warnings:
+ None.
+
+******************************************************************************/
+MTD_STATUS mtdUndoForcedSpeed
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_BOOL anRestart
+);
+
+
+/******************************************************************************
+ MTD_STATUS mtdAutonegEnable
+ (
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port
+ );
+
+ Inputs:
+ devPtr - pointer to MTD_DEV initialized by mtdLoadDriver() call
+ port - MDIO port address, 0-31
+
+ Outputs:
+ None
+
+ Returns:
+ MTD_OK or MTD_FAIL, if action was successful or not
+
+ Description:
+ Re-enables auto-negotiation.
+
+ Side effects:
+ None
+
+ Notes/Warnings:
+ Restarting auto-negotiation will have no effect if AN is disabled.
+
+******************************************************************************/
+MTD_STATUS mtdAutonegEnable
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port
+);
+
+
+
+/******************************************************************************
+ MTD_STATUS mtdAutonegRestart
+ (
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port
+ );
+
+ Inputs:
+ devPtr - pointer to MTD_DEV initialized by mtdLoadDriver() call
+ port - MDIO port address, 0-31
+
+ Outputs:
+ None
+
+ Returns:
+ MTD_OK or MTD_FAIL, depending on if action was successful
+
+ Description:
+ Restarts auto-negotiation. The bit is self-clearing. If the link is up,
+ the link will drop and auto-negotiation will start again.
+
+ Side effects:
+ None.
+
+ Notes/Warnings:
+ Restarting auto-negotiation will have no effect if auto-negotiation is
+ disabled.
+
+ This function is important as it is necessary to restart auto-negotiation
+ after changing many auto-negotiation settings before the changes will take
+ effect.
+ +******************************************************************************/ +MTD_STATUS mtdAutonegRestart +( + IN MTD_DEV_PTR devPtr, + IN MTD_U16 port +); + + + +/****************************************************************************** +MTD_STATUS mtdIsPhyRevisionValid +( + IN MTD_DEVICE_ID phyRev +); + + + Inputs: + phyRev - a revision id to be checked against MTD_DEVICE_ID type + + Outputs: + None + + Returns: + MTD_OK if phyRev is a valid revision, MTD_FAIL otherwise + + Description: + Takes phyRev and returns MTD_OK if it is one of the MTD_DEVICE_ID + type, otherwise returns MTD_FAIL. + + Side effects: + None. + + Notes/Warnings: + None + +******************************************************************************/ +MTD_STATUS mtdIsPhyRevisionValid +( + IN MTD_DEVICE_ID phyRev +); + +#if C_LINKAGE +#if defined __cplusplus +} +#endif +#endif + +#endif /* _TXGBE_MTD_H_ */ diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe_param.c b/drivers/net/ethernet/netswift/txgbe/txgbe_param.c new file mode 100644 index 000000000000..214993fb1a9b --- /dev/null +++ b/drivers/net/ethernet/netswift/txgbe/txgbe_param.c @@ -0,0 +1,1191 @@ +/* + * WangXun 10 Gigabit PCI Express Linux driver + * Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * The full GNU General Public License is included in this distribution in + * the file called "COPYING". + * + * based on ixgbe_param.c, Copyright(c) 1999 - 2017 Intel Corporation. 
+ * Contact Information: + * Linux NICS linux.nics@intel.com + * e1000-devel Mailing List e1000-devel@lists.sourceforge.net + * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497 + */ + + +#include <linux/types.h> +#include <linux/module.h> + +#include "txgbe.h" + +/* This is the only thing that needs to be changed to adjust the + * maximum number of ports that the driver can manage. + */ +#define TXGBE_MAX_NIC 32 +#define OPTION_UNSET -1 +#define OPTION_DISABLED 0 +#define OPTION_ENABLED 1 + +#define STRINGIFY(foo) #foo /* magic for getting defines into strings */ +#define XSTRINGIFY(bar) STRINGIFY(bar) + +#define TXGBE_PARAM_INIT { [0 ... TXGBE_MAX_NIC] = OPTION_UNSET } + +#define TXGBE_PARAM(X, desc) \ + static int X[TXGBE_MAX_NIC+1] = TXGBE_PARAM_INIT; \ + static unsigned int num_##X; \ + module_param_array(X, int, &num_##X, 0); \ + MODULE_PARM_DESC(X, desc); + +/* ffe_main (KR/KX4/KX/SFI) + * + * Valid Range: 0-60 + * + * Default Value: 27 + */ +TXGBE_PARAM(FFE_MAIN, + "TX_EQ MAIN (0 - 40)"); +#define TXGBE_DEFAULT_FFE_MAIN 27 + +/* ffe_pre + * + * Valid Range: 0-60 + * + * Default Value: 8 + */ + +TXGBE_PARAM(FFE_PRE, + "TX_EQ PRE (0 - 40)"); +#define TXGBE_DEFAULT_FFE_PRE 8 + +/* ffe_post (VF Alloc Mode) + * + * Valid Range: 0-60 + * + * Default Value: 44 + */ + +TXGBE_PARAM(FFE_POST, + "TX_EQ POST (0 - 40)"); +#define TXGBE_DEFAULT_FFE_POST 44 + +/* ffe_set + * + * Valid Range: 0-4 + * + * Default Value: 0 + */ + +TXGBE_PARAM(FFE_SET, + "TX_EQ SET must choose to take effect (0 = NULL, 1 = sfi, 2 = kr, 3 = kx4, 4 = kx)"); +#define TXGBE_DEFAULT_FFE_SET 0 + +/* backplane_mode + * + * Valid Range: 0-4 + * - 0 - NULL + * - 1 - sfi + * - 2 - kr + * - 3 - kx4 + * - 4 - kx + * + * Default Value: 0 + */ + +TXGBE_PARAM(backplane_mode, + "Backplane Mode Support(0 = NULL, 1 = sfi, 2 = kr, 3 = kx4, 4 = kx)"); + +#define TXGBE_BP_NULL 0 +#define TXGBE_BP_SFI 1 +#define TXGBE_BP_KR 2 +#define TXGBE_BP_KX4 3 +#define TXGBE_BP_KX 4 +#define 
TXGBE_DEFAULT_BP_MODE TXGBE_BP_NULL + +/* backplane_auto + * + * Valid Range: 0-1 + * - 0 - NO AUTO + * - 1 - AUTO + * + * Default Value: -1 (unset) + */ + +TXGBE_PARAM(backplane_auto, + "Backplane AUTO mode (0 = NO AUTO, 1 = AUTO)"); + +#define TXGBE_BP_NAUTO 0 +#define TXGBE_BP_AUTO 1 +#define TXGBE_DEFAULT_BP_AUTO -1 + +/* vf_alloc_mode (VF Alloc Mode) + * + * Valid Range: 0-2 + * - 0 - 2 * 64 + * - 1 - 4 * 32 + * - 2 - 8 * 16 + * + * Default Value: 0 + */ + +TXGBE_PARAM(vf_alloc_mode, + "Change VF Alloc Mode (0 = 2*64, 1 = 4*32, 2 = 8*16)"); + +#define TXGBE_2Q 0 +#define TXGBE_4Q 1 +#define TXGBE_8Q 2 +#define TXGBE_DEFAULT_NUMQ TXGBE_2Q + +/* IntMode (Interrupt Mode) + * + * Valid Range: 0-2 + * - 0 - Legacy Interrupt + * - 1 - MSI Interrupt + * - 2 - MSI-X Interrupt(s) + * + * Default Value: 2 + */ + +TXGBE_PARAM(InterruptType, + "Change Interrupt Mode (0=Legacy, 1=MSI, 2=MSI-X), " + "default IntMode (deprecated)"); + +TXGBE_PARAM(IntMode, + "Change Interrupt Mode (0=Legacy, 1=MSI, 2=MSI-X), " + "default 2"); + +#define TXGBE_INT_LEGACY 0 +#define TXGBE_INT_MSI 1 +#define TXGBE_INT_MSIX 2 +#define TXGBE_DEFAULT_INT TXGBE_INT_MSIX + +/* MQ - Multiple Queue enable/disable + * + * Valid Range: 0, 1 + * - 0 - disables MQ + * - 1 - enables MQ + * + * Default Value: 1 + */ + +TXGBE_PARAM(MQ, + "Disable or enable Multiple Queues, default 1"); + +/* RSS - Receive-Side Scaling (RSS) Descriptor Queues + * + * Valid Range: 0-64 + * - 0 - enables RSS and sets the Desc. Q's to min(64, num_online_cpus()). + * - 1-64 - enables RSS and sets the Desc. Q's to the specified value. + * + * Default Value: 0 + */ + +TXGBE_PARAM(RSS, + "Number of Receive-Side Scaling Descriptor Queues, " + "default 0=number of cpus"); + +/* VMDQ - Virtual Machine Device Queues (VMDQ) + * + * Valid Range: 1-16 + * - 1 - Disables VMDQ by allocating only a single queue. + * - 2-16 - enables VMDQ and sets the Desc. Q's to the specified value.
+ * + * Default Value: 1 + */ + +#define TXGBE_DEFAULT_NUM_VMDQ 8 + +TXGBE_PARAM(VMDQ, + "Number of Virtual Machine Device Queues: 0/1 = disable, " + "2-16 enable (default=" XSTRINGIFY(TXGBE_DEFAULT_NUM_VMDQ) ")"); + +/* Interrupt Throttle Rate (interrupts/sec) + * + * Valid Range: 980-500000 (0=off, 1=dynamic) + * + * Default Value: 1 + */ +#define DEFAULT_ITR 1 +TXGBE_PARAM(InterruptThrottleRate, + "Maximum interrupts per second, per vector, " + "(0,1,980-500000), default 1"); + +#define MAX_ITR TXGBE_MAX_INT_RATE +#define MIN_ITR TXGBE_MIN_INT_RATE + +/* LLIPort (Low Latency Interrupt TCP Port) + * + * Valid Range: 0 - 65535 + * + * Default Value: 0 (disabled) + */ +TXGBE_PARAM(LLIPort, + "Low Latency Interrupt TCP Port (0-65535)"); + +#define DEFAULT_LLIPORT 0 +#define MAX_LLIPORT 0xFFFF +#define MIN_LLIPORT 0 + +/* LLISize (Low Latency Interrupt on Packet Size) + * + * Valid Range: 0 - 1500 + * + * Default Value: 0 (disabled) + */ + +TXGBE_PARAM(LLISize, + "Low Latency Interrupt on Packet Size (0-1500)"); + +#define DEFAULT_LLISIZE 0 +#define MAX_LLISIZE 1500 +#define MIN_LLISIZE 0 + +/* LLIEType (Low Latency Interrupt Ethernet Type) + * + * Valid Range: 0 - 0x8fff + * + * Default Value: 0 (disabled) + */ + +TXGBE_PARAM(LLIEType, + "Low Latency Interrupt Ethernet Protocol Type"); + +#define DEFAULT_LLIETYPE 0 +#define MAX_LLIETYPE 0x8fff +#define MIN_LLIETYPE 0 + +/* LLIVLANP (Low Latency Interrupt on VLAN priority threshold) + * + * Valid Range: 0 - 7 + * + * Default Value: 0 (disabled) + */ + +TXGBE_PARAM(LLIVLANP, + "Low Latency Interrupt on VLAN priority threshold"); + +#define DEFAULT_LLIVLANP 0 +#define MAX_LLIVLANP 7 +#define MIN_LLIVLANP 0 + +/* Flow Director packet buffer allocation level + * + * Valid Range: 1-3 + * 1 = 8k hash/2k perfect, + * 2 = 16k hash/4k perfect, + * 3 = 32k hash/8k perfect + * + * Default Value: 0 + */ + +TXGBE_PARAM(FdirPballoc, + "Flow Director packet buffer allocation level:\n" + "\t\t\t1 = 8k hash filters or 2k perfect 
filters\n" + "\t\t\t2 = 16k hash filters or 4k perfect filters\n" + "\t\t\t3 = 32k hash filters or 8k perfect filters"); + +#define TXGBE_DEFAULT_FDIR_PBALLOC TXGBE_FDIR_PBALLOC_64K + +/* Software ATR packet sample rate + * + * Valid Range: 0-255 0 = off, 1-255 = rate of Tx packet inspection + * + * Default Value: 20 + */ + +TXGBE_PARAM(AtrSampleRate, + "Software ATR Tx packet sample rate"); + +#define TXGBE_MAX_ATR_SAMPLE_RATE 255 +#define TXGBE_MIN_ATR_SAMPLE_RATE 1 +#define TXGBE_ATR_SAMPLE_RATE_OFF 0 +#define TXGBE_DEFAULT_ATR_SAMPLE_RATE 20 + +/* Enable/disable Large Receive Offload + * + * Valid Values: 0(off), 1(on) + * + * Default Value: 1 + */ + +TXGBE_PARAM(LRO, + "Large Receive Offload (0,1), default 1 = on"); + +/* Enable/disable support for untested SFP+ modules on adapters + * + * Valid Values: 0(Disable), 1(Enable) + * + * Default Value: 0 + */ + +TXGBE_PARAM(allow_unsupported_sfp, + "Allow unsupported and untested " + "SFP+ modules on adapters, default 0 = Disable"); + +/* Enable/disable support for DMA coalescing + * + * Valid Values: 0(off), 41 - 10000(on) + * + * Default Value: 0 + */ + +TXGBE_PARAM(dmac_watchdog, + "DMA coalescing watchdog in microseconds (0,41-10000)," + "default 0 = off"); + +/* Enable/disable support for VXLAN rx checksum offload + * + * Valid Values: 0(Disable), 1(Enable) + * + * Default Value: 1 on hardware that supports it + */ + +TXGBE_PARAM(vxlan_rx, + "VXLAN receive checksum offload (0,1), default 1 = Enable"); + +/* Rx buffer mode + * + * Valid Range: 0-1 0 = no header split, 1 = hdr split + * + * Default Value: 0 + */ + +TXGBE_PARAM(RxBufferMode, + "0=(default)no header split\n" + "\t\t\t1=hdr split for recognized packet\n"); + +#define TXGBE_RXBUFMODE_NO_HEADER_SPLIT 0 +#define TXGBE_RXBUFMODE_HEADER_SPLIT 1 +#define TXGBE_DEFAULT_RXBUFMODE TXGBE_RXBUFMODE_NO_HEADER_SPLIT + +/* Cloud Switch mode + * + * Valid Range: 0-1 0 = disable Cloud Switch, 1 = enable Cloud Switch + * + * Default Value: 0 + */ + 
+TXGBE_PARAM(CloudSwitch, + "Cloud Switch (0,1), default 0 = disable, 1 = enable"); + +struct txgbe_option { + enum { enable_option, range_option, list_option } type; + const char *name; + const char *err; + const char *msg; + int def; + union { + struct { /* range_option info */ + int min; + int max; + } r; + struct { /* list_option info */ + int nr; + const struct txgbe_opt_list { + int i; + char *str; + } *p; + } l; + } arg; +}; + +static int txgbe_validate_option(u32 *value, + struct txgbe_option *opt) +{ + int val = (int)*value; + + if (val == OPTION_UNSET) { + txgbe_info("txgbe: Invalid %s specified (%d), %s\n", + opt->name, val, opt->err); + *value = (u32)opt->def; + return 0; + } + + switch (opt->type) { + case enable_option: + switch (val) { + case OPTION_ENABLED: + txgbe_info("txgbe: %s Enabled\n", opt->name); + return 0; + case OPTION_DISABLED: + txgbe_info("txgbe: %s Disabled\n", opt->name); + return 0; + } + break; + case range_option: + if ((val >= opt->arg.r.min && val <= opt->arg.r.max) || + val == opt->def) { + if (opt->msg) + txgbe_info("txgbe: %s set to %d, %s\n", + opt->name, val, opt->msg); + else + txgbe_info("txgbe: %s set to %d\n", + opt->name, val); + return 0; + } + break; + case list_option: { + int i; + const struct txgbe_opt_list *ent; + + for (i = 0; i < opt->arg.l.nr; i++) { + ent = &opt->arg.l.p[i]; + if (val == ent->i) { + if (ent->str[0] != '\0') + txgbe_info("%s\n", ent->str); + return 0; + } + } + } + break; + default: + BUG_ON(1); + } + + txgbe_info("txgbe: Invalid %s specified (%d), %s\n", + opt->name, val, opt->err); + *value = (u32)opt->def; + return -1; +} + +/** + * txgbe_check_options - Range Checking for Command Line Parameters + * @adapter: board private structure + * + * This routine checks all command line parameters for valid user + * input. If an invalid value is given, or if no user specified + * value exists, a default value is used. The final value is stored + * in a variable in the adapter structure. 
+ **/ +void txgbe_check_options(struct txgbe_adapter *adapter) +{ + u32 bd = adapter->bd_number; + u32 *aflags = &adapter->flags; + struct txgbe_ring_feature *feature = adapter->ring_feature; + u32 vmdq; + + if (bd >= TXGBE_MAX_NIC) { + txgbe_notice( + "Warning: no configuration for board #%d\n", bd); + txgbe_notice("Using defaults for all values\n"); + } + { /* MAIN */ + u32 ffe_main; + static struct txgbe_option opt = { + .type = range_option, + .name = "FFE_MAIN", + .err = + "using default of "__MODULE_STRING(TXGBE_DEFAULT_FFE_MAIN), + .def = TXGBE_DEFAULT_FFE_MAIN, + .arg = { .r = { .min = 0, + .max = 60} } + }; + + if (num_FFE_MAIN > bd) { + ffe_main = FFE_MAIN[bd]; + if (ffe_main == OPTION_UNSET) + ffe_main = FFE_MAIN[bd]; + txgbe_validate_option(&ffe_main, &opt); + adapter->ffe_main = ffe_main; + } else { + adapter->ffe_main = 27; + } + } + + { /* PRE */ + u32 ffe_pre; + static struct txgbe_option opt = { + .type = range_option, + .name = "FFE_PRE", + .err = + "using default of "__MODULE_STRING(TXGBE_DEFAULT_FFE_PRE), + .def = TXGBE_DEFAULT_FFE_PRE, + .arg = { .r = { .min = 0, + .max = 60} } + }; + + if (num_FFE_PRE > bd) { + ffe_pre = FFE_PRE[bd]; + if (ffe_pre == OPTION_UNSET) + ffe_pre = FFE_PRE[bd]; + txgbe_validate_option(&ffe_pre, &opt); + adapter->ffe_pre = ffe_pre; + } else { + adapter->ffe_pre = 8; + } + } + + { /* POST */ + u32 ffe_post; + static struct txgbe_option opt = { + .type = range_option, + .name = "FFE_POST", + .err = + "using default of "__MODULE_STRING(TXGBE_DEFAULT_FFE_POST), + .def = TXGBE_DEFAULT_FFE_POST, + .arg = { .r = { .min = 0, + .max = 60} } + }; + + if (num_FFE_POST > bd) { + ffe_post = FFE_POST[bd]; + if (ffe_post == OPTION_UNSET) + ffe_post = FFE_POST[bd]; + txgbe_validate_option(&ffe_post, &opt); + adapter->ffe_post = ffe_post; + } else { + adapter->ffe_post = 44; + } + } + + { /* ffe_set */ + u32 ffe_set; + static struct txgbe_option opt = { + .type = range_option, + .name = "FFE_SET", + .err = + "using default of 
"__MODULE_STRING(TXGBE_DEFAULT_FFE_SET), + .def = TXGBE_DEFAULT_FFE_SET, + .arg = { .r = { .min = 0, + .max = 4} } + }; + + if (num_FFE_SET > bd) { + ffe_set = FFE_SET[bd]; + if (ffe_set == OPTION_UNSET) + ffe_set = FFE_SET[bd]; + txgbe_validate_option(&ffe_set, &opt); + adapter->ffe_set = ffe_set; + } else { + adapter->ffe_set = 0; + } + } + + { /* backplane_mode */ + u32 bp_mode; + static struct txgbe_option opt = { + .type = range_option, + .name = "backplane_mode", + .err = + "using default of "__MODULE_STRING(TXGBE_DEFAULT_BP_MODE), + .def = TXGBE_DEFAULT_BP_MODE, + .arg = { .r = { .min = 0, + .max = 4} } + }; + + if (num_backplane_mode > bd) { + bp_mode = backplane_mode[bd]; + if (bp_mode == OPTION_UNSET) + bp_mode = backplane_mode[bd]; + txgbe_validate_option(&bp_mode, &opt); + adapter->backplane_mode = bp_mode; + } else { + adapter->backplane_mode = 0; + } + } + + { /* auto mode */ + u32 bp_auto; + static struct txgbe_option opt = { + .type = range_option, + .name = "bp_auto", + .err = + "using default of "__MODULE_STRING(TXGBE_DEFAULT_BP_AUTO), + .def = TXGBE_DEFAULT_BP_AUTO, + .arg = { .r = { .min = 0, + .max = 2} } + }; + + if (num_backplane_auto > bd) { + bp_auto = backplane_auto[bd]; + if (bp_auto == OPTION_UNSET) + bp_auto = backplane_auto[bd]; + txgbe_validate_option(&bp_auto, &opt); + adapter->backplane_auto = bp_auto; + } else { + adapter->backplane_auto = -1; + } + } + + { /* VF_alloc_mode */ + u32 vf_mode; + static struct txgbe_option opt = { + .type = range_option, + .name = "vf_alloc_mode", + .err = + "using default of "__MODULE_STRING(TXGBE_DEFAULT_NUMQ), + .def = TXGBE_DEFAULT_NUMQ, + .arg = { .r = { .min = TXGBE_2Q, + .max = TXGBE_8Q} } + }; + + if (num_vf_alloc_mode > bd) { + vf_mode = vf_alloc_mode[bd]; + if (vf_mode == OPTION_UNSET) + vf_mode = vf_alloc_mode[bd]; + txgbe_validate_option(&vf_mode, &opt); + switch (vf_mode) { + case TXGBE_8Q: + adapter->vf_mode = 15; + break; + case TXGBE_4Q: + adapter->vf_mode = 31; + break; + case 
TXGBE_2Q: + default: + adapter->vf_mode = 63; + break; + } + } else { + adapter->vf_mode = 63; + } + } + { /* Interrupt Mode */ + u32 int_mode; + static struct txgbe_option opt = { + .type = range_option, + .name = "Interrupt Mode", + .err = + "using default of "__MODULE_STRING(TXGBE_DEFAULT_INT), + .def = TXGBE_DEFAULT_INT, + .arg = { .r = { .min = TXGBE_INT_LEGACY, + .max = TXGBE_INT_MSIX} } + }; + + if (num_IntMode > bd || num_InterruptType > bd) { + int_mode = IntMode[bd]; + if (int_mode == OPTION_UNSET) + int_mode = InterruptType[bd]; + txgbe_validate_option(&int_mode, &opt); + switch (int_mode) { + case TXGBE_INT_MSIX: + if (!(*aflags & TXGBE_FLAG_MSIX_CAPABLE)) + txgbe_info( + "Ignoring MSI-X setting; " + "support unavailable\n"); + break; + case TXGBE_INT_MSI: + if (!(*aflags & TXGBE_FLAG_MSI_CAPABLE)) { + txgbe_info( + "Ignoring MSI setting; " + "support unavailable\n"); + } else { + *aflags &= ~TXGBE_FLAG_MSIX_CAPABLE; + } + break; + case TXGBE_INT_LEGACY: + default: + *aflags &= ~TXGBE_FLAG_MSIX_CAPABLE; + *aflags &= ~TXGBE_FLAG_MSI_CAPABLE; + break; + } + } else { + /* default settings */ + if (opt.def == TXGBE_INT_MSIX && + *aflags & TXGBE_FLAG_MSIX_CAPABLE) { + *aflags |= TXGBE_FLAG_MSIX_CAPABLE; + *aflags |= TXGBE_FLAG_MSI_CAPABLE; + } else if (opt.def == TXGBE_INT_MSI && + *aflags & TXGBE_FLAG_MSI_CAPABLE) { + *aflags &= ~TXGBE_FLAG_MSIX_CAPABLE; + *aflags |= TXGBE_FLAG_MSI_CAPABLE; + } else { + *aflags &= ~TXGBE_FLAG_MSIX_CAPABLE; + *aflags &= ~TXGBE_FLAG_MSI_CAPABLE; + } + } + } + { /* Multiple Queue Support */ + static struct txgbe_option opt = { + .type = enable_option, + .name = "Multiple Queue Support", + .err = "defaulting to Enabled", + .def = OPTION_ENABLED + }; + + if (num_MQ > bd) { + u32 mq = MQ[bd]; + txgbe_validate_option(&mq, &opt); + if (mq) + *aflags |= TXGBE_FLAG_MQ_CAPABLE; + else + *aflags &= ~TXGBE_FLAG_MQ_CAPABLE; + } else { + if (opt.def == OPTION_ENABLED) + *aflags |= TXGBE_FLAG_MQ_CAPABLE; + else + *aflags &= 
~TXGBE_FLAG_MQ_CAPABLE; + } + /* Check Interoperability */ + if ((*aflags & TXGBE_FLAG_MQ_CAPABLE) && + !(*aflags & TXGBE_FLAG_MSIX_CAPABLE)) { + DPRINTK(PROBE, INFO, + "Multiple queues are not supported while MSI-X " + "is disabled. Disabling Multiple Queues.\n"); + *aflags &= ~TXGBE_FLAG_MQ_CAPABLE; + } + } + + { /* Receive-Side Scaling (RSS) */ + static struct txgbe_option opt = { + .type = range_option, + .name = "Receive-Side Scaling (RSS)", + .err = "using default.", + .def = 0, + .arg = { .r = { .min = 0, + .max = 1} } + }; + u32 rss = RSS[bd]; + /* adjust Max allowed RSS queues based on MAC type */ + opt.arg.r.max = txgbe_max_rss_indices(adapter); + + if (num_RSS > bd) { + txgbe_validate_option(&rss, &opt); + /* base it off num_online_cpus() with hardware limit */ + if (!rss) + rss = min_t(int, opt.arg.r.max, + num_online_cpus()); + else + feature[RING_F_FDIR].limit = (u16)rss; + + feature[RING_F_RSS].limit = (u16)rss; + } else if (opt.def == 0) { + rss = min_t(int, txgbe_max_rss_indices(adapter), + num_online_cpus()); + feature[RING_F_RSS].limit = rss; + } + /* Check Interoperability */ + if (rss > 1) { + if (!(*aflags & TXGBE_FLAG_MQ_CAPABLE)) { + DPRINTK(PROBE, INFO, + "Multiqueue is disabled. 
" + "Limiting RSS.\n"); + feature[RING_F_RSS].limit = 1; + } + } + adapter->flags2 |= TXGBE_FLAG2_RSS_ENABLED; + } + { /* Virtual Machine Device Queues (VMDQ) */ + static struct txgbe_option opt = { + .type = range_option, + .name = "Virtual Machine Device Queues (VMDQ)", + .err = "defaulting to Disabled", + .def = OPTION_DISABLED, + .arg = { .r = { .min = OPTION_DISABLED, + .max = TXGBE_MAX_VMDQ_INDICES + } } + }; + + if (num_VMDQ > bd) { + vmdq = VMDQ[bd]; + + txgbe_validate_option(&vmdq, &opt); + + /* zero or one both mean disabled from our driver's + * perspective */ + if (vmdq > 1) { + *aflags |= TXGBE_FLAG_VMDQ_ENABLED; + } else + *aflags &= ~TXGBE_FLAG_VMDQ_ENABLED; + + feature[RING_F_VMDQ].limit = (u16)vmdq; + } else { + if (opt.def == OPTION_DISABLED) + *aflags &= ~TXGBE_FLAG_VMDQ_ENABLED; + else + *aflags |= TXGBE_FLAG_VMDQ_ENABLED; + + feature[RING_F_VMDQ].limit = opt.def; + } + /* Check Interoperability */ + if (*aflags & TXGBE_FLAG_VMDQ_ENABLED) { + if (!(*aflags & TXGBE_FLAG_MQ_CAPABLE)) { + DPRINTK(PROBE, INFO, + "VMDQ is not supported while multiple " + "queues are disabled. 
" + "Disabling VMDQ.\n"); + *aflags &= ~TXGBE_FLAG_VMDQ_ENABLED; + feature[RING_F_VMDQ].limit = 0; + } + } + } + + { /* Interrupt Throttling Rate */ + static struct txgbe_option opt = { + .type = range_option, + .name = "Interrupt Throttling Rate (ints/sec)", + .err = "using default of "__MODULE_STRING(DEFAULT_ITR), + .def = DEFAULT_ITR, + .arg = { .r = { .min = MIN_ITR, + .max = MAX_ITR } } + }; + + if (num_InterruptThrottleRate > bd) { + u32 itr = InterruptThrottleRate[bd]; + switch (itr) { + case 0: + DPRINTK(PROBE, INFO, "%s turned off\n", + opt.name); + adapter->rx_itr_setting = 0; + break; + case 1: + DPRINTK(PROBE, INFO, "dynamic interrupt " + "throttling enabled\n"); + adapter->rx_itr_setting = 1; + break; + default: + txgbe_validate_option(&itr, &opt); + /* the first bit is used as control */ + adapter->rx_itr_setting = (u16)((1000000/itr) << 2); + break; + } + adapter->tx_itr_setting = adapter->rx_itr_setting; + } else { + adapter->rx_itr_setting = opt.def; + adapter->tx_itr_setting = opt.def; + } + } + + { /* Low Latency Interrupt TCP Port*/ + static struct txgbe_option opt = { + .type = range_option, + .name = "Low Latency Interrupt TCP Port", + .err = "using default of " + __MODULE_STRING(DEFAULT_LLIPORT), + .def = DEFAULT_LLIPORT, + .arg = { .r = { .min = MIN_LLIPORT, + .max = MAX_LLIPORT } } + }; + + if (num_LLIPort > bd) { + adapter->lli_port = LLIPort[bd]; + if (adapter->lli_port) { + txgbe_validate_option(&adapter->lli_port, &opt); + } else { + DPRINTK(PROBE, INFO, "%s turned off\n", + opt.name); + } + } else { + adapter->lli_port = opt.def; + } + } + { /* Low Latency Interrupt on Packet Size */ + static struct txgbe_option opt = { + .type = range_option, + .name = "Low Latency Interrupt on Packet Size", + .err = "using default of " + __MODULE_STRING(DEFAULT_LLISIZE), + .def = DEFAULT_LLISIZE, + .arg = { .r = { .min = MIN_LLISIZE, + .max = MAX_LLISIZE } } + }; + + if (num_LLISize > bd) { + adapter->lli_size = LLISize[bd]; + if (adapter->lli_size) 
{ + txgbe_validate_option(&adapter->lli_size, &opt); + } else { + DPRINTK(PROBE, INFO, "%s turned off\n", + opt.name); + } + } else { + adapter->lli_size = opt.def; + } + } + { /* Low Latency Interrupt EtherType*/ + static struct txgbe_option opt = { + .type = range_option, + .name = "Low Latency Interrupt on Ethernet Protocol " + "Type", + .err = "using default of " + __MODULE_STRING(DEFAULT_LLIETYPE), + .def = DEFAULT_LLIETYPE, + .arg = { .r = { .min = MIN_LLIETYPE, + .max = MAX_LLIETYPE } } + }; + + if (num_LLIEType > bd) { + adapter->lli_etype = LLIEType[bd]; + if (adapter->lli_etype) { + txgbe_validate_option(&adapter->lli_etype, + &opt); + } else { + DPRINTK(PROBE, INFO, "%s turned off\n", + opt.name); + } + } else { + adapter->lli_etype = opt.def; + } + } + { /* LLI VLAN Priority */ + static struct txgbe_option opt = { + .type = range_option, + .name = "Low Latency Interrupt on VLAN priority " + "threshold", + .err = "using default of " + __MODULE_STRING(DEFAULT_LLIVLANP), + .def = DEFAULT_LLIVLANP, + .arg = { .r = { .min = MIN_LLIVLANP, + .max = MAX_LLIVLANP } } + }; + + if (num_LLIVLANP > bd) { + adapter->lli_vlan_pri = LLIVLANP[bd]; + if (adapter->lli_vlan_pri) { + txgbe_validate_option(&adapter->lli_vlan_pri, + &opt); + } else { + DPRINTK(PROBE, INFO, "%s turned off\n", + opt.name); + } + } else { + adapter->lli_vlan_pri = opt.def; + } + } + + { /* Flow Director packet buffer allocation */ + u32 fdir_pballoc_mode; + static struct txgbe_option opt = { + .type = range_option, + .name = "Flow Director packet buffer allocation", + .err = "using default of " + __MODULE_STRING(TXGBE_DEFAULT_FDIR_PBALLOC), + .def = TXGBE_DEFAULT_FDIR_PBALLOC, + .arg = {.r = {.min = TXGBE_FDIR_PBALLOC_64K, + .max = TXGBE_FDIR_PBALLOC_256K} } + }; + const char *pstring; + + if (num_FdirPballoc > bd) { + fdir_pballoc_mode = FdirPballoc[bd]; + txgbe_validate_option(&fdir_pballoc_mode, &opt); + switch (fdir_pballoc_mode) { + case TXGBE_FDIR_PBALLOC_256K: + adapter->fdir_pballoc = 
TXGBE_FDIR_PBALLOC_256K; + pstring = "256kB"; + break; + case TXGBE_FDIR_PBALLOC_128K: + adapter->fdir_pballoc = TXGBE_FDIR_PBALLOC_128K; + pstring = "128kB"; + break; + case TXGBE_FDIR_PBALLOC_64K: + default: + adapter->fdir_pballoc = TXGBE_FDIR_PBALLOC_64K; + pstring = "64kB"; + break; + } + DPRINTK(PROBE, INFO, "Flow Director will be allocated " + "%s of packet buffer\n", pstring); + } else { + adapter->fdir_pballoc = opt.def; + } + + } + { /* Flow Director ATR Tx sample packet rate */ + static struct txgbe_option opt = { + .type = range_option, + .name = "Software ATR Tx packet sample rate", + .err = "using default of " + __MODULE_STRING(TXGBE_DEFAULT_ATR_SAMPLE_RATE), + .def = TXGBE_DEFAULT_ATR_SAMPLE_RATE, + .arg = {.r = {.min = TXGBE_ATR_SAMPLE_RATE_OFF, + .max = TXGBE_MAX_ATR_SAMPLE_RATE} } + }; + static const char atr_string[] = + "ATR Tx Packet sample rate set to"; + + if (num_AtrSampleRate > bd) { + adapter->atr_sample_rate = AtrSampleRate[bd]; + + if (adapter->atr_sample_rate) { + txgbe_validate_option(&adapter->atr_sample_rate, + &opt); + DPRINTK(PROBE, INFO, "%s %d\n", atr_string, + adapter->atr_sample_rate); + } + } else { + adapter->atr_sample_rate = opt.def; + } + } + + { /* LRO - Set Large Receive Offload */ + struct txgbe_option opt = { + .type = enable_option, + .name = "LRO - Large Receive Offload", + .err = "defaulting to Disabled", + .def = OPTION_ENABLED + }; + struct net_device *netdev = adapter->netdev; + + if (!(adapter->flags2 & TXGBE_FLAG2_RSC_CAPABLE)) + opt.def = OPTION_DISABLED; + if (num_LRO > bd) { + u32 lro = LRO[bd]; + txgbe_validate_option(&lro, &opt); + if (lro) + netdev->features |= NETIF_F_LRO; + else + netdev->features &= ~NETIF_F_LRO; + } else if (opt.def == OPTION_ENABLED) { + netdev->features |= NETIF_F_LRO; + } else { + netdev->features &= ~NETIF_F_LRO; + } + + if ((netdev->features & NETIF_F_LRO) && + !(adapter->flags2 & TXGBE_FLAG2_RSC_CAPABLE)) { + DPRINTK(PROBE, INFO, + "RSC is not supported on this " + "hardware. 
Disabling RSC.\n"); + netdev->features &= ~NETIF_F_LRO; + } + } + { /* + * allow_unsupported_sfp - Enable/Disable support for unsupported + * and untested SFP+ modules. + */ + struct txgbe_option opt = { + .type = enable_option, + .name = "allow_unsupported_sfp", + .err = "defaulting to Disabled", + .def = OPTION_DISABLED + }; + if (num_allow_unsupported_sfp > bd) { + u32 enable_unsupported_sfp = + allow_unsupported_sfp[bd]; + txgbe_validate_option(&enable_unsupported_sfp, &opt); + if (enable_unsupported_sfp) { + adapter->hw.allow_unsupported_sfp = true; + } else { + adapter->hw.allow_unsupported_sfp = false; + } + } else if (opt.def == OPTION_ENABLED) { + adapter->hw.allow_unsupported_sfp = true; + } else { + adapter->hw.allow_unsupported_sfp = false; + } + } + + { /* DMA Coalescing */ + struct txgbe_option opt = { + .type = range_option, + .name = "dmac_watchdog", + .err = "defaulting to 0 (disabled)", + .def = 0, + .arg = { .r = { .min = 41, .max = 10000 } }, + }; + const char *cmsg = "DMA coalescing not supported on this " + "hardware"; + + opt.err = cmsg; + opt.msg = cmsg; + opt.arg.r.min = 0; + opt.arg.r.max = 0; + + if (num_dmac_watchdog > bd) { + u32 dmac_wd = dmac_watchdog[bd]; + + txgbe_validate_option(&dmac_wd, &opt); + adapter->hw.mac.dmac_config.watchdog_timer = (u16)dmac_wd; + } else { + adapter->hw.mac.dmac_config.watchdog_timer = opt.def; + } + } + { /* VXLAN rx offload */ + struct txgbe_option opt = { + .type = range_option, + .name = "vxlan_rx", + .err = "defaulting to 1 (enabled)", + .def = 1, + .arg = { .r = { .min = 0, .max = 1 } }, + }; + const char *cmsg = "VXLAN rx offload not supported on this " + "hardware"; + const u32 flag = TXGBE_FLAG_VXLAN_OFFLOAD_ENABLE; + + if (!(adapter->flags & TXGBE_FLAG_VXLAN_OFFLOAD_CAPABLE)) { + opt.err = cmsg; + opt.msg = cmsg; + opt.def = 0; + opt.arg.r.max = 0; + } + if (num_vxlan_rx > bd) { + u32 enable_vxlan_rx = vxlan_rx[bd]; + + txgbe_validate_option(&enable_vxlan_rx, &opt); + if (enable_vxlan_rx) + 
adapter->flags |= flag; + else + adapter->flags &= ~flag; + } else if (opt.def) { + adapter->flags |= flag; + } else { + adapter->flags &= ~flag; + } + } + + { /* Rx buffer mode */ + u32 rx_buf_mode; + static struct txgbe_option opt = { + .type = range_option, + .name = "Rx buffer mode", + .err = "using default of " + __MODULE_STRING(TXGBE_DEFAULT_RXBUFMODE), + .def = TXGBE_DEFAULT_RXBUFMODE, + .arg = {.r = {.min = TXGBE_RXBUFMODE_NO_HEADER_SPLIT, + .max = TXGBE_RXBUFMODE_HEADER_SPLIT} } + + }; + + if (num_RxBufferMode > bd) { + rx_buf_mode = RxBufferMode[bd]; + txgbe_validate_option(&rx_buf_mode, &opt); + switch (rx_buf_mode) { + case TXGBE_RXBUFMODE_NO_HEADER_SPLIT: + *aflags &= ~TXGBE_FLAG_RX_HS_ENABLED; + break; + case TXGBE_RXBUFMODE_HEADER_SPLIT: + *aflags |= TXGBE_FLAG_RX_HS_ENABLED; + break; + default: + break; + } + } else { + *aflags &= ~TXGBE_FLAG_RX_HS_ENABLED; + } + + } + { /* Cloud Switch */ + struct txgbe_option opt = { + .type = range_option, + .name = "CloudSwitch", + .err = "defaulting to 0 (disabled)", + .def = 0, + .arg = { .r = { .min = 0, .max = 1 } }, + }; + + if (num_CloudSwitch > bd) { + u32 enable_cloudswitch = CloudSwitch[bd]; + + txgbe_validate_option(&enable_cloudswitch, &opt); + if (enable_cloudswitch) + adapter->flags |= + TXGBE_FLAG2_CLOUD_SWITCH_ENABLED; + else + adapter->flags &= + ~TXGBE_FLAG2_CLOUD_SWITCH_ENABLED; + } else if (opt.def) { + adapter->flags |= TXGBE_FLAG2_CLOUD_SWITCH_ENABLED; + } else { + adapter->flags &= ~TXGBE_FLAG2_CLOUD_SWITCH_ENABLED; + } + } +} diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe_phy.c b/drivers/net/ethernet/netswift/txgbe/txgbe_phy.c new file mode 100644 index 000000000000..2db6541f95a1 --- /dev/null +++ b/drivers/net/ethernet/netswift/txgbe/txgbe_phy.c @@ -0,0 +1,1014 @@ +/* + * WangXun 10 Gigabit PCI Express Linux driver + * Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd. 
+ * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * The full GNU General Public License is included in this distribution in + * the file called "COPYING". + * + * based on ixgbe_phy.c, Copyright(c) 1999 - 2017 Intel Corporation. + * Contact Information: + * Linux NICS linux.nics@intel.com + * e1000-devel Mailing List e1000-devel@lists.sourceforge.net + * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497 + */ + +#include "txgbe_phy.h" +#include "txgbe_mtd.h" + +/** + * txgbe_check_reset_blocked - check status of MNG FW veto bit + * @hw: pointer to the hardware structure + * + * This function checks the MMNGC.MNG_VETO bit to see if there are + * any constraints on link from manageability. For MACs that don't + * have this bit, just return false since the link cannot be blocked + * via this method.
+ **/ +s32 txgbe_check_reset_blocked(struct txgbe_hw *hw) +{ + u32 mmngc; + + DEBUGFUNC("\n"); + + mmngc = rd32(hw, TXGBE_MIS_ST); + if (mmngc & TXGBE_MIS_ST_MNG_VETO) { + ERROR_REPORT1(TXGBE_ERROR_SOFTWARE, + "MNG_VETO bit detected.\n"); + return true; + } + + return false; +} + + +/** + * txgbe_get_phy_id - Get the phy type + * @hw: pointer to hardware structure + * + **/ +s32 txgbe_get_phy_id(struct txgbe_hw *hw) +{ + u32 status; + u16 phy_id_high = 0; + u16 phy_id_low = 0; + u8 numport, thisport; + DEBUGFUNC("\n"); + + status = mtdHwXmdioRead(&hw->phy_dev, hw->phy.addr, + TXGBE_MDIO_PMA_PMD_DEV_TYPE, + TXGBE_MDIO_PHY_ID_HIGH, &phy_id_high); + + if (status == 0) { + hw->phy.id = (u32)(phy_id_high << 16); + status = mtdHwXmdioRead(&hw->phy_dev, hw->phy.addr, + TXGBE_MDIO_PMA_PMD_DEV_TYPE, + TXGBE_MDIO_PHY_ID_LOW, &phy_id_low); + hw->phy.id |= (u32)(phy_id_low & TXGBE_PHY_REVISION_MASK); + } + + if (status == 0) { + status = mtdGetPhyRevision(&hw->phy_dev, hw->phy.addr, + (MTD_DEVICE_ID *)&hw->phy.revision, &numport, &thisport); + if (status == MTD_FAIL) { + ERROR_REPORT1(TXGBE_ERROR_INVALID_STATE, + "Error in mtdGetPhyRevision()\n"); + } + } + return status; +} + +/** + * txgbe_get_phy_type_from_id - Get the phy type + * @phy_id: PHY ID information + * + **/ +enum txgbe_phy_type txgbe_get_phy_type_from_id(struct txgbe_hw *hw) +{ + enum txgbe_phy_type phy_type; + u16 ext_ability = 0; + + DEBUGFUNC("\n"); + + switch (hw->phy.id) { + case TN1010_PHY_ID: + phy_type = txgbe_phy_tn; + break; + case QT2022_PHY_ID: + phy_type = txgbe_phy_qt; + break; + case ATH_PHY_ID: + phy_type = txgbe_phy_nl; + break; + default: + phy_type = txgbe_phy_unknown; + break; + } + if (phy_type == txgbe_phy_unknown) { + mtdHwXmdioRead(&hw->phy_dev, hw->phy.addr, + TXGBE_MDIO_PMA_PMD_DEV_TYPE, + TXGBE_MDIO_PHY_EXT_ABILITY, &ext_ability); + + if (ext_ability & (TXGBE_MDIO_PHY_10GBASET_ABILITY | + TXGBE_MDIO_PHY_1000BASET_ABILITY)) + phy_type = txgbe_phy_cu_unknown; + else + phy_type = 
txgbe_phy_generic; + } + return phy_type; +} + +/** + * txgbe_reset_phy - Performs a PHY reset + * @hw: pointer to hardware structure + **/ +s32 txgbe_reset_phy(struct txgbe_hw *hw) +{ + s32 status = 0; + + DEBUGFUNC("\n"); + + + if (status != 0 || hw->phy.type == txgbe_phy_none) + goto out; + + /* Don't reset PHY if it's shut down due to overtemp. */ + if (!hw->phy.reset_if_overtemp && + (TXGBE_ERR_OVERTEMP == TCALL(hw, phy.ops.check_overtemp))) + goto out; + + /* Blocked by MNG FW so bail */ + txgbe_check_reset_blocked(hw); + if (((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP) || + ((hw->subsystem_device_id & TXGBE_WOL_MASK) == TXGBE_WOL_SUP)) + goto out; + + status = mtdHardwareReset(&hw->phy_dev, hw->phy.addr, 1000); + +out: + return status; +} + +/** + * txgbe_read_phy_reg_mdi - Reads a value from a specified PHY register without + * the SWFW lock + * @hw: pointer to hardware structure + * @reg_addr: 32 bit address of PHY register to read + * @device_type: 5 bit device type + * @phy_data: Pointer to read data from PHY register + **/ +s32 txgbe_read_phy_reg_mdi(struct txgbe_hw *hw, u32 reg_addr, u32 device_type, + u16 *phy_data) +{ + u32 command; + s32 status = 0; + + /* setup and write the address cycle command */ + command = TXGBE_MSCA_RA(reg_addr) | + TXGBE_MSCA_PA(hw->phy.addr) | + TXGBE_MSCA_DA(device_type); + wr32(hw, TXGBE_MSCA, command); + + command = TXGBE_MSCC_CMD(TXGBE_MSCA_CMD_READ) | TXGBE_MSCC_BUSY; + wr32(hw, TXGBE_MSCC, command); + + /* wait to complete */ + status = po32m(hw, TXGBE_MSCC, + TXGBE_MSCC_BUSY, ~TXGBE_MSCC_BUSY, + TXGBE_MDIO_TIMEOUT, 10); + if (status != 0) { + ERROR_REPORT1(TXGBE_ERROR_POLLING, + "PHY address command did not complete.\n"); + return TXGBE_ERR_PHY; + } + + /* read data from MSCC */ + *phy_data = 0xFFFF & rd32(hw, TXGBE_MSCC); + + return 0; +} + +/** + * txgbe_read_phy_reg - Reads a value from a specified PHY register + * using the SWFW lock - this function is needed in most cases + * @hw: pointer to hardware structure + * @reg_addr: 32 bit 
address of PHY register to read + * @phy_data: Pointer to read data from PHY register + **/ +s32 txgbe_read_phy_reg(struct txgbe_hw *hw, u32 reg_addr, + u32 device_type, u16 *phy_data) +{ + s32 status; + u32 gssr = hw->phy.phy_semaphore_mask; + + DEBUGFUNC("\n"); + + if (0 == TCALL(hw, mac.ops.acquire_swfw_sync, gssr)) { + status = txgbe_read_phy_reg_mdi(hw, reg_addr, device_type, + phy_data); + TCALL(hw, mac.ops.release_swfw_sync, gssr); + } else { + status = TXGBE_ERR_SWFW_SYNC; + } + + return status; +} + +/** + * txgbe_write_phy_reg_mdi - Writes a value to specified PHY register + * without SWFW lock + * @hw: pointer to hardware structure + * @reg_addr: 32 bit PHY register to write + * @device_type: 5 bit device type + * @phy_data: Data to write to the PHY register + **/ +s32 txgbe_write_phy_reg_mdi(struct txgbe_hw *hw, u32 reg_addr, + u32 device_type, u16 phy_data) +{ + u32 command; + s32 status = 0; + + /* setup and write the address cycle command */ + command = TXGBE_MSCA_RA(reg_addr) | + TXGBE_MSCA_PA(hw->phy.addr) | + TXGBE_MSCA_DA(device_type); + wr32(hw, TXGBE_MSCA, command); + + command = phy_data | TXGBE_MSCC_CMD(TXGBE_MSCA_CMD_WRITE) | + TXGBE_MSCC_BUSY; + wr32(hw, TXGBE_MSCC, command); + + /* wait to complete */ + status = po32m(hw, TXGBE_MSCC, + TXGBE_MSCC_BUSY, ~TXGBE_MSCC_BUSY, + TXGBE_MDIO_TIMEOUT, 10); + if (status != 0) { + ERROR_REPORT1(TXGBE_ERROR_POLLING, + "PHY address command did not complete.\n"); + return TXGBE_ERR_PHY; + } + + return 0; +} + +/** + * txgbe_write_phy_reg - Writes a value to specified PHY register + * using SWFW lock- this function is needed in most cases + * @hw: pointer to hardware structure + * @reg_addr: 32 bit PHY register to write + * @device_type: 5 bit device type + * @phy_data: Data to write to the PHY register + **/ +s32 txgbe_write_phy_reg(struct txgbe_hw *hw, u32 reg_addr, + u32 device_type, u16 phy_data) +{ + s32 status; + u32 gssr = hw->phy.phy_semaphore_mask; + + DEBUGFUNC("\n"); + + if (TCALL(hw, 
mac.ops.acquire_swfw_sync, gssr) == 0) { + status = txgbe_write_phy_reg_mdi(hw, reg_addr, device_type, + phy_data); + TCALL(hw, mac.ops.release_swfw_sync, gssr); + } else { + status = TXGBE_ERR_SWFW_SYNC; + } + + return status; +} + +MTD_STATUS txgbe_read_mdio( + MTD_DEV * dev, + MTD_U16 port, + MTD_U16 mmd, + MTD_U16 reg, + MTD_U16 *value) +{ + struct txgbe_hw *hw = (struct txgbe_hw *)(dev->appData); + + if (hw->phy.addr != port) + return MTD_FAIL; + return txgbe_read_phy_reg(hw, reg, mmd, value); +} + +MTD_STATUS txgbe_write_mdio( + MTD_DEV * dev, + MTD_U16 port, + MTD_U16 mmd, + MTD_U16 reg, + MTD_U16 value) +{ + struct txgbe_hw *hw = (struct txgbe_hw *)(dev->appData); + + if (hw->phy.addr != port) + return MTD_FAIL; + + return txgbe_write_phy_reg(hw, reg, mmd, value); +} + +/** + * txgbe_setup_phy_link - Set and restart auto-neg + * @hw: pointer to hardware structure + * + * Restart auto-negotiation and PHY and waits for completion. + **/ +u32 txgbe_setup_phy_link(struct txgbe_hw *hw, u32 speed_set, bool autoneg_wait_to_complete) +{ + u16 speed = MTD_ADV_NONE; + MTD_DEV_PTR devptr = &hw->phy_dev; + MTD_BOOL anDone = MTD_FALSE; + u16 port = hw->phy.addr; + + UNREFERENCED_PARAMETER(speed_set); + DEBUGFUNC("\n"); + + if (!autoneg_wait_to_complete) { + mtdAutonegIsSpeedDuplexResolutionDone(devptr, port, &anDone); + if (anDone) { + mtdGetAutonegSpeedDuplexResolution(devptr, port, &speed); + } + } else { + if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_10GB_FULL) + speed |= MTD_SPEED_10GIG_FD; + if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_1GB_FULL) + speed |= MTD_SPEED_1GIG_FD; + if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_100_FULL) + speed |= MTD_SPEED_100M_FD; + if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_10_FULL) + speed |= MTD_SPEED_10M_FD; + mtdEnableSpeeds(devptr, port, speed, MTD_TRUE); + + /* wait autoneg to be done */ + speed = MTD_ADV_NONE; + } + + switch (speed) { + case MTD_SPEED_10GIG_FD: + return TXGBE_LINK_SPEED_10GB_FULL; + case 
MTD_SPEED_1GIG_FD: + return TXGBE_LINK_SPEED_1GB_FULL; + case MTD_SPEED_100M_FD: + return TXGBE_LINK_SPEED_100_FULL; + case MTD_SPEED_10M_FD: + return TXGBE_LINK_SPEED_10_FULL; + default: + return TXGBE_LINK_SPEED_UNKNOWN; + } + +} + +/** + * txgbe_setup_phy_link_speed - Sets the auto advertised capabilities + * @hw: pointer to hardware structure + * @speed: new link speed + **/ +u32 txgbe_setup_phy_link_speed(struct txgbe_hw *hw, + u32 speed, + bool autoneg_wait_to_complete) +{ + + DEBUGFUNC("\n"); + + /* + * Clear autoneg_advertised and set new values based on input link + * speed. + */ + hw->phy.autoneg_advertised = 0; + + if (speed & TXGBE_LINK_SPEED_10GB_FULL) + hw->phy.autoneg_advertised |= TXGBE_LINK_SPEED_10GB_FULL; + + if (speed & TXGBE_LINK_SPEED_1GB_FULL) + hw->phy.autoneg_advertised |= TXGBE_LINK_SPEED_1GB_FULL; + + if (speed & TXGBE_LINK_SPEED_100_FULL) + hw->phy.autoneg_advertised |= TXGBE_LINK_SPEED_100_FULL; + + if (speed & TXGBE_LINK_SPEED_10_FULL) + hw->phy.autoneg_advertised |= TXGBE_LINK_SPEED_10_FULL; + + /* Setup link based on the new speed settings */ + return txgbe_setup_phy_link(hw, speed, autoneg_wait_to_complete); +} + +/** + * txgbe_get_copper_link_capabilities - Determines link capabilities + * @hw: pointer to hardware structure + * @speed: pointer to link speed + * @autoneg: boolean auto-negotiation value + * + * Determines the supported link capabilities by reading the PHY auto + * negotiation register. 
+ **/ +s32 txgbe_get_copper_link_capabilities(struct txgbe_hw *hw, + u32 *speed, + bool *autoneg) +{ + s32 status; + u16 speed_ability; + + DEBUGFUNC("\n"); + + *speed = 0; + *autoneg = true; + + status = mtdHwXmdioRead(&hw->phy_dev, hw->phy.addr, + TXGBE_MDIO_PMA_PMD_DEV_TYPE, + TXGBE_MDIO_PHY_SPEED_ABILITY, &speed_ability); + + if (status == 0) { + if (speed_ability & TXGBE_MDIO_PHY_SPEED_10G) + *speed |= TXGBE_LINK_SPEED_10GB_FULL; + if (speed_ability & TXGBE_MDIO_PHY_SPEED_1G) + *speed |= TXGBE_LINK_SPEED_1GB_FULL; + if (speed_ability & TXGBE_MDIO_PHY_SPEED_100M) + *speed |= TXGBE_LINK_SPEED_100_FULL; + if (speed_ability & TXGBE_MDIO_PHY_SPEED_10M) + *speed |= TXGBE_LINK_SPEED_10_FULL; + } + + return status; +} + +/** + * txgbe_identify_module - Identifies module type + * @hw: pointer to hardware structure + * + * Determines HW type and calls appropriate function. + **/ +s32 txgbe_identify_module(struct txgbe_hw *hw) +{ + s32 status = TXGBE_ERR_SFP_NOT_PRESENT; + + DEBUGFUNC("\n"); + + switch (TCALL(hw, mac.ops.get_media_type)) { + case txgbe_media_type_fiber: + status = txgbe_identify_sfp_module(hw); + break; + + default: + hw->phy.sfp_type = txgbe_sfp_type_not_present; + status = TXGBE_ERR_SFP_NOT_PRESENT; + break; + } + + return status; +} + +/** + * txgbe_identify_sfp_module - Identifies SFP modules + * @hw: pointer to hardware structure + * + * Searches for and identifies the SFP module and assigns appropriate PHY type. 
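Reviewer note: the capability decode in txgbe_get_copper_link_capabilities() above is a straight bit-for-bit translation of the PMA/PMD speed-ability register into link-speed flags. A standalone sketch of that translation (the bit values below are made up for illustration; the real TXGBE_MDIO_PHY_SPEED_* and TXGBE_LINK_SPEED_* constants live in txgbe_type.h elsewhere in this series):

```c
#include <stdint.h>

/* Illustrative bit values only; the real constants are in txgbe_type.h
 * and may differ. */
#define SPEED_ABILITY_10G   0x0001
#define SPEED_ABILITY_1G    0x0010
#define SPEED_ABILITY_100M  0x0020
#define SPEED_ABILITY_10M   0x0040

#define LINK_SPEED_10G_FULL  0x0080
#define LINK_SPEED_1G_FULL   0x0020
#define LINK_SPEED_100_FULL  0x0008
#define LINK_SPEED_10_FULL   0x0002

/* Mirrors txgbe_get_copper_link_capabilities(): each ability bit read
 * from the PHY sets the matching advertised link-speed flag. */
static uint32_t decode_speed_ability(uint16_t speed_ability)
{
    uint32_t speed = 0;

    if (speed_ability & SPEED_ABILITY_10G)
        speed |= LINK_SPEED_10G_FULL;
    if (speed_ability & SPEED_ABILITY_1G)
        speed |= LINK_SPEED_1G_FULL;
    if (speed_ability & SPEED_ABILITY_100M)
        speed |= LINK_SPEED_100_FULL;
    if (speed_ability & SPEED_ABILITY_10M)
        speed |= LINK_SPEED_10_FULL;
    return speed;
}
```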
+ **/ +s32 txgbe_identify_sfp_module(struct txgbe_hw *hw) +{ + s32 status = TXGBE_ERR_PHY_ADDR_INVALID; + u32 vendor_oui = 0; + enum txgbe_sfp_type stored_sfp_type = hw->phy.sfp_type; + u8 identifier = 0; + u8 comp_codes_1g = 0; + u8 comp_codes_10g = 0; + u8 oui_bytes[3] = {0, 0, 0}; + u8 cable_tech = 0; + u8 cable_spec = 0; + + DEBUGFUNC("\n"); + + if (TCALL(hw, mac.ops.get_media_type) != txgbe_media_type_fiber) { + hw->phy.sfp_type = txgbe_sfp_type_not_present; + status = TXGBE_ERR_SFP_NOT_PRESENT; + goto out; + } + + /* LAN ID is needed for I2C access */ + txgbe_init_i2c(hw); + status = TCALL(hw, phy.ops.read_i2c_eeprom, + TXGBE_SFF_IDENTIFIER, + &identifier); + + if (status != 0) + goto err_read_i2c_eeprom; + + if (identifier != TXGBE_SFF_IDENTIFIER_SFP) { + hw->phy.type = txgbe_phy_sfp_unsupported; + status = TXGBE_ERR_SFP_NOT_SUPPORTED; + } else { + status = TCALL(hw, phy.ops.read_i2c_eeprom, + TXGBE_SFF_1GBE_COMP_CODES, + &comp_codes_1g); + + if (status != 0) + goto err_read_i2c_eeprom; + + status = TCALL(hw, phy.ops.read_i2c_eeprom, + TXGBE_SFF_10GBE_COMP_CODES, + &comp_codes_10g); + + if (status != 0) + goto err_read_i2c_eeprom; + status = TCALL(hw, phy.ops.read_i2c_eeprom, + TXGBE_SFF_CABLE_TECHNOLOGY, + &cable_tech); + + if (status != 0) + goto err_read_i2c_eeprom; + + /* ID Module + * ========= + * 0 SFP_DA_CU + * 1 SFP_SR + * 2 SFP_LR + * 3 SFP_DA_CORE0 + * 4 SFP_DA_CORE1 + * 5 SFP_SR/LR_CORE0 + * 6 SFP_SR/LR_CORE1 + * 7 SFP_act_lmt_DA_CORE0 + * 8 SFP_act_lmt_DA_CORE1 + * 9 SFP_1g_cu_CORE0 + * 10 SFP_1g_cu_CORE1 + * 11 SFP_1g_sx_CORE0 + * 12 SFP_1g_sx_CORE1 + */ + { + if (cable_tech & TXGBE_SFF_DA_PASSIVE_CABLE) { + if (hw->bus.lan_id == 0) + hw->phy.sfp_type = + txgbe_sfp_type_da_cu_core0; + else + hw->phy.sfp_type = + txgbe_sfp_type_da_cu_core1; + } else if (cable_tech & TXGBE_SFF_DA_ACTIVE_CABLE) { + TCALL(hw, phy.ops.read_i2c_eeprom, + TXGBE_SFF_CABLE_SPEC_COMP, + &cable_spec); + if (cable_spec & + TXGBE_SFF_DA_SPEC_ACTIVE_LIMITING) { + if 
(hw->bus.lan_id == 0) + hw->phy.sfp_type = + txgbe_sfp_type_da_act_lmt_core0; + else + hw->phy.sfp_type = + txgbe_sfp_type_da_act_lmt_core1; + } else { + hw->phy.sfp_type = + txgbe_sfp_type_unknown; + } + } else if (comp_codes_10g & + (TXGBE_SFF_10GBASESR_CAPABLE | + TXGBE_SFF_10GBASELR_CAPABLE)) { + if (hw->bus.lan_id == 0) + hw->phy.sfp_type = + txgbe_sfp_type_srlr_core0; + else + hw->phy.sfp_type = + txgbe_sfp_type_srlr_core1; + } else if (comp_codes_1g & TXGBE_SFF_1GBASET_CAPABLE) { + if (hw->bus.lan_id == 0) + hw->phy.sfp_type = + txgbe_sfp_type_1g_cu_core0; + else + hw->phy.sfp_type = + txgbe_sfp_type_1g_cu_core1; + } else if (comp_codes_1g & TXGBE_SFF_1GBASESX_CAPABLE) { + if (hw->bus.lan_id == 0) + hw->phy.sfp_type = + txgbe_sfp_type_1g_sx_core0; + else + hw->phy.sfp_type = + txgbe_sfp_type_1g_sx_core1; + } else if (comp_codes_1g & TXGBE_SFF_1GBASELX_CAPABLE) { + if (hw->bus.lan_id == 0) + hw->phy.sfp_type = + txgbe_sfp_type_1g_lx_core0; + else + hw->phy.sfp_type = + txgbe_sfp_type_1g_lx_core1; + } else { + hw->phy.sfp_type = txgbe_sfp_type_unknown; + } + } + + if (hw->phy.sfp_type != stored_sfp_type) + hw->phy.sfp_setup_needed = true; + + /* Determine if the SFP+ PHY is dual speed or not. 
*/ + hw->phy.multispeed_fiber = false; + if (((comp_codes_1g & TXGBE_SFF_1GBASESX_CAPABLE) && + (comp_codes_10g & TXGBE_SFF_10GBASESR_CAPABLE)) || + ((comp_codes_1g & TXGBE_SFF_1GBASELX_CAPABLE) && + (comp_codes_10g & TXGBE_SFF_10GBASELR_CAPABLE))) + hw->phy.multispeed_fiber = true; + + /* Determine PHY vendor */ + if (hw->phy.type != txgbe_phy_nl) { + hw->phy.id = identifier; + status = TCALL(hw, phy.ops.read_i2c_eeprom, + TXGBE_SFF_VENDOR_OUI_BYTE0, + &oui_bytes[0]); + + if (status != 0) + goto err_read_i2c_eeprom; + + status = TCALL(hw, phy.ops.read_i2c_eeprom, + TXGBE_SFF_VENDOR_OUI_BYTE1, + &oui_bytes[1]); + + if (status != 0) + goto err_read_i2c_eeprom; + + status = TCALL(hw, phy.ops.read_i2c_eeprom, + TXGBE_SFF_VENDOR_OUI_BYTE2, + &oui_bytes[2]); + + if (status != 0) + goto err_read_i2c_eeprom; + + vendor_oui = + ((oui_bytes[0] << TXGBE_SFF_VENDOR_OUI_BYTE0_SHIFT) | + (oui_bytes[1] << TXGBE_SFF_VENDOR_OUI_BYTE1_SHIFT) | + (oui_bytes[2] << TXGBE_SFF_VENDOR_OUI_BYTE2_SHIFT)); + + switch (vendor_oui) { + case TXGBE_SFF_VENDOR_OUI_TYCO: + if (cable_tech & TXGBE_SFF_DA_PASSIVE_CABLE) + hw->phy.type = + txgbe_phy_sfp_passive_tyco; + break; + case TXGBE_SFF_VENDOR_OUI_FTL: + if (cable_tech & TXGBE_SFF_DA_ACTIVE_CABLE) + hw->phy.type = txgbe_phy_sfp_ftl_active; + else + hw->phy.type = txgbe_phy_sfp_ftl; + break; + case TXGBE_SFF_VENDOR_OUI_AVAGO: + hw->phy.type = txgbe_phy_sfp_avago; + break; + case TXGBE_SFF_VENDOR_OUI_INTEL: + hw->phy.type = txgbe_phy_sfp_intel; + break; + default: + if (cable_tech & TXGBE_SFF_DA_PASSIVE_CABLE) + hw->phy.type = + txgbe_phy_sfp_passive_unknown; + else if (cable_tech & TXGBE_SFF_DA_ACTIVE_CABLE) + hw->phy.type = + txgbe_phy_sfp_active_unknown; + else + hw->phy.type = txgbe_phy_sfp_unknown; + break; + } + } + + /* Allow any DA cable vendor */ + if (cable_tech & (TXGBE_SFF_DA_PASSIVE_CABLE | + TXGBE_SFF_DA_ACTIVE_CABLE)) { + status = 0; + goto out; + } + + /* Verify supported 1G SFP modules */ + if (comp_codes_10g == 0 && + 
!(hw->phy.sfp_type == txgbe_sfp_type_1g_cu_core1 || + hw->phy.sfp_type == txgbe_sfp_type_1g_cu_core0 || + hw->phy.sfp_type == txgbe_sfp_type_1g_lx_core0 || + hw->phy.sfp_type == txgbe_sfp_type_1g_lx_core1 || + hw->phy.sfp_type == txgbe_sfp_type_1g_sx_core0 || + hw->phy.sfp_type == txgbe_sfp_type_1g_sx_core1)) { + hw->phy.type = txgbe_phy_sfp_unsupported; + status = TXGBE_ERR_SFP_NOT_SUPPORTED; + goto out; + } + } + +out: + return status; + +err_read_i2c_eeprom: + hw->phy.sfp_type = txgbe_sfp_type_not_present; + if (hw->phy.type != txgbe_phy_nl) { + hw->phy.id = 0; + hw->phy.type = txgbe_phy_unknown; + } + return TXGBE_ERR_SFP_NOT_PRESENT; +} + +s32 txgbe_init_i2c(struct txgbe_hw *hw) +{ + + wr32(hw, TXGBE_I2C_ENABLE, 0); + + wr32(hw, TXGBE_I2C_CON, + (TXGBE_I2C_CON_MASTER_MODE | + TXGBE_I2C_CON_SPEED(1) | + TXGBE_I2C_CON_RESTART_EN | + TXGBE_I2C_CON_SLAVE_DISABLE)); + /* Default address is 0xA0; bit 0 selects read/write */ + wr32(hw, TXGBE_I2C_TAR, TXGBE_I2C_SLAVE_ADDR); + wr32(hw, TXGBE_I2C_SS_SCL_HCNT, 600); + wr32(hw, TXGBE_I2C_SS_SCL_LCNT, 600); + wr32(hw, TXGBE_I2C_RX_TL, 0); /* 1 byte for rx full signal */ + wr32(hw, TXGBE_I2C_TX_TL, 4); + wr32(hw, TXGBE_I2C_SCL_STUCK_TIMEOUT, 0xFFFFFF); + wr32(hw, TXGBE_I2C_SDA_STUCK_TIMEOUT, 0xFFFFFF); + + wr32(hw, TXGBE_I2C_INTR_MASK, 0); + wr32(hw, TXGBE_I2C_ENABLE, 1); + return 0; +} + +s32 txgbe_clear_i2c(struct txgbe_hw *hw) +{ + s32 status = 0; + + /* wait for completion */ + status = po32m(hw, TXGBE_I2C_STATUS, + TXGBE_I2C_STATUS_MST_ACTIVITY, ~TXGBE_I2C_STATUS_MST_ACTIVITY, + TXGBE_I2C_TIMEOUT, 10); + if (status != 0) + goto out; + + wr32(hw, TXGBE_I2C_ENABLE, 0); + +out: + return status; +} + +/** + * txgbe_read_i2c_eeprom - Reads 8 bit EEPROM word over I2C interface + * @hw: pointer to hardware structure + * @byte_offset: EEPROM byte offset to read + * @eeprom_data: value read + * + * Performs byte read operation to SFP module's EEPROM over I2C interface. 
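Reviewer note: the vendor identification in txgbe_identify_sfp_module() above packs the three OUI bytes read from the module EEPROM into the 0x[byte0][byte1][byte2][00] layout that the TXGBE_SFF_VENDOR_OUI_* constants use. A standalone sketch of that packing, with the shift values and two vendor OUIs copied from txgbe_phy.h in this patch:

```c
#include <stdint.h>

/* Shift values and vendor OUIs as defined in txgbe_phy.h */
#define OUI_BYTE0_SHIFT 24
#define OUI_BYTE1_SHIFT 16
#define OUI_BYTE2_SHIFT 8
#define VENDOR_OUI_FTL   0x00906500
#define VENDOR_OUI_AVAGO 0x00176A00

/* Same packing as txgbe_identify_sfp_module(): the three OUI bytes read
 * from SFF-8472 offsets 0x25-0x27 form the value 0x[b0][b1][b2][00]. */
static uint32_t pack_vendor_oui(uint8_t b0, uint8_t b1, uint8_t b2)
{
    return ((uint32_t)b0 << OUI_BYTE0_SHIFT) |
           ((uint32_t)b1 << OUI_BYTE1_SHIFT) |
           ((uint32_t)b2 << OUI_BYTE2_SHIFT);
}
```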
+ **/ +s32 txgbe_read_i2c_eeprom(struct txgbe_hw *hw, u8 byte_offset, + u8 *eeprom_data) +{ + DEBUGFUNC("\n"); + + return TCALL(hw, phy.ops.read_i2c_byte, byte_offset, + TXGBE_I2C_EEPROM_DEV_ADDR, + eeprom_data); +} + +/** + * txgbe_read_i2c_sff8472 - Reads 8 bit word over I2C interface + * @hw: pointer to hardware structure + * @byte_offset: byte offset at address 0xA2 + * @sff8472_data: value read + * + * Performs byte read operation to SFP module's SFF-8472 data over I2C + **/ +s32 txgbe_read_i2c_sff8472(struct txgbe_hw *hw, u8 byte_offset, + u8 *sff8472_data) +{ + return TCALL(hw, phy.ops.read_i2c_byte, byte_offset, + TXGBE_I2C_EEPROM_DEV_ADDR2, + sff8472_data); +} + +/** + * txgbe_write_i2c_eeprom - Writes 8 bit EEPROM word over I2C interface + * @hw: pointer to hardware structure + * @byte_offset: EEPROM byte offset to write + * @eeprom_data: value to write + * + * Performs byte write operation to SFP module's EEPROM over I2C interface. + **/ +s32 txgbe_write_i2c_eeprom(struct txgbe_hw *hw, u8 byte_offset, + u8 eeprom_data) +{ + DEBUGFUNC("\n"); + + return TCALL(hw, phy.ops.write_i2c_byte, byte_offset, + TXGBE_I2C_EEPROM_DEV_ADDR, + eeprom_data); +} + +/** + * txgbe_read_i2c_byte_int - Reads 8 bit word over I2C + * @hw: pointer to hardware structure + * @byte_offset: byte offset to read + * @dev_addr: device address + * @data: value read + * @lock: true to take and release the semaphore + * + * Performs byte read operation to SFP module's EEPROM over I2C interface at + * a specified device address. 
+ **/ +STATIC s32 txgbe_read_i2c_byte_int(struct txgbe_hw *hw, u8 byte_offset, + u8 dev_addr, u8 *data, bool lock) +{ + s32 status = 0; + u32 swfw_mask = hw->phy.phy_semaphore_mask; + + UNREFERENCED_PARAMETER(dev_addr); + + if (lock && 0 != TCALL(hw, mac.ops.acquire_swfw_sync, swfw_mask)) + return TXGBE_ERR_SWFW_SYNC; + + /* wait tx empty */ + status = po32m(hw, TXGBE_I2C_RAW_INTR_STAT, + TXGBE_I2C_INTR_STAT_TX_EMPTY, TXGBE_I2C_INTR_STAT_TX_EMPTY, + TXGBE_I2C_TIMEOUT, 10); + if (status != 0) + goto out; + + /* read data */ + wr32(hw, TXGBE_I2C_DATA_CMD, + byte_offset | TXGBE_I2C_DATA_CMD_STOP); + wr32(hw, TXGBE_I2C_DATA_CMD, TXGBE_I2C_DATA_CMD_READ); + + /* wait for read complete */ + status = po32m(hw, TXGBE_I2C_RAW_INTR_STAT, + TXGBE_I2C_INTR_STAT_RX_FULL, TXGBE_I2C_INTR_STAT_RX_FULL, + TXGBE_I2C_TIMEOUT, 10); + if (status != 0) + goto out; + + *data = 0xFF & rd32(hw, TXGBE_I2C_DATA_CMD); + +out: + if (lock) + TCALL(hw, mac.ops.release_swfw_sync, swfw_mask); + return status; +} + +/** + * txgbe_switch_i2c_slave_addr - Switch I2C slave address + * @hw: pointer to hardware structure + * @dev_addr: slave addr to switch + * + **/ +s32 txgbe_switch_i2c_slave_addr(struct txgbe_hw *hw, u8 dev_addr) +{ + wr32(hw, TXGBE_I2C_ENABLE, 0); + wr32(hw, TXGBE_I2C_TAR, dev_addr >> 1); + wr32(hw, TXGBE_I2C_ENABLE, 1); + return 0; +} + + +/** + * txgbe_read_i2c_byte - Reads 8 bit word over I2C + * @hw: pointer to hardware structure + * @byte_offset: byte offset to read + * @data: value read + * + * Performs byte read operation to SFP module's EEPROM over I2C interface at + * a specified device address. 
+ **/ +s32 txgbe_read_i2c_byte(struct txgbe_hw *hw, u8 byte_offset, + u8 dev_addr, u8 *data) +{ + txgbe_switch_i2c_slave_addr(hw, dev_addr); + + return txgbe_read_i2c_byte_int(hw, byte_offset, dev_addr, + data, true); +} + +/** + * txgbe_write_i2c_byte_int - Writes 8 bit word over I2C + * @hw: pointer to hardware structure + * @byte_offset: byte offset to write + * @data: value to write + * @lock: true if to take and release semaphore + * + * Performs byte write operation to SFP module's EEPROM over I2C interface at + * a specified device address. + **/ +STATIC s32 txgbe_write_i2c_byte_int(struct txgbe_hw *hw, u8 byte_offset, + u8 dev_addr, u8 data, bool lock) +{ + s32 status = 0; + u32 swfw_mask = hw->phy.phy_semaphore_mask; + + UNREFERENCED_PARAMETER(dev_addr); + + if (lock && 0 != TCALL(hw, mac.ops.acquire_swfw_sync, swfw_mask)) + return TXGBE_ERR_SWFW_SYNC; + + /* wait tx empty */ + status = po32m(hw, TXGBE_I2C_RAW_INTR_STAT, + TXGBE_I2C_INTR_STAT_TX_EMPTY, TXGBE_I2C_INTR_STAT_TX_EMPTY, + TXGBE_I2C_TIMEOUT, 10); + if (status != 0) + goto out; + + wr32(hw, TXGBE_I2C_DATA_CMD, + byte_offset | TXGBE_I2C_DATA_CMD_STOP); + wr32(hw, TXGBE_I2C_DATA_CMD, + data | TXGBE_I2C_DATA_CMD_WRITE); + + /* wait for write complete */ + status = po32m(hw, TXGBE_I2C_RAW_INTR_STAT, + TXGBE_I2C_INTR_STAT_RX_FULL, TXGBE_I2C_INTR_STAT_RX_FULL, + TXGBE_I2C_TIMEOUT, 10); + +out: + if (lock) + TCALL(hw, mac.ops.release_swfw_sync, swfw_mask); + + return status; +} + +/** + * txgbe_write_i2c_byte - Writes 8 bit word over I2C + * @hw: pointer to hardware structure + * @byte_offset: byte offset to write + * @data: value to write + * + * Performs byte write operation to SFP module's EEPROM over I2C interface at + * a specified device address. + **/ +s32 txgbe_write_i2c_byte(struct txgbe_hw *hw, u8 byte_offset, + u8 dev_addr, u8 data) +{ + return txgbe_write_i2c_byte_int(hw, byte_offset, dev_addr, + data, true); +} + +/** + * txgbe_tn_check_overtemp - Checks if an overtemp occurred. 
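Reviewer note on txgbe_switch_i2c_slave_addr() above: it programs TXGBE_I2C_TAR with dev_addr >> 1 because the I2C controller expects the 7-bit slave address, while the SFF convention (and the TXGBE_I2C_EEPROM_DEV_ADDR* constants) quote the 8-bit address with the read/write flag in bit 0. A minimal sketch of that conversion:

```c
#include <stdint.h>

/* 8-bit SFF device addresses used by the driver (R/W flag in bit 0) */
#define I2C_EEPROM_DEV_ADDR  0xA0   /* SFP EEPROM, SFF-8472 bank A0h */
#define I2C_EEPROM_DEV_ADDR2 0xA2   /* diagnostics, SFF-8472 bank A2h */

/* Same conversion txgbe_switch_i2c_slave_addr() applies before writing
 * the target-address register: drop the R/W bit to get the 7-bit
 * slave address the controller wants. */
static uint8_t i2c_target_addr(uint8_t dev_addr)
{
    return dev_addr >> 1;
}
```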
+ * @hw: pointer to hardware structure + * + * Checks if the LASI temp alarm status was triggered due to overtemp + **/ +s32 txgbe_tn_check_overtemp(struct txgbe_hw *hw) +{ + s32 status = 0; + u32 ts_state; + + DEBUGFUNC("\n"); + + /* Check that the LASI temp alarm status was triggered */ + ts_state = rd32(hw, TXGBE_TS_ALARM_ST); + + if (ts_state & TXGBE_TS_ALARM_ST_DALARM) + status = TXGBE_ERR_UNDERTEMP; + else if (ts_state & TXGBE_TS_ALARM_ST_ALARM) + status = TXGBE_ERR_OVERTEMP; + + return status; +} + +s32 txgbe_init_external_phy(struct txgbe_hw *hw) +{ + s32 status = 0; + + MTD_DEV_PTR devptr = &(hw->phy_dev); + + hw->phy.addr = 0; + + devptr->appData = hw; + status = mtdLoadDriver(txgbe_read_mdio, + txgbe_write_mdio, + MTD_FALSE, + NULL, + NULL, + NULL, + NULL, + hw->phy.addr, + devptr); + if (status != 0) { + ERROR_REPORT1(TXGBE_ERROR_INVALID_STATE, + "External PHY initialization failed.\n"); + return TXGBE_ERR_PHY; + } + + return status; +} + +s32 txgbe_set_phy_pause_advertisement(struct txgbe_hw *hw, u32 pause_bit) +{ + return mtdSetPauseAdvertisement(&hw->phy_dev, hw->phy.addr, + (pause_bit>>10)&0x3, MTD_FALSE); +} + +s32 txgbe_get_phy_advertised_pause(struct txgbe_hw *hw, u8 *pause_bit) +{ + u16 value; + s32 status = 0; + + status = mtdHwXmdioRead(&hw->phy_dev, hw->phy.addr, + TXGBE_MDIO_AUTO_NEG_DEV_TYPE, + TXGBE_MDIO_AUTO_NEG_ADVT, &value); + *pause_bit = (u8)((value>>10)&0x3); + return status; +} + +s32 txgbe_get_lp_advertised_pause(struct txgbe_hw *hw, u8 *pause_bit) +{ + return mtdGetLPAdvertisedPause(&hw->phy_dev, hw->phy.addr, pause_bit); +} diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe_phy.h b/drivers/net/ethernet/netswift/txgbe/txgbe_phy.h new file mode 100644 index 000000000000..f033b43cf4fe --- /dev/null +++ b/drivers/net/ethernet/netswift/txgbe/txgbe_phy.h @@ -0,0 +1,190 @@ +/* + * WangXun 10 Gigabit PCI Express Linux driver + * Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd. 
+ * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * The full GNU General Public License is included in this distribution in + * the file called "COPYING". + * + * based on ixgbe_phy.h, Copyright(c) 1999 - 2017 Intel Corporation. + * Contact Information: + * Linux NICS linux.nics@intel.com + * e1000-devel Mailing List e1000-devel@lists.sourceforge.net + * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497 + */ + + +#ifndef _TXGBE_PHY_H_ +#define _TXGBE_PHY_H_ + +#include "txgbe.h" + +#define TXGBE_I2C_EEPROM_DEV_ADDR 0xA0 +#define TXGBE_I2C_EEPROM_DEV_ADDR2 0xA2 +#define TXGBE_I2C_EEPROM_BANK_LEN 0xFF + +/* EEPROM byte offsets */ +#define TXGBE_SFF_IDENTIFIER 0x0 +#define TXGBE_SFF_IDENTIFIER_SFP 0x3 +#define TXGBE_SFF_VENDOR_OUI_BYTE0 0x25 +#define TXGBE_SFF_VENDOR_OUI_BYTE1 0x26 +#define TXGBE_SFF_VENDOR_OUI_BYTE2 0x27 +#define TXGBE_SFF_1GBE_COMP_CODES 0x6 +#define TXGBE_SFF_10GBE_COMP_CODES 0x3 +#define TXGBE_SFF_CABLE_TECHNOLOGY 0x8 +#define TXGBE_SFF_CABLE_SPEC_COMP 0x3C +#define TXGBE_SFF_SFF_8472_SWAP 0x5C +#define TXGBE_SFF_SFF_8472_COMP 0x5E +#define TXGBE_SFF_SFF_8472_OSCB 0x6E +#define TXGBE_SFF_SFF_8472_ESCB 0x76 +#define TXGBE_SFF_IDENTIFIER_QSFP_PLUS 0xD +#define TXGBE_SFF_QSFP_VENDOR_OUI_BYTE0 0xA5 +#define TXGBE_SFF_QSFP_VENDOR_OUI_BYTE1 0xA6 +#define TXGBE_SFF_QSFP_VENDOR_OUI_BYTE2 0xA7 +#define TXGBE_SFF_QSFP_CONNECTOR 0x82 +#define TXGBE_SFF_QSFP_10GBE_COMP 0x83 +#define TXGBE_SFF_QSFP_1GBE_COMP 0x86 +#define TXGBE_SFF_QSFP_CABLE_LENGTH 0x92 +#define TXGBE_SFF_QSFP_DEVICE_TECH 0x93 + +/* Bitmasks */ +#define 
TXGBE_SFF_DA_PASSIVE_CABLE 0x4 +#define TXGBE_SFF_DA_ACTIVE_CABLE 0x8 +#define TXGBE_SFF_DA_SPEC_ACTIVE_LIMITING 0x4 +#define TXGBE_SFF_1GBASESX_CAPABLE 0x1 +#define TXGBE_SFF_1GBASELX_CAPABLE 0x2 +#define TXGBE_SFF_1GBASET_CAPABLE 0x8 +#define TXGBE_SFF_10GBASESR_CAPABLE 0x10 +#define TXGBE_SFF_10GBASELR_CAPABLE 0x20 +#define TXGBE_SFF_SOFT_RS_SELECT_MASK 0x8 +#define TXGBE_SFF_SOFT_RS_SELECT_10G 0x8 +#define TXGBE_SFF_SOFT_RS_SELECT_1G 0x0 +#define TXGBE_SFF_ADDRESSING_MODE 0x4 +#define TXGBE_SFF_QSFP_DA_ACTIVE_CABLE 0x1 +#define TXGBE_SFF_QSFP_DA_PASSIVE_CABLE 0x8 +#define TXGBE_SFF_QSFP_CONNECTOR_NOT_SEPARABLE 0x23 +#define TXGBE_SFF_QSFP_TRANSMITER_850NM_VCSEL 0x0 +#define TXGBE_I2C_EEPROM_READ_MASK 0x100 +#define TXGBE_I2C_EEPROM_STATUS_MASK 0x3 +#define TXGBE_I2C_EEPROM_STATUS_NO_OPERATION 0x0 +#define TXGBE_I2C_EEPROM_STATUS_PASS 0x1 +#define TXGBE_I2C_EEPROM_STATUS_FAIL 0x2 +#define TXGBE_I2C_EEPROM_STATUS_IN_PROGRESS 0x3 + +#define TXGBE_CS4227 0xBE /* CS4227 address */ +#define TXGBE_CS4227_GLOBAL_ID_LSB 0 +#define TXGBE_CS4227_SCRATCH 2 +#define TXGBE_CS4227_GLOBAL_ID_VALUE 0x03E5 +#define TXGBE_CS4227_SCRATCH_VALUE 0x5aa5 +#define TXGBE_CS4227_RETRIES 5 +#define TXGBE_CS4227_LINE_SPARE22_MSB 0x12AD /* Reg to program speed */ +#define TXGBE_CS4227_LINE_SPARE24_LSB 0x12B0 /* Reg to program EDC */ +#define TXGBE_CS4227_HOST_SPARE22_MSB 0x1AAD /* Reg to program speed */ +#define TXGBE_CS4227_HOST_SPARE24_LSB 0x1AB0 /* Reg to program EDC */ +#define TXGBE_CS4227_EDC_MODE_CX1 0x0002 +#define TXGBE_CS4227_EDC_MODE_SR 0x0004 +#define TXGBE_CS4227_RESET_HOLD 500 /* microseconds */ +#define TXGBE_CS4227_RESET_DELAY 500 /* milliseconds */ +#define TXGBE_CS4227_CHECK_DELAY 30 /* milliseconds */ +#define TXGBE_PE 0xE0 /* Port expander address */ +#define TXGBE_PE_OUTPUT 1 /* Output register offset */ +#define TXGBE_PE_CONFIG 3 /* Config register offset */ +#define TXGBE_PE_BIT1 (1 << 1) + +/* Flow control defines */ +#define TXGBE_TAF_SYM_PAUSE (0x1) +#define 
TXGBE_TAF_ASM_PAUSE (0x2) + +/* Bit-shift macros */ +#define TXGBE_SFF_VENDOR_OUI_BYTE0_SHIFT 24 +#define TXGBE_SFF_VENDOR_OUI_BYTE1_SHIFT 16 +#define TXGBE_SFF_VENDOR_OUI_BYTE2_SHIFT 8 + +/* Vendor OUIs: format of OUI is 0x[byte0][byte1][byte2][00] */ +#define TXGBE_SFF_VENDOR_OUI_TYCO 0x00407600 +#define TXGBE_SFF_VENDOR_OUI_FTL 0x00906500 +#define TXGBE_SFF_VENDOR_OUI_AVAGO 0x00176A00 +#define TXGBE_SFF_VENDOR_OUI_INTEL 0x001B2100 + +/* I2C SDA and SCL timing parameters for standard mode */ +#define TXGBE_I2C_T_HD_STA 4 +#define TXGBE_I2C_T_LOW 5 +#define TXGBE_I2C_T_HIGH 4 +#define TXGBE_I2C_T_SU_STA 5 +#define TXGBE_I2C_T_HD_DATA 5 +#define TXGBE_I2C_T_SU_DATA 1 +#define TXGBE_I2C_T_RISE 1 +#define TXGBE_I2C_T_FALL 1 +#define TXGBE_I2C_T_SU_STO 4 +#define TXGBE_I2C_T_BUF 5 + +/* SFP+ SFF-8472 Compliance */ +#define TXGBE_SFF_SFF_8472_UNSUP 0x00 + + +enum txgbe_phy_type txgbe_get_phy_type_from_id(struct txgbe_hw *hw); +s32 txgbe_get_phy_id(struct txgbe_hw *hw); +s32 txgbe_reset_phy(struct txgbe_hw *hw); +s32 txgbe_read_phy_reg_mdi(struct txgbe_hw *hw, u32 reg_addr, u32 device_type, + u16 *phy_data); +s32 txgbe_write_phy_reg_mdi(struct txgbe_hw *hw, u32 reg_addr, u32 device_type, + u16 phy_data); +s32 txgbe_read_phy_reg(struct txgbe_hw *hw, u32 reg_addr, + u32 device_type, u16 *phy_data); +s32 txgbe_write_phy_reg(struct txgbe_hw *hw, u32 reg_addr, + u32 device_type, u16 phy_data); +u32 txgbe_setup_phy_link(struct txgbe_hw *hw, u32 speed_set, bool autoneg_wait_to_complete); +u32 txgbe_setup_phy_link_speed(struct txgbe_hw *hw, + u32 speed, + bool autoneg_wait_to_complete); +s32 txgbe_get_copper_link_capabilities(struct txgbe_hw *hw, + u32 *speed, + bool *autoneg); +s32 txgbe_check_reset_blocked(struct txgbe_hw *hw); + +s32 txgbe_identify_module(struct txgbe_hw *hw); +s32 txgbe_identify_sfp_module(struct txgbe_hw *hw); +s32 txgbe_tn_check_overtemp(struct txgbe_hw *hw); +s32 txgbe_init_i2c(struct txgbe_hw *hw); +s32 txgbe_clear_i2c(struct txgbe_hw *hw); +s32 
txgbe_switch_i2c_slave_addr(struct txgbe_hw *hw, u8 dev_addr); +s32 txgbe_read_i2c_byte(struct txgbe_hw *hw, u8 byte_offset, + u8 dev_addr, u8 *data); + +s32 txgbe_write_i2c_byte(struct txgbe_hw *hw, u8 byte_offset, + u8 dev_addr, u8 data); +s32 txgbe_read_i2c_eeprom(struct txgbe_hw *hw, u8 byte_offset, + u8 *eeprom_data); +s32 txgbe_write_i2c_eeprom(struct txgbe_hw *hw, u8 byte_offset, + u8 eeprom_data); +s32 txgbe_read_i2c_sff8472(struct txgbe_hw *hw, u8 byte_offset, + u8 *sff8472_data); +s32 txgbe_init_external_phy(struct txgbe_hw *hw); +s32 txgbe_set_phy_pause_advertisement(struct txgbe_hw *hw, u32 pause_bit); +s32 txgbe_get_phy_advertised_pause(struct txgbe_hw *hw, u8 *pause_bit); +s32 txgbe_get_lp_advertised_pause(struct txgbe_hw *hw, u8 *pause_bit); + +MTD_STATUS txgbe_read_mdio( + MTD_DEV * dev, + MTD_U16 port, + MTD_U16 mmd, + MTD_U16 reg, + MTD_U16 *value); + +MTD_STATUS txgbe_write_mdio( + MTD_DEV * dev, + MTD_U16 port, + MTD_U16 mmd, + MTD_U16 reg, + MTD_U16 value); + + +#endif /* _TXGBE_PHY_H_ */ diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe_ptp.c b/drivers/net/ethernet/netswift/txgbe/txgbe_ptp.c new file mode 100644 index 000000000000..4a614a550e47 --- /dev/null +++ b/drivers/net/ethernet/netswift/txgbe/txgbe_ptp.c @@ -0,0 +1,884 @@ +/* + * WangXun 10 Gigabit PCI Express Linux driver + * Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * The full GNU General Public License is included in this distribution in + * the file called "COPYING". 
+ * + * based on ixgbe_ptp.c, Copyright(c) 1999 - 2017 Intel Corporation. + * Contact Information: + * Linux NICS linux.nics@intel.com + * e1000-devel Mailing List e1000-devel@lists.sourceforge.net + * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497 + */ + + +#include "txgbe.h" +#include <linux/ptp_classify.h> + +/* + * SYSTIME is defined by a fixed point system which allows the user to + * define the scale counter increment value at every level change of + * the oscillator driving SYSTIME value. The time unit is determined by + * the clock frequency of the oscillator and TIMINCA register. + * The cyclecounter and timecounter structures are used to convert + * the scale counter into nanoseconds. SYSTIME registers need to be converted + * to ns values by use of only a right shift. + * The following math determines the largest incvalue that will fit into + * the available bits in the TIMINCA register: + * Period * [ 2 ^ ( MaxWidth - PeriodWidth ) ] + * PeriodWidth: Number of bits to store the clock period + * MaxWidth: The maximum width value of the TIMINCA register + * Period: The clock period for the oscillator, which changes based on the link + * speed: + * At 10Gb link or no link, the period is 6.4 ns. + * At 1Gb link, the period is multiplied by 10. (64ns) + * At 100Mb link, the period is multiplied by 100. (640ns) + * round(): discard the fractional portion of the calculation + * + * The calculated value allows us to right shift the SYSTIME register + * value in order to quickly convert it into a nanosecond clock, + * while allowing for the maximum possible adjustment value. 
+ * + * LinkSpeed ClockFreq ClockPeriod TIMINCA:IV + * 10000Mbps 156.25MHz 6.4*10^-9 0xCCCCCC(0xFFFFF/ns) + * 1000 Mbps 62.5 MHz 16 *10^-9 0x800000(0x7FFFF/ns) + * 100 Mbps 6.25 MHz 160*10^-9 0xA00000(0xFFFF/ns) + * 10 Mbps 0.625 MHz 1600*10^-9 0xC7F380(0xFFF/ns) + * FPGA 31.25 MHz 32 *10^-9 0x800000(0x3FFFF/ns) + * + * These diagrams are only for the 10Gb link period + * + * +--------------+ +--------------+ + * | 32 | | 8 | 3 | 20 | + * *--------------+ +--------------+ + * ________ 43 bits ______/ fract + * + * The 43 bit SYSTIME overflows every + * 2^43 * 10^-9 / 3600 = 2.4 hours + */ +#define TXGBE_INCVAL_10GB 0xCCCCCC +#define TXGBE_INCVAL_1GB 0x800000 +#define TXGBE_INCVAL_100 0xA00000 +#define TXGBE_INCVAL_10 0xC7F380 +#define TXGBE_INCVAL_FPGA 0x800000 + +#define TXGBE_INCVAL_SHIFT_10GB 20 +#define TXGBE_INCVAL_SHIFT_1GB 18 +#define TXGBE_INCVAL_SHIFT_100 15 +#define TXGBE_INCVAL_SHIFT_10 12 +#define TXGBE_INCVAL_SHIFT_FPGA 17 + +#define TXGBE_OVERFLOW_PERIOD (HZ * 30) +#define TXGBE_PTP_TX_TIMEOUT (HZ) + +/** + * txgbe_ptp_read - read raw cycle counter (to be used by time counter) + * @hw_cc: the cyclecounter structure + * + * this function reads the cyclecounter registers and is called by the + * cyclecounter structure used to construct a ns counter from the + * arbitrary fixed point registers + */ +static u64 txgbe_ptp_read(const struct cyclecounter *hw_cc) +{ + struct txgbe_adapter *adapter = + container_of(hw_cc, struct txgbe_adapter, hw_cc); + struct txgbe_hw *hw = &adapter->hw; + u64 stamp = 0; + + stamp |= (u64)rd32(hw, TXGBE_TSC_1588_SYSTIML); + stamp |= (u64)rd32(hw, TXGBE_TSC_1588_SYSTIMH) << 32; + + return stamp; +} + +/** + * txgbe_ptp_convert_to_hwtstamp - convert register value to hw timestamp + * @adapter: private adapter structure + * @hwtstamp: stack timestamp structure + * @systim: unsigned 64bit system time value + * + * We need to convert the adapter's RX/TXSTMP registers into a hwtstamp value + * which can be used by the stack's ptp 
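Reviewer note: the "2.4 hours" wrap claim in the comment block above is easy to check: a 43-bit counter ticking in nanoseconds wraps after 2^43 * 10^-9 s, roughly 8796 s, which is why the driver rechecks for overflow every TXGBE_OVERFLOW_PERIOD (30 s). A minimal sketch of the arithmetic, outside the driver:

```c
#include <stdint.h>

/* Reproduces the overflow math in the SYSTIME comment: a counter of
 * the given width, counting nanoseconds, wraps after 2^bits ns. */
static double systime_overflow_hours(unsigned bits)
{
    uint64_t wrap_ns = (uint64_t)1 << bits;   /* counter range in ns */
    return (double)wrap_ns * 1e-9 / 3600.0;   /* ns -> hours */
}
```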
functions. + * + * The lock is used to protect consistency of the cyclecounter and the SYSTIME + * registers. However, it does not need to protect against the Rx or Tx + * timestamp registers, as there can't be a new timestamp until the old one is + * unlatched by reading. + * + * In addition to the timestamp in hardware, some controllers need a software + * overflow cyclecounter, and this function takes this into account as well. + **/ +static void txgbe_ptp_convert_to_hwtstamp(struct txgbe_adapter *adapter, + struct skb_shared_hwtstamps *hwtstamp, + u64 timestamp) +{ + unsigned long flags; + u64 ns; + + memset(hwtstamp, 0, sizeof(*hwtstamp)); + + spin_lock_irqsave(&adapter->tmreg_lock, flags); + ns = timecounter_cyc2time(&adapter->hw_tc, timestamp); + spin_unlock_irqrestore(&adapter->tmreg_lock, flags); + + hwtstamp->hwtstamp = ns_to_ktime(ns); +} + +/** + * txgbe_ptp_adjfreq + * @ptp: the ptp clock structure + * @ppb: parts per billion adjustment from base + * + * adjust the frequency of the ptp cycle counter by the + * indicated ppb from the base frequency. + */ +static int txgbe_ptp_adjfreq(struct ptp_clock_info *ptp, s32 ppb) +{ + struct txgbe_adapter *adapter = + container_of(ptp, struct txgbe_adapter, ptp_caps); + struct txgbe_hw *hw = &adapter->hw; + u64 freq, incval; + u32 diff; + int neg_adj = 0; + + if (ppb < 0) { + neg_adj = 1; + ppb = -ppb; + } + + smp_mb(); + incval = READ_ONCE(adapter->base_incval); + + freq = incval; + freq *= ppb; + diff = div_u64(freq, 1000000000ULL); + + incval = neg_adj ? (incval - diff) : (incval + diff); + + if (incval > TXGBE_TSC_1588_INC_IV(~0)) + e_dev_warn("PTP ppb adjusted SYSTIME rate overflowed!\n"); + wr32(hw, TXGBE_TSC_1588_INC, + TXGBE_TSC_1588_INC_IVP(incval, 2)); + + return 0; +} + + +/** + * txgbe_ptp_adjtime + * @ptp: the ptp clock structure + * @delta: offset to adjust the cycle counter by ns + * + * adjust the timer by resetting the timecounter structure. 
+ */ +static int txgbe_ptp_adjtime(struct ptp_clock_info *ptp, + s64 delta) +{ + struct txgbe_adapter *adapter = + container_of(ptp, struct txgbe_adapter, ptp_caps); + unsigned long flags; + + spin_lock_irqsave(&adapter->tmreg_lock, flags); + timecounter_adjtime(&adapter->hw_tc, delta); + spin_unlock_irqrestore(&adapter->tmreg_lock, flags); + + return 0; +} + +/** + * txgbe_ptp_gettime64 + * @ptp: the ptp clock structure + * @ts: timespec64 structure to hold the current time value + * + * read the timecounter and return the correct value on ns, + * after converting it into a struct timespec64. + */ +static int txgbe_ptp_gettime64(struct ptp_clock_info *ptp, + struct timespec64 *ts) +{ + struct txgbe_adapter *adapter = + container_of(ptp, struct txgbe_adapter, ptp_caps); + unsigned long flags; + u64 ns; + + spin_lock_irqsave(&adapter->tmreg_lock, flags); + ns = timecounter_read(&adapter->hw_tc); + spin_unlock_irqrestore(&adapter->tmreg_lock, flags); + + *ts = ns_to_timespec64(ns); + + return 0; +} + +/** + * txgbe_ptp_settime64 + * @ptp: the ptp clock structure + * @ts: the timespec64 containing the new time for the cycle counter + * + * reset the timecounter to use a new base value instead of the kernel + * wall timer value. + */ +static int txgbe_ptp_settime64(struct ptp_clock_info *ptp, + const struct timespec64 *ts) +{ + struct txgbe_adapter *adapter = + container_of(ptp, struct txgbe_adapter, ptp_caps); + u64 ns; + unsigned long flags; + + ns = timespec64_to_ns(ts); + + /* reset the timecounter */ + spin_lock_irqsave(&adapter->tmreg_lock, flags); + timecounter_init(&adapter->hw_tc, &adapter->hw_cc, ns); + spin_unlock_irqrestore(&adapter->tmreg_lock, flags); + + return 0; +} + +/** + * txgbe_ptp_feature_enable + * @ptp: the ptp clock structure + * @rq: the requested feature to change + * @on: whether to enable or disable the feature + * + * enable (or disable) ancillary features of the phc subsystem. 
+ * PPS is not supported on this hardware, so all requests are rejected + */ +static int txgbe_ptp_feature_enable(struct ptp_clock_info *ptp, + struct ptp_clock_request *rq, int on) +{ + return -ENOTSUPP; +} + +/** + * txgbe_ptp_check_pps_event + * @adapter: the private adapter structure + * + * This function is called by the interrupt routine when checking for + * interrupts. It will check and handle a pps event. + */ +void txgbe_ptp_check_pps_event(struct txgbe_adapter *adapter) +{ + struct ptp_clock_event event; + + event.type = PTP_CLOCK_PPS; + + /* this check is necessary in case the interrupt was enabled via some + * alternative means (ex. debug_fs). Better to check here than + * everywhere that calls this function. + */ + if (!adapter->ptp_clock) + return; + + /* we don't config PPS on SDP yet, so just return. + * ptp_clock_event(adapter->ptp_clock, &event); + */ +} + +/** + * txgbe_ptp_overflow_check - watchdog task to detect SYSTIME overflow + * @adapter: private adapter struct + * + * this watchdog task periodically reads the timecounter + * in order to detect when the system time registers wrap + * around. This needs to be run approximately twice a minute for the fastest + * overflowing hardware. We run it for all hardware since it shouldn't have a + * large impact. + */ +void txgbe_ptp_overflow_check(struct txgbe_adapter *adapter) +{ + bool timeout = time_is_before_jiffies(adapter->last_overflow_check + + TXGBE_OVERFLOW_PERIOD); + struct timespec64 ts; + + if (timeout) { + txgbe_ptp_gettime64(&adapter->ptp_caps, &ts); + adapter->last_overflow_check = jiffies; + } +} + +/** + * txgbe_ptp_rx_hang - detect error case when Rx timestamp registers latched + * @adapter: private network adapter structure + * + * this watchdog task is scheduled to detect error case where hardware has + * dropped an Rx packet that was timestamped when the ring is full.
The + * particular error is rare but leaves the device in a state unable to timestamp + * any future packets. + */ +void txgbe_ptp_rx_hang(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + struct txgbe_ring *rx_ring; + u32 tsyncrxctl = rd32(hw, TXGBE_PSR_1588_CTL); + unsigned long rx_event; + int n; + + /* if we don't have a valid timestamp in the registers, just update the + * timeout counter and exit + */ + if (!(tsyncrxctl & TXGBE_PSR_1588_CTL_VALID)) { + adapter->last_rx_ptp_check = jiffies; + return; + } + + /* determine the most recent watchdog or rx_timestamp event */ + rx_event = adapter->last_rx_ptp_check; + for (n = 0; n < adapter->num_rx_queues; n++) { + rx_ring = adapter->rx_ring[n]; + if (time_after(rx_ring->last_rx_timestamp, rx_event)) + rx_event = rx_ring->last_rx_timestamp; + } + + /* only need to read the high RXSTMP register to clear the lock */ + if (time_is_before_jiffies(rx_event + 5*HZ)) { + rd32(hw, TXGBE_PSR_1588_STMPH); + adapter->last_rx_ptp_check = jiffies; + + adapter->rx_hwtstamp_cleared++; + e_warn(drv, "clearing RX Timestamp hang"); + } +} + +/** + * txgbe_ptp_clear_tx_timestamp - utility function to clear Tx timestamp state + * @adapter: the private adapter structure + * + * This function should be called whenever the state related to a Tx timestamp + * needs to be cleared. This helps ensure that all related bits are reset for + * the next Tx timestamp event. 
+ */ +static void txgbe_ptp_clear_tx_timestamp(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + + rd32(hw, TXGBE_TSC_1588_STMPH); + if (adapter->ptp_tx_skb) { + dev_kfree_skb_any(adapter->ptp_tx_skb); + adapter->ptp_tx_skb = NULL; + } + clear_bit_unlock(__TXGBE_PTP_TX_IN_PROGRESS, &adapter->state); +} + +/** + * txgbe_ptp_tx_hwtstamp - utility function which checks for TX time stamp + * @adapter: the private adapter struct + * + * if the timestamp is valid, we convert it into the timecounter ns + * value, then store that result into the shhwtstamps structure which + * is passed up the network stack + */ +static void txgbe_ptp_tx_hwtstamp(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + struct skb_shared_hwtstamps shhwtstamps; + u64 regval = 0; + + regval |= (u64)rd32(hw, TXGBE_TSC_1588_STMPL); + regval |= (u64)rd32(hw, TXGBE_TSC_1588_STMPH) << 32; + + txgbe_ptp_convert_to_hwtstamp(adapter, &shhwtstamps, regval); + skb_tstamp_tx(adapter->ptp_tx_skb, &shhwtstamps); + + txgbe_ptp_clear_tx_timestamp(adapter); +} + +/** + * txgbe_ptp_tx_hwtstamp_work + * @work: pointer to the work struct + * + * This work item polls the TSYNCTXCTL valid bit to determine when a Tx hardware + * timestamp has been taken for the current skb. It is necessary, because the + * descriptor's "done" bit does not correlate with the timestamp event.
+ */ +static void txgbe_ptp_tx_hwtstamp_work(struct work_struct *work) +{ + struct txgbe_adapter *adapter = container_of(work, struct txgbe_adapter, + ptp_tx_work); + struct txgbe_hw *hw = &adapter->hw; + bool timeout = time_is_before_jiffies(adapter->ptp_tx_start + + TXGBE_PTP_TX_TIMEOUT); + u32 tsynctxctl; + + /* we have to have a valid skb to poll for a timestamp */ + if (!adapter->ptp_tx_skb) { + txgbe_ptp_clear_tx_timestamp(adapter); + return; + } + + /* stop polling once we have a valid timestamp */ + tsynctxctl = rd32(hw, TXGBE_TSC_1588_CTL); + if (tsynctxctl & TXGBE_TSC_1588_CTL_VALID) { + txgbe_ptp_tx_hwtstamp(adapter); + return; + } + + /* check timeout last in case timestamp event just occurred */ + if (timeout) { + txgbe_ptp_clear_tx_timestamp(adapter); + adapter->tx_hwtstamp_timeouts++; + e_warn(drv, "clearing Tx Timestamp hang"); + } else { + /* reschedule to keep checking until we timeout */ + schedule_work(&adapter->ptp_tx_work); + } +} + +/** + * txgbe_ptp_rx_hwtstamp - utility function which checks for RX time stamp + * @adapter: the private adapter structure + * @skb: particular skb to send timestamp with + * + * if the timestamp is valid, we convert it into the timecounter ns + * value, then store that result into the shhwtstamps structure which + * is passed up the network stack + */ +void txgbe_ptp_rx_hwtstamp(struct txgbe_adapter *adapter, struct sk_buff *skb) +{ + struct txgbe_hw *hw = &adapter->hw; + u64 regval = 0; + u32 tsyncrxctl; + + /* + * Read the tsyncrxctl register afterwards in order to prevent taking an + * I/O hit on every packet.
+ */ + tsyncrxctl = rd32(hw, TXGBE_PSR_1588_CTL); + if (!(tsyncrxctl & TXGBE_PSR_1588_CTL_VALID)) + return; + + regval |= (u64)rd32(hw, TXGBE_PSR_1588_STMPL); + regval |= (u64)rd32(hw, TXGBE_PSR_1588_STMPH) << 32; + + txgbe_ptp_convert_to_hwtstamp(adapter, skb_hwtstamps(skb), regval); +} + +/** + * txgbe_ptp_get_ts_config - get current hardware timestamping configuration + * @adapter: pointer to adapter structure + * @ifreq: ioctl data + * + * This function returns the current timestamping settings. Rather than + * attempt to deconstruct registers to fill in the values, simply keep a copy + * of the old settings around, and return a copy when requested. + */ +int txgbe_ptp_get_ts_config(struct txgbe_adapter *adapter, struct ifreq *ifr) +{ + struct hwtstamp_config *config = &adapter->tstamp_config; + + return copy_to_user(ifr->ifr_data, config, + sizeof(*config)) ? -EFAULT : 0; +} + +/** + * txgbe_ptp_set_timestamp_mode - setup the hardware for the requested mode + * @adapter: the private txgbe adapter structure + * @config: the hwtstamp configuration requested + * + * Outgoing time stamping can be enabled and disabled. Play nice and + * disable it when requested, although it shouldn't cause any overhead + * when no packet needs it. At most one packet in the queue may be + * marked for time stamping, otherwise it would be impossible to tell + * for sure to which packet the hardware time stamp belongs. + * + * Incoming time stamping has to be configured via the hardware + * filters. Not all combinations are supported, in particular event + * type has to be specified. Matching the kind of event packet is + * not supported, with the exception of "all V2 events regardless of + * level 2 or 4". + * + * Since hardware always timestamps Path delay packets when timestamping V2 + * packets, regardless of the type specified in the register, only use V2 + * Event mode. This more accurately tells the user what the hardware is going + * to do anyways. 
+ * + * Note: this may modify the hwtstamp configuration towards a more general + * mode, if required to support the specifically requested mode. + */ +static int txgbe_ptp_set_timestamp_mode(struct txgbe_adapter *adapter, + struct hwtstamp_config *config) +{ + struct txgbe_hw *hw = &adapter->hw; + u32 tsync_tx_ctl = TXGBE_TSC_1588_CTL_ENABLED; + u32 tsync_rx_ctl = TXGBE_PSR_1588_CTL_ENABLED; + u32 tsync_rx_mtrl = PTP_EV_PORT << 16; + bool is_l2 = false; + u32 regval; + + /* reserved for future extensions */ + if (config->flags) + return -EINVAL; + + switch (config->tx_type) { + case HWTSTAMP_TX_OFF: + tsync_tx_ctl = 0; + /* fall through */ + case HWTSTAMP_TX_ON: + break; + default: + return -ERANGE; + } + + switch (config->rx_filter) { + case HWTSTAMP_FILTER_NONE: + tsync_rx_ctl = 0; + tsync_rx_mtrl = 0; + adapter->flags &= ~(TXGBE_FLAG_RX_HWTSTAMP_ENABLED | + TXGBE_FLAG_RX_HWTSTAMP_IN_REGISTER); + break; + case HWTSTAMP_FILTER_PTP_V1_L4_SYNC: + tsync_rx_ctl |= TXGBE_PSR_1588_CTL_TYPE_L4_V1; + tsync_rx_mtrl |= TXGBE_PSR_1588_MSGTYPE_V1_SYNC_MSG; + adapter->flags |= (TXGBE_FLAG_RX_HWTSTAMP_ENABLED | + TXGBE_FLAG_RX_HWTSTAMP_IN_REGISTER); + break; + case HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ: + tsync_rx_ctl |= TXGBE_PSR_1588_CTL_TYPE_L4_V1; + tsync_rx_mtrl |= TXGBE_PSR_1588_MSGTYPE_V1_DELAY_REQ_MSG; + adapter->flags |= (TXGBE_FLAG_RX_HWTSTAMP_ENABLED | + TXGBE_FLAG_RX_HWTSTAMP_IN_REGISTER); + break; + case HWTSTAMP_FILTER_PTP_V2_EVENT: + case HWTSTAMP_FILTER_PTP_V2_L2_EVENT: + case HWTSTAMP_FILTER_PTP_V2_L4_EVENT: + case HWTSTAMP_FILTER_PTP_V2_SYNC: + case HWTSTAMP_FILTER_PTP_V2_L2_SYNC: + case HWTSTAMP_FILTER_PTP_V2_L4_SYNC: + case HWTSTAMP_FILTER_PTP_V2_DELAY_REQ: + case HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ: + case HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ: + tsync_rx_ctl |= TXGBE_PSR_1588_CTL_TYPE_EVENT_V2; + is_l2 = true; + config->rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT; + adapter->flags |= (TXGBE_FLAG_RX_HWTSTAMP_ENABLED | + TXGBE_FLAG_RX_HWTSTAMP_IN_REGISTER); + break; + case
HWTSTAMP_FILTER_PTP_V1_L4_EVENT: + case HWTSTAMP_FILTER_ALL: + default: + /* register RXMTRL must be set in order to do V1 packets, + * therefore it is not possible to time stamp both V1 Sync and + * Delay_Req messages unless hardware supports timestamping all + * packets => return error + */ + adapter->flags &= ~(TXGBE_FLAG_RX_HWTSTAMP_ENABLED | + TXGBE_FLAG_RX_HWTSTAMP_IN_REGISTER); + config->rx_filter = HWTSTAMP_FILTER_NONE; + return -ERANGE; + } + + /* define ethertype filter for timestamping L2 packets */ + if (is_l2) + wr32(hw, + TXGBE_PSR_ETYPE_SWC(TXGBE_PSR_ETYPE_SWC_FILTER_1588), + (TXGBE_PSR_ETYPE_SWC_FILTER_EN | /* enable filter */ + TXGBE_PSR_ETYPE_SWC_1588 | /* enable timestamping */ + ETH_P_1588)); /* 1588 eth protocol type */ + else + wr32(hw, + TXGBE_PSR_ETYPE_SWC(TXGBE_PSR_ETYPE_SWC_FILTER_1588), + 0); + + /* enable/disable TX */ + regval = rd32(hw, TXGBE_TSC_1588_CTL); + regval &= ~TXGBE_TSC_1588_CTL_ENABLED; + regval |= tsync_tx_ctl; + wr32(hw, TXGBE_TSC_1588_CTL, regval); + + /* enable/disable RX */ + regval = rd32(hw, TXGBE_PSR_1588_CTL); + regval &= ~(TXGBE_PSR_1588_CTL_ENABLED | TXGBE_PSR_1588_CTL_TYPE_MASK); + regval |= tsync_rx_ctl; + wr32(hw, TXGBE_PSR_1588_CTL, regval); + + /* define which PTP packets are time stamped */ + wr32(hw, TXGBE_PSR_1588_MSGTYPE, tsync_rx_mtrl); + + TXGBE_WRITE_FLUSH(hw); + + /* clear TX/RX timestamp state, just to be sure */ + txgbe_ptp_clear_tx_timestamp(adapter); + rd32(hw, TXGBE_PSR_1588_STMPH); + + return 0; +} + +/** + * txgbe_ptp_set_ts_config - user entry point for timestamp mode + * @adapter: pointer to adapter struct + * @ifreq: ioctl data + * + * Set hardware to requested mode. If unsupported, return an error with no + * changes. Otherwise, store the mode for future reference. 
+ */ +int txgbe_ptp_set_ts_config(struct txgbe_adapter *adapter, struct ifreq *ifr) +{ + struct hwtstamp_config config; + int err; + + if (copy_from_user(&config, ifr->ifr_data, sizeof(config))) + return -EFAULT; + + err = txgbe_ptp_set_timestamp_mode(adapter, &config); + if (err) + return err; + + /* save these settings for future reference */ + memcpy(&adapter->tstamp_config, &config, + sizeof(adapter->tstamp_config)); + + return copy_to_user(ifr->ifr_data, &config, sizeof(config)) ? + -EFAULT : 0; +} + +static void txgbe_ptp_link_speed_adjust(struct txgbe_adapter *adapter, + u32 *shift, u32 *incval) +{ + /** + * Scale the NIC cycle counter by a large factor so that + * relatively small corrections to the frequency can be added + * or subtracted. The drawbacks of a large factor include + * (a) the clock register overflows more quickly, (b) the cycle + * counter structure must be able to convert the systime value + * to nanoseconds using only a multiplier and a right-shift, + * and (c) the value must fit within the timinca register space + * => math based on internal DMA clock rate and available bits + * + * Note that when there is no link, internal DMA clock is same as when + * link speed is 10Gb. 
Set the registers correctly even when link is + * down to preserve the clock setting + */ + switch (adapter->link_speed) { + case TXGBE_LINK_SPEED_10_FULL: + *shift = TXGBE_INCVAL_SHIFT_10; + *incval = TXGBE_INCVAL_10; + break; + case TXGBE_LINK_SPEED_100_FULL: + *shift = TXGBE_INCVAL_SHIFT_100; + *incval = TXGBE_INCVAL_100; + break; + case TXGBE_LINK_SPEED_1GB_FULL: + *shift = TXGBE_INCVAL_SHIFT_FPGA; + *incval = TXGBE_INCVAL_FPGA; + break; + case TXGBE_LINK_SPEED_10GB_FULL: + default: /* TXGBE_LINK_SPEED_10GB_FULL */ + *shift = TXGBE_INCVAL_SHIFT_10GB; + *incval = TXGBE_INCVAL_10GB; + break; + } + + return; +} + +/** + * txgbe_ptp_start_cyclecounter - create the cycle counter from hw + * @adapter: pointer to the adapter structure + * + * This function should be called to set the proper values for the TIMINCA + * register and tell the cyclecounter structure what the tick rate of SYSTIME + * is. It does not directly modify SYSTIME registers or the timecounter + * structure. It should be called whenever a new TIMINCA value is necessary, + * such as during initialization or when the link speed changes. + */ +void txgbe_ptp_start_cyclecounter(struct txgbe_adapter *adapter) +{ + struct txgbe_hw *hw = &adapter->hw; + unsigned long flags; + struct cyclecounter cc; + u32 incval = 0; + + /* For some of the boards below this mask is technically incorrect. + * The timestamp mask overflows at approximately 61bits. However the + * particular hardware does not overflow on an even bitmask value. + * Instead, it overflows due to conversion of upper 32bits billions of + * cycles. Timecounters are not really intended for this purpose so + * they do not properly function if the overflow point isn't 2^N-1. + * However, the actual SYSTIME values in question take ~138 years to + * overflow. In practice this means they won't actually overflow. A + * proper fix to this problem would require modification of the + * timecounter delta calculations. 
+ */ + cc.mask = CLOCKSOURCE_MASK(64); + cc.mult = 1; + cc.shift = 0; + + cc.read = txgbe_ptp_read; + txgbe_ptp_link_speed_adjust(adapter, &cc.shift, &incval); + wr32(hw, TXGBE_TSC_1588_INC, + TXGBE_TSC_1588_INC_IVP(incval, 2)); + + /* update the base incval used to calculate frequency adjustment */ + WRITE_ONCE(adapter->base_incval, incval); + smp_mb(); + + /* need lock to prevent incorrect read while modifying cyclecounter */ + spin_lock_irqsave(&adapter->tmreg_lock, flags); + memcpy(&adapter->hw_cc, &cc, sizeof(adapter->hw_cc)); + spin_unlock_irqrestore(&adapter->tmreg_lock, flags); +} + +/** + * txgbe_ptp_reset + * @adapter: the txgbe private board structure + * + * When the MAC resets, all of the hardware configuration for timesync is + * reset. This function should be called to re-enable the device for PTP, + * using the last known settings. However, we do lose the current clock time, + * so we fall back to resetting it based on the kernel's realtime clock. + * + * This function will maintain the hwtstamp_config settings, and it retriggers + * the SDP output if it's enabled. + */ +void txgbe_ptp_reset(struct txgbe_adapter *adapter) +{ + unsigned long flags; + + /* reset the hardware timestamping mode */ + txgbe_ptp_set_timestamp_mode(adapter, &adapter->tstamp_config); + txgbe_ptp_start_cyclecounter(adapter); + + spin_lock_irqsave(&adapter->tmreg_lock, flags); + timecounter_init(&adapter->hw_tc, &adapter->hw_cc, + ktime_to_ns(ktime_get_real())); + spin_unlock_irqrestore(&adapter->tmreg_lock, flags); + + adapter->last_overflow_check = jiffies; +} + +/** + * txgbe_ptp_create_clock + * @adapter: the txgbe private adapter structure + * + * This function performs setup of the user entry point function table and + * initializes the PTP clock device used by userspace to access the clock-like + * features of the PTP core. It will be called by txgbe_ptp_init, and may + * re-use a previously initialized clock (such as during a suspend/resume + * cycle.
+ */ + +static long txgbe_ptp_create_clock(struct txgbe_adapter *adapter) +{ + struct net_device *netdev = adapter->netdev; + long err; + + /* do nothing if we already have a clock device */ + if (!IS_ERR_OR_NULL(adapter->ptp_clock)) + return 0; + + snprintf(adapter->ptp_caps.name, sizeof(adapter->ptp_caps.name), + "%s", netdev->name); + adapter->ptp_caps.owner = THIS_MODULE; + adapter->ptp_caps.max_adj = 250000000; /* 10^-9s */ + adapter->ptp_caps.n_alarm = 0; + adapter->ptp_caps.n_ext_ts = 0; + adapter->ptp_caps.n_per_out = 0; + adapter->ptp_caps.pps = 0; + adapter->ptp_caps.adjfreq = txgbe_ptp_adjfreq; + adapter->ptp_caps.adjtime = txgbe_ptp_adjtime; + adapter->ptp_caps.gettime64 = txgbe_ptp_gettime64; + adapter->ptp_caps.settime64 = txgbe_ptp_settime64; + adapter->ptp_caps.enable = txgbe_ptp_feature_enable; + + adapter->ptp_clock = ptp_clock_register(&adapter->ptp_caps, + pci_dev_to_dev(adapter->pdev)); + if (IS_ERR(adapter->ptp_clock)) { + err = PTR_ERR(adapter->ptp_clock); + adapter->ptp_clock = NULL; + e_dev_err("ptp_clock_register failed\n"); + return err; + } else + e_dev_info("registered PHC device on %s\n", netdev->name); + + /* Set the default timestamp mode to disabled here. We do this in + * create_clock instead of initialization, because we don't want to + * override the previous settings during a suspend/resume cycle. + */ + adapter->tstamp_config.rx_filter = HWTSTAMP_FILTER_NONE; + adapter->tstamp_config.tx_type = HWTSTAMP_TX_OFF; + + return 0; +} + +/** + * txgbe_ptp_init + * @adapter: the txgbe private adapter structure + * + * This function performs the required steps for enabling ptp + * support. If ptp support has already been loaded it simply calls the + * cyclecounter init routine and exits. + */ +void txgbe_ptp_init(struct txgbe_adapter *adapter) +{ + /* initialize the spin lock first, since the user might call the clock + * functions any time after we've initialized the ptp clock device. 
+ */ + spin_lock_init(&adapter->tmreg_lock); + + /* obtain a ptp clock device, or re-use an existing device */ + if (txgbe_ptp_create_clock(adapter)) + return; + + /* we have a clock, so we can initialize work for timestamps now */ + INIT_WORK(&adapter->ptp_tx_work, txgbe_ptp_tx_hwtstamp_work); + + /* reset the ptp related hardware bits */ + txgbe_ptp_reset(adapter); + + /* enter the TXGBE_PTP_RUNNING state */ + set_bit(__TXGBE_PTP_RUNNING, &adapter->state); + + return; +} + +/** + * txgbe_ptp_suspend - stop ptp work items + * @adapter: pointer to adapter struct + * + * This function suspends ptp activity, and prevents more work from being + * generated, but does not destroy the clock device. + */ +void txgbe_ptp_suspend(struct txgbe_adapter *adapter) +{ + /* leave the TXGBE_PTP_RUNNING state */ + if (!test_and_clear_bit(__TXGBE_PTP_RUNNING, &adapter->state)) + return; + + adapter->flags2 &= ~TXGBE_FLAG2_PTP_PPS_ENABLED; + + cancel_work_sync(&adapter->ptp_tx_work); + txgbe_ptp_clear_tx_timestamp(adapter); +} + +/** + * txgbe_ptp_stop - destroy the ptp_clock device + * @adapter: pointer to adapter struct + * + * Completely destroy the ptp_clock device, and disable all PTP related + * features. Intended to be run when the device is being closed. + */ +void txgbe_ptp_stop(struct txgbe_adapter *adapter) +{ + /* first, suspend ptp activity */ + txgbe_ptp_suspend(adapter); + + /* now destroy the ptp clock device */ + if (adapter->ptp_clock) { + ptp_clock_unregister(adapter->ptp_clock); + adapter->ptp_clock = NULL; + e_dev_info("removed PHC on %s\n", + adapter->netdev->name); + } +} diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe_type.h b/drivers/net/ethernet/netswift/txgbe/txgbe_type.h new file mode 100644 index 000000000000..2f62819a848a --- /dev/null +++ b/drivers/net/ethernet/netswift/txgbe/txgbe_type.h @@ -0,0 +1,3213 @@ +/* + * WangXun 10 Gigabit PCI Express Linux driver + * Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd.
+ * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * The full GNU General Public License is included in this distribution in + * the file called "COPYING". + * + * based on ixgbe_type.h, Copyright(c) 1999 - 2017 Intel Corporation. + * Contact Information: + * Linux NICS linux.nics@intel.com + * e1000-devel Mailing List e1000-devel@lists.sourceforge.net + * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497 + */ + + +#ifndef _TXGBE_TYPE_H_ +#define _TXGBE_TYPE_H_ + +#include <linux/types.h> +#include <linux/mdio.h> +#include <linux/netdevice.h> + +/* + * The following is a brief description of the error categories used by the + * ERROR_REPORT* macros. + * + * - TXGBE_ERROR_INVALID_STATE + * This category is for errors which represent a serious failure state that is + * unexpected, and could be potentially harmful to device operation. It should + * not be used for errors relating to issues that can be worked around or + * ignored. + * + * - TXGBE_ERROR_POLLING + * This category is for errors related to polling/timeout issues and should be + * used in any case where the timeout occurred, or a failure to obtain a lock, or + * failure to receive data within the time limit. + * + * - TXGBE_ERROR_CAUTION + * This category should be used for reporting issues that may be the cause of + * other errors, such as temperature warnings. It should indicate an event which + * could be serious, but hasn't necessarily caused problems yet. + * + * - TXGBE_ERROR_SOFTWARE + * This category is intended for errors due to software state preventing + * something.
The category is not intended for errors due to bad arguments, or + * due to unsupported features. It should be used when a state occurs which + * prevents action but is not a serious issue. + * + * - TXGBE_ERROR_ARGUMENT + * This category is for when a bad or invalid argument is passed. It should be + * used whenever a function is called and error checking has detected the + * argument is wrong or incorrect. + * + * - TXGBE_ERROR_UNSUPPORTED + * This category is for errors which are due to unsupported circumstances or + * configuration issues. It should not be used when the issue is due to an + * invalid argument, but for when something has occurred that is unsupported + * (Ex: Flow control autonegotiation or an unsupported SFP+ module.) + */ + +#include "txgbe_mtd.h" + +/* Little Endian defines */ +#ifndef __le16 +#define __le16 u16 +#endif +#ifndef __le32 +#define __le32 u32 +#endif +#ifndef __le64 +#define __le64 u64 + +#endif +#ifndef __be16 +/* Big Endian defines */ +#define __be16 u16 +#define __be32 u32 +#define __be64 u64 + +#endif + +/************ txgbe_register.h ************/ +/* Vendor ID */ +#ifndef PCI_VENDOR_ID_TRUSTNETIC +#define PCI_VENDOR_ID_TRUSTNETIC 0x8088 +#endif + +/* Device IDs */ +#define TXGBE_DEV_ID_SP1000 0x1001 +#define TXGBE_DEV_ID_WX1820 0x2001 + +/* Subsystem IDs */ +/* SFP */ +#define TXGBE_ID_SP1000_SFP 0x0000 +#define TXGBE_ID_WX1820_SFP 0x2000 +#define TXGBE_ID_SFP 0x00 + +/* copper */ +#define TXGBE_ID_SP1000_XAUI 0x1010 +#define TXGBE_ID_WX1820_XAUI 0x2010 +#define TXGBE_ID_XAUI 0x10 +#define TXGBE_ID_SP1000_SGMII 0x1020 +#define TXGBE_ID_WX1820_SGMII 0x2020 +#define TXGBE_ID_SGMII 0x20 +/* backplane */ +#define TXGBE_ID_SP1000_KR_KX_KX4 0x1030 +#define TXGBE_ID_WX1820_KR_KX_KX4 0x2030 +#define TXGBE_ID_KR_KX_KX4 0x30 +/* MAC Interface */ +#define TXGBE_ID_SP1000_MAC_XAUI 0x1040 +#define TXGBE_ID_WX1820_MAC_XAUI 0x2040 +#define TXGBE_ID_MAC_XAUI 0x40 +#define TXGBE_ID_SP1000_MAC_SGMII 0x1060 +#define TXGBE_ID_WX1820_MAC_SGMII 
0x2060
+#define TXGBE_ID_MAC_SGMII                      0x60
+
+#define TXGBE_NCSI_SUP                          0x8000
+#define TXGBE_NCSI_MASK                         0x8000
+#define TXGBE_WOL_SUP                           0x4000
+#define TXGBE_WOL_MASK                          0x4000
+
+
+/* Combined interface */
+#define TXGBE_ID_SFI_XAUI                       0x50
+
+/* Revision ID */
+#define TXGBE_SP_MPW                            1
+
+/* MDIO Manageable Devices (MMDs). */
+#define TXGBE_MDIO_PMA_PMD_DEV_TYPE             0x1 /* PMA and PMD */
+#define TXGBE_MDIO_PCS_DEV_TYPE                 0x3 /* Physical Coding Sublayer */
+#define TXGBE_MDIO_PHY_XS_DEV_TYPE              0x4 /* PHY Extender Sublayer */
+#define TXGBE_MDIO_AUTO_NEG_DEV_TYPE            0x7 /* Auto-Negotiation */
+#define TXGBE_MDIO_VENDOR_SPECIFIC_1_DEV_TYPE   0x1E /* Vendor specific 1 */
+
+/* phy register definitions */
+/* VENDOR_SPECIFIC_1_DEV regs */
+#define TXGBE_MDIO_VENDOR_SPECIFIC_1_STATUS         0x1 /* VS1 Status Reg */
+#define TXGBE_MDIO_VENDOR_SPECIFIC_1_LINK_STATUS    0x0008 /* 1 = Link Up */
+#define TXGBE_MDIO_VENDOR_SPECIFIC_1_SPEED_STATUS   0x0010 /* 0-10G, 1-1G */
+
+/* AUTO_NEG_DEV regs */
+#define TXGBE_MDIO_AUTO_NEG_CONTROL             0x0 /* AUTO_NEG Control Reg */
+#define TXGBE_MDIO_AUTO_NEG_ADVT                0x10 /* AUTO_NEG Advt Reg */
+#define TXGBE_MDIO_AUTO_NEG_LP                  0x13 /* AUTO_NEG LP Reg */
+#define TXGBE_MDIO_AUTO_NEG_LP_STATUS           0xE820 /* AUTO NEG RX LP Status Reg */
+#define TXGBE_MII_10GBASE_T_AUTONEG_CTRL_REG        0x20 /* 10G Control Reg */
+#define TXGBE_MII_AUTONEG_VENDOR_PROVISION_1_REG    0xC400 /* 1G Provisioning 1 */
+#define TXGBE_MII_AUTONEG_XNP_TX_REG                0x17 /* 1G XNP Transmit */
+#define TXGBE_MII_AUTONEG_ADVERTISE_REG             0x10 /* 100M Advertisement */
+
+
+#define TXGBE_MDIO_AUTO_NEG_10GBASE_EEE_ADVT    0x8
+#define TXGBE_MDIO_AUTO_NEG_1000BASE_EEE_ADVT   0x4
+#define TXGBE_MDIO_AUTO_NEG_100BASE_EEE_ADVT    0x2
+#define TXGBE_MDIO_AUTO_NEG_LP_1000BASE_CAP     0x8000
+#define TXGBE_MDIO_AUTO_NEG_LP_10GBASE_CAP      0x0800
+#define TXGBE_MDIO_AUTO_NEG_10GBASET_STAT       0x0021
+
+#define TXGBE_MII_10GBASE_T_ADVERTISE           0x1000 /* full duplex, bit:12 */
+#define TXGBE_MII_1GBASE_T_ADVERTISE_XNP_TX     0x4000 /* full duplex, bit:14 */
+#define TXGBE_MII_1GBASE_T_ADVERTISE            0x8000 /* full duplex, bit:15 */
+#define TXGBE_MII_100BASE_T_ADVERTISE           0x0100 /* full duplex, bit:8 */
+#define TXGBE_MII_100BASE_T_ADVERTISE_HALF      0x0080 /* half duplex, bit:7 */
+#define TXGBE_MII_RESTART                       0x200
+#define TXGBE_MII_AUTONEG_COMPLETE              0x20
+#define TXGBE_MII_AUTONEG_LINK_UP               0x04
+#define TXGBE_MII_AUTONEG_REG                   0x0
+
+/* PHY_XS_DEV regs */
+#define TXGBE_MDIO_PHY_XS_CONTROL               0x0 /* PHY_XS Control Reg */
+#define TXGBE_MDIO_PHY_XS_RESET                 0x8000 /* PHY_XS Reset */
+
+/* Media-dependent registers. */
+#define TXGBE_MDIO_PHY_ID_HIGH                  0x2 /* PHY ID High Reg */
+#define TXGBE_MDIO_PHY_ID_LOW                   0x3 /* PHY ID Low Reg */
+#define TXGBE_MDIO_PHY_SPEED_ABILITY            0x4 /* Speed Ability Reg */
+#define TXGBE_MDIO_PHY_EXT_ABILITY              0xB /* Ext Ability Reg */
+
+#define TXGBE_MDIO_PHY_SPEED_10G                0x0001 /* 10G capable */
+#define TXGBE_MDIO_PHY_SPEED_1G                 0x0010 /* 1G capable */
+#define TXGBE_MDIO_PHY_SPEED_100M               0x0020 /* 100M capable */
+#define TXGBE_MDIO_PHY_SPEED_10M                0x0040 /* 10M capable */
+
+#define TXGBE_MDIO_PHY_10GBASET_ABILITY         0x0004 /* 10GBaseT capable */
+#define TXGBE_MDIO_PHY_1000BASET_ABILITY        0x0020 /* 1000BaseT capable */
+#define TXGBE_MDIO_PHY_100BASETX_ABILITY        0x0080 /* 100BaseTX capable */
+
+#define TXGBE_PHY_REVISION_MASK                 0xFFFFFFF0U
+#define TXGBE_MAX_PHY_ADDR                      32
+
+/* PHY IDs */
+#define TN1010_PHY_ID                           0x00A19410U
+#define QT2022_PHY_ID                           0x0043A400U
+#define ATH_PHY_ID                              0x03429050U
+/* PHY FW revision */
+#define TNX_FW_REV                              0xB
+#define AQ_FW_REV                               0x20
+
+/* ETH PHY Registers */
+#define TXGBE_SR_XS_PCS_MMD_STATUS1             0x30001
+#define TXGBE_SR_PCS_CTL2                       0x30007
+#define TXGBE_SR_PMA_MMD_CTL1                   0x10000
+#define TXGBE_SR_MII_MMD_CTL                    0x1F0000
+#define TXGBE_SR_MII_MMD_DIGI_CTL               0x1F8000
+#define TXGBE_SR_MII_MMD_AN_CTL                 0x1F8001
+#define TXGBE_SR_MII_MMD_AN_ADV                 0x1F0004
+#define TXGBE_SR_MII_MMD_AN_ADV_PAUSE(_v)       ((0x3 & (_v)) << 7)
+#define TXGBE_SR_MII_MMD_AN_ADV_PAUSE_ASM       0x80
+#define TXGBE_SR_MII_MMD_AN_ADV_PAUSE_SYM       0x100
+#define TXGBE_SR_MII_MMD_LP_BABL                0x1F0005
+#define TXGBE_SR_AN_MMD_CTL                     0x70000
+#define TXGBE_SR_AN_MMD_ADV_REG1                0x70010
+#define TXGBE_SR_AN_MMD_ADV_REG1_PAUSE(_v)      ((0x3 & (_v)) << 10)
+#define TXGBE_SR_AN_MMD_ADV_REG1_PAUSE_SYM      0x400
+#define TXGBE_SR_AN_MMD_ADV_REG1_PAUSE_ASM      0x800
+#define TXGBE_SR_AN_MMD_ADV_REG2                0x70011
+#define TXGBE_SR_AN_MMD_LP_ABL1                 0x70013
+#define TXGBE_VR_AN_KR_MODE_CL                  0x78003
+#define TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1        0x38000
+#define TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS      0x38010
+#define TXGBE_PHY_MPLLA_CTL0                    0x18071
+#define TXGBE_PHY_MPLLA_CTL3                    0x18077
+#define TXGBE_PHY_MISC_CTL0                     0x18090
+#define TXGBE_PHY_VCO_CAL_LD0                   0x18092
+#define TXGBE_PHY_VCO_CAL_LD1                   0x18093
+#define TXGBE_PHY_VCO_CAL_LD2                   0x18094
+#define TXGBE_PHY_VCO_CAL_LD3                   0x18095
+#define TXGBE_PHY_VCO_CAL_REF0                  0x18096
+#define TXGBE_PHY_VCO_CAL_REF1                  0x18097
+#define TXGBE_PHY_RX_AD_ACK                     0x18098
+#define TXGBE_PHY_AFE_DFE_ENABLE                0x1805D
+#define TXGBE_PHY_DFE_TAP_CTL0                  0x1805E
+#define TXGBE_PHY_RX_EQ_ATT_LVL0                0x18057
+#define TXGBE_PHY_RX_EQ_CTL0                    0x18058
+#define TXGBE_PHY_RX_EQ_CTL                     0x1805C
+#define TXGBE_PHY_TX_EQ_CTL0                    0x18036
+#define TXGBE_PHY_TX_EQ_CTL1                    0x18037
+#define TXGBE_PHY_TX_RATE_CTL                   0x18034
+#define TXGBE_PHY_RX_RATE_CTL                   0x18054
+#define TXGBE_PHY_TX_GEN_CTL2                   0x18032
+#define TXGBE_PHY_RX_GEN_CTL2                   0x18052
+#define TXGBE_PHY_RX_GEN_CTL3                   0x18053
+#define TXGBE_PHY_MPLLA_CTL2                    0x18073
+#define TXGBE_PHY_RX_POWER_ST_CTL               0x18055
+#define TXGBE_PHY_TX_POWER_ST_CTL               0x18035
+#define TXGBE_PHY_TX_GENCTRL1                   0x18031
+
+#define TXGBE_SR_PCS_CTL2_PCS_TYPE_SEL_R        0x0
+#define TXGBE_SR_PCS_CTL2_PCS_TYPE_SEL_X        0x1
+#define TXGBE_SR_PCS_CTL2_PCS_TYPE_SEL_MASK     0x3
+#define TXGBE_SR_PMA_MMD_CTL1_SPEED_SEL_1G      0x0
+#define TXGBE_SR_PMA_MMD_CTL1_SPEED_SEL_10G     0x2000
+#define TXGBE_SR_PMA_MMD_CTL1_SPEED_SEL_MASK    0x2000
+#define TXGBE_SR_PMA_MMD_CTL1_LB_EN             0x1
+#define TXGBE_SR_MII_MMD_CTL_AN_EN              0x1000
+#define TXGBE_SR_MII_MMD_CTL_RESTART_AN         0x0200
+#define TXGBE_SR_AN_MMD_CTL_RESTART_AN          0x0200
+#define TXGBE_SR_AN_MMD_CTL_ENABLE              0x1000
+#define TXGBE_SR_AN_MMD_ADV_REG2_BP_TYPE_KX4    0x40
+#define TXGBE_SR_AN_MMD_ADV_REG2_BP_TYPE_KX     0x20
+#define TXGBE_SR_AN_MMD_ADV_REG2_BP_TYPE_KR     0x80
+#define TXGBE_SR_AN_MMD_ADV_REG2_BP_TYPE_MASK   0xFFFF
+#define TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1_ENABLE 0x1000
+#define TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1_VR_RST 0x8000
+#define TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_MASK        0x1C
+#define TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_POWER_GOOD  0x10
+
+#define TXGBE_PHY_MPLLA_CTL0_MULTIPLIER_1GBASEX_KX          32
+#define TXGBE_PHY_MPLLA_CTL0_MULTIPLIER_10GBASER_KR         33
+#define TXGBE_PHY_MPLLA_CTL0_MULTIPLIER_OTHER               40
+#define TXGBE_PHY_MPLLA_CTL0_MULTIPLIER_MASK                0xFF
+#define TXGBE_PHY_MPLLA_CTL3_MULTIPLIER_BW_1GBASEX_KX       0x56
+#define TXGBE_PHY_MPLLA_CTL3_MULTIPLIER_BW_10GBASER_KR      0x7B
+#define TXGBE_PHY_MPLLA_CTL3_MULTIPLIER_BW_OTHER            0x56
+#define TXGBE_PHY_MPLLA_CTL3_MULTIPLIER_BW_MASK             0x7FF
+#define TXGBE_PHY_MISC_CTL0_TX2RX_LB_EN_0       0x1
+#define TXGBE_PHY_MISC_CTL0_TX2RX_LB_EN_3_1     0xE
+#define TXGBE_PHY_MISC_CTL0_RX_VREF_CTRL        0x1F00
+#define TXGBE_PHY_VCO_CAL_LD0_1GBASEX_KX        1344
+#define TXGBE_PHY_VCO_CAL_LD0_10GBASER_KR       1353
+#define TXGBE_PHY_VCO_CAL_LD0_OTHER             1360
+#define TXGBE_PHY_VCO_CAL_LD0_MASK              0x1000
+#define TXGBE_PHY_VCO_CAL_REF0_LD0_1GBASEX_KX   42
+#define TXGBE_PHY_VCO_CAL_REF0_LD0_10GBASER_KR  41
+#define TXGBE_PHY_VCO_CAL_REF0_LD0_OTHER        34
+#define TXGBE_PHY_VCO_CAL_REF0_LD0_MASK         0x3F
+#define TXGBE_PHY_AFE_DFE_ENABLE_DFE_EN0        0x10
+#define TXGBE_PHY_AFE_DFE_ENABLE_AFE_EN0        0x1
+#define TXGBE_PHY_AFE_DFE_ENABLE_MASK           0xFF
+#define TXGBE_PHY_RX_EQ_CTL_CONT_ADAPT0         0x1
+#define TXGBE_PHY_RX_EQ_CTL_CONT_ADAPT_MASK     0xF
+#define TXGBE_PHY_TX_RATE_CTL_TX0_RATE_10GBASER_KR  0x0
+#define TXGBE_PHY_TX_RATE_CTL_TX0_RATE_RXAUI        0x1
+#define TXGBE_PHY_TX_RATE_CTL_TX0_RATE_1GBASEX_KX   0x3
+#define TXGBE_PHY_TX_RATE_CTL_TX0_RATE_OTHER        0x2
+#define TXGBE_PHY_TX_RATE_CTL_TX1_RATE_OTHER        0x20
+#define TXGBE_PHY_TX_RATE_CTL_TX2_RATE_OTHER        0x200
+#define TXGBE_PHY_TX_RATE_CTL_TX3_RATE_OTHER        0x2000
+#define TXGBE_PHY_TX_RATE_CTL_TX0_RATE_MASK     0x7
+#define TXGBE_PHY_TX_RATE_CTL_TX1_RATE_MASK     0x70
+#define TXGBE_PHY_TX_RATE_CTL_TX2_RATE_MASK     0x700
+#define TXGBE_PHY_TX_RATE_CTL_TX3_RATE_MASK     0x7000
+#define TXGBE_PHY_RX_RATE_CTL_RX0_RATE_10GBASER_KR  0x0
+#define TXGBE_PHY_RX_RATE_CTL_RX0_RATE_RXAUI        0x1
+#define TXGBE_PHY_RX_RATE_CTL_RX0_RATE_1GBASEX_KX   0x3
+#define TXGBE_PHY_RX_RATE_CTL_RX0_RATE_OTHER        0x2
+#define TXGBE_PHY_RX_RATE_CTL_RX1_RATE_OTHER        0x20
+#define TXGBE_PHY_RX_RATE_CTL_RX2_RATE_OTHER        0x200
+#define TXGBE_PHY_RX_RATE_CTL_RX3_RATE_OTHER        0x2000
+#define TXGBE_PHY_RX_RATE_CTL_RX0_RATE_MASK     0x7
+#define TXGBE_PHY_RX_RATE_CTL_RX1_RATE_MASK     0x70
+#define TXGBE_PHY_RX_RATE_CTL_RX2_RATE_MASK     0x700
+#define TXGBE_PHY_RX_RATE_CTL_RX3_RATE_MASK     0x7000
+#define TXGBE_PHY_TX_GEN_CTL2_TX0_WIDTH_10GBASER_KR         0x200
+#define TXGBE_PHY_TX_GEN_CTL2_TX0_WIDTH_10GBASER_KR_RXAUI   0x300
+#define TXGBE_PHY_TX_GEN_CTL2_TX0_WIDTH_OTHER   0x100
+#define TXGBE_PHY_TX_GEN_CTL2_TX0_WIDTH_MASK    0x300
+#define TXGBE_PHY_TX_GEN_CTL2_TX1_WIDTH_OTHER   0x400
+#define TXGBE_PHY_TX_GEN_CTL2_TX1_WIDTH_MASK    0xC00
+#define TXGBE_PHY_TX_GEN_CTL2_TX2_WIDTH_OTHER   0x1000
+#define TXGBE_PHY_TX_GEN_CTL2_TX2_WIDTH_MASK    0x3000
+#define TXGBE_PHY_TX_GEN_CTL2_TX3_WIDTH_OTHER   0x4000
+#define TXGBE_PHY_TX_GEN_CTL2_TX3_WIDTH_MASK    0xC000
+#define TXGBE_PHY_RX_GEN_CTL2_RX0_WIDTH_10GBASER_KR         0x200
+#define TXGBE_PHY_RX_GEN_CTL2_RX0_WIDTH_10GBASER_KR_RXAUI   0x300
+#define TXGBE_PHY_RX_GEN_CTL2_RX0_WIDTH_OTHER   0x100
+#define TXGBE_PHY_RX_GEN_CTL2_RX0_WIDTH_MASK    0x300
+#define TXGBE_PHY_RX_GEN_CTL2_RX1_WIDTH_OTHER   0x400
+#define TXGBE_PHY_RX_GEN_CTL2_RX1_WIDTH_MASK    0xC00
+#define TXGBE_PHY_RX_GEN_CTL2_RX2_WIDTH_OTHER   0x1000
+#define TXGBE_PHY_RX_GEN_CTL2_RX2_WIDTH_MASK    0x3000
+#define TXGBE_PHY_RX_GEN_CTL2_RX3_WIDTH_OTHER   0x4000
+#define TXGBE_PHY_RX_GEN_CTL2_RX3_WIDTH_MASK    0xC000
+
+#define TXGBE_PHY_MPLLA_CTL2_DIV_CLK_EN_8       0x100
+#define TXGBE_PHY_MPLLA_CTL2_DIV_CLK_EN_10      0x200
+#define TXGBE_PHY_MPLLA_CTL2_DIV_CLK_EN_16P5    0x400
+#define TXGBE_PHY_MPLLA_CTL2_DIV_CLK_EN_MASK    0x700
+
+#define TXGBE_XPCS_POWER_GOOD_MAX_POLLING_TIME  100
+#define TXGBE_PHY_INIT_DONE_POLLING_TIME        100
+
+/**************** Global Registers ****************************/
+/* chip control Registers */
+#define TXGBE_MIS_RST                   0x1000C
+#define TXGBE_MIS_PWR                   0x10000
+#define TXGBE_MIS_CTL                   0x10004
+#define TXGBE_MIS_PF_SM                 0x10008
+#define TXGBE_MIS_ST                    0x10028
+#define TXGBE_MIS_SWSM                  0x1002C
+#define TXGBE_MIS_RST_ST                0x10030
+
+#define TXGBE_MIS_RST_SW_RST            0x00000001U
+#define TXGBE_MIS_RST_LAN0_RST          0x00000002U
+#define TXGBE_MIS_RST_LAN1_RST          0x00000004U
+#define TXGBE_MIS_RST_LAN0_CHG_ETH_MODE 0x20000000U
+#define TXGBE_MIS_RST_LAN1_CHG_ETH_MODE 0x40000000U
+#define TXGBE_MIS_RST_GLOBAL_RST        0x80000000U
+#define TXGBE_MIS_RST_MASK              (TXGBE_MIS_RST_SW_RST | \
+                                         TXGBE_MIS_RST_LAN0_RST | \
+                                         TXGBE_MIS_RST_LAN1_RST)
+#define TXGBE_MIS_PWR_LAN_ID(_r)        ((0xC0000000U & (_r)) >> 30)
+#define TXGBE_MIS_PWR_LAN_ID_0          (1)
+#define TXGBE_MIS_PWR_LAN_ID_1          (2)
+#define TXGBE_MIS_PWR_LAN_ID_A          (3)
+#define TXGBE_MIS_ST_MNG_INIT_DN        0x00000001U
+#define TXGBE_MIS_ST_MNG_VETO           0x00000100U
+#define TXGBE_MIS_ST_LAN0_ECC           0x00010000U
+#define TXGBE_MIS_ST_LAN1_ECC           0x00020000U
+#define TXGBE_MIS_ST_MNG_ECC            0x00040000U
+#define TXGBE_MIS_ST_PCORE_ECC          0x00080000U
+#define TXGBE_MIS_ST_PCIWRP_ECC         0x00100000U
+#define TXGBE_MIS_SWSM_SMBI             1
+#define TXGBE_MIS_RST_ST_DEV_RST_ST_DONE        0x00000000U
+#define TXGBE_MIS_RST_ST_DEV_RST_ST_REQ         0x00080000U
+#define TXGBE_MIS_RST_ST_DEV_RST_ST_INPROGRESS  0x00100000U
+#define TXGBE_MIS_RST_ST_DEV_RST_ST_MASK        0x00180000U
+#define TXGBE_MIS_RST_ST_DEV_RST_TYPE_MASK      0x00070000U
+#define TXGBE_MIS_RST_ST_DEV_RST_TYPE_SHIFT     16
+#define TXGBE_MIS_RST_ST_DEV_RST_TYPE_SW_RST    0x3
+#define TXGBE_MIS_RST_ST_DEV_RST_TYPE_GLOBAL_RST 0x5
+#define TXGBE_MIS_RST_ST_RST_INIT       0x0000FF00U
+#define TXGBE_MIS_RST_ST_RST_INI_SHIFT  8
+#define TXGBE_MIS_RST_ST_RST_TIM        0x000000FFU
+#define TXGBE_MIS_PF_SM_SM              1
+
+/* Sensors for PVT(Process Voltage Temperature) */
+#define TXGBE_TS_CTL                    0x10300
+#define TXGBE_TS_EN                     0x10304
+#define TXGBE_TS_ST                     0x10308
+#define TXGBE_TS_ALARM_THRE             0x1030C
+#define TXGBE_TS_DALARM_THRE            0x10310
+#define TXGBE_TS_INT_EN                 0x10314
+#define TXGBE_TS_ALARM_ST               0x10318
+#define TXGBE_TS_ALARM_ST_DALARM        0x00000002U
+#define TXGBE_TS_ALARM_ST_ALARM         0x00000001U
+
+#define TXGBE_TS_CTL_EVAL_MD            0x80000000U
+#define TXGBE_TS_EN_ENA                 0x00000001U
+#define TXGBE_TS_ST_DATA_OUT_MASK       0x000003FFU
+#define TXGBE_TS_ALARM_THRE_MASK        0x000003FFU
+#define TXGBE_TS_DALARM_THRE_MASK       0x000003FFU
+#define TXGBE_TS_INT_EN_DALARM_INT_EN   0x00000002U
+#define TXGBE_TS_INT_EN_ALARM_INT_EN    0x00000001U
+
+struct txgbe_thermal_diode_data {
+	s16 temp;
+	s16 alarm_thresh;
+	s16 dalarm_thresh;
+};
+
+struct txgbe_thermal_sensor_data {
+	struct txgbe_thermal_diode_data sensor;
+};
+
+
+/* FMGR Registers */
+#define TXGBE_SPI_ILDR_STATUS           0x10120
+#define TXGBE_SPI_ILDR_STATUS_PERST     0x00000001U /* PCIE_PERST is done */
+#define TXGBE_SPI_ILDR_STATUS_PWRRST    0x00000002U /* Power on reset is done */
+#define TXGBE_SPI_ILDR_STATUS_SW_RESET  0x00000080U /* software reset is done */
+#define TXGBE_SPI_ILDR_STATUS_LAN0_SW_RST 0x00000200U /* lan0 soft reset done */
+#define TXGBE_SPI_ILDR_STATUS_LAN1_SW_RST 0x00000400U /* lan1 soft reset done */
+
+#define TXGBE_MAX_FLASH_LOAD_POLL_TIME  10
+
+#define TXGBE_SPI_CMD                   0x10104
+#define TXGBE_SPI_CMD_CMD(_v)           (((_v) & 0x7) << 28)
+#define TXGBE_SPI_CMD_CLK(_v)           (((_v) & 0x7) << 25)
+#define TXGBE_SPI_CMD_ADDR(_v)          (((_v) & 0xFFFFFF))
+#define TXGBE_SPI_DATA                  0x10108
+#define TXGBE_SPI_DATA_BYPASS           ((0x1) << 31)
+#define TXGBE_SPI_DATA_STATUS(_v)       (((_v) & 0xFF) << 16)
+#define TXGBE_SPI_DATA_OP_DONE          ((0x1))
+
+#define TXGBE_SPI_STATUS                0x1010C
+#define TXGBE_SPI_STATUS_OPDONE         ((0x1))
+#define TXGBE_SPI_STATUS_FLASH_BYPASS   ((0x1) << 31)
+
+#define TXGBE_SPI_USR_CMD               0x10110
+#define TXGBE_SPI_CMDCFG0               0x10114
+#define TXGBE_SPI_CMDCFG1               0x10118
+#define TXGBE_SPI_ECC_CTL               0x10130
+#define TXGBE_SPI_ECC_INJ               0x10134
+#define TXGBE_SPI_ECC_ST                0x10138
+#define TXGBE_SPI_ILDR_SWPTR            0x10124
+
+/************************* Port Registers ************************************/
+/* I2C registers */
+#define TXGBE_I2C_CON                   0x14900 /* I2C Control */
+#define TXGBE_I2C_CON_SLAVE_DISABLE     ((1 << 6))
+#define TXGBE_I2C_CON_RESTART_EN        ((1 << 5))
+#define TXGBE_I2C_CON_10BITADDR_MASTER  ((1 << 4))
+#define TXGBE_I2C_CON_10BITADDR_SLAVE   ((1 << 3))
+#define TXGBE_I2C_CON_SPEED(_v)         (((_v) & 0x3) << 1)
+#define TXGBE_I2C_CON_MASTER_MODE       ((1 << 0))
+#define TXGBE_I2C_TAR                   0x14904 /* I2C Target Address */
+#define TXGBE_I2C_DATA_CMD              0x14910 /* I2C Rx/Tx Data Buf and Cmd */
+#define TXGBE_I2C_DATA_CMD_STOP         ((1 << 9))
+#define TXGBE_I2C_DATA_CMD_READ         ((1 << 8) | TXGBE_I2C_DATA_CMD_STOP)
+#define TXGBE_I2C_DATA_CMD_WRITE        ((0 << 8) | TXGBE_I2C_DATA_CMD_STOP)
+#define TXGBE_I2C_SS_SCL_HCNT           0x14914 /* Standard speed I2C Clock SCL High Count */
+#define TXGBE_I2C_SS_SCL_LCNT           0x14918 /* Standard speed I2C Clock SCL Low Count */
+#define TXGBE_I2C_FS_SCL_HCNT           0x1491C /* Fast Mode and Fast Mode Plus I2C Clock SCL High Count */
+#define TXGBE_I2C_FS_SCL_LCNT           0x14920 /* Fast Mode and Fast Mode Plus I2C Clock SCL Low Count */
+#define TXGBE_I2C_HS_SCL_HCNT           0x14924 /* High speed I2C Clock SCL High Count */
+#define TXGBE_I2C_HS_SCL_LCNT           0x14928 /* High speed I2C Clock SCL Low Count */
+#define TXGBE_I2C_INTR_STAT             0x1492C /* I2C Interrupt Status */
+#define TXGBE_I2C_RAW_INTR_STAT         0x14934 /* I2C Raw Interrupt Status */
+#define TXGBE_I2C_INTR_STAT_RX_FULL     ((0x1) << 2)
+#define TXGBE_I2C_INTR_STAT_TX_EMPTY    ((0x1) << 4)
+#define TXGBE_I2C_INTR_MASK             0x14930 /* I2C Interrupt Mask */
+#define TXGBE_I2C_RX_TL                 0x14938 /* I2C Receive FIFO Threshold */
+#define TXGBE_I2C_TX_TL                 0x1493C /* I2C TX FIFO Threshold */
+#define TXGBE_I2C_CLR_INTR              0x14940 /* Clear Combined and Individual Int */
+#define TXGBE_I2C_CLR_RX_UNDER          0x14944 /* Clear RX_UNDER Interrupt */
+#define TXGBE_I2C_CLR_RX_OVER           0x14948 /* Clear RX_OVER Interrupt */
+#define TXGBE_I2C_CLR_TX_OVER           0x1494C /* Clear TX_OVER Interrupt */
+#define TXGBE_I2C_CLR_RD_REQ            0x14950 /* Clear RD_REQ Interrupt */
+#define TXGBE_I2C_CLR_TX_ABRT           0x14954 /* Clear TX_ABRT Interrupt */
+#define TXGBE_I2C_CLR_RX_DONE           0x14958 /* Clear RX_DONE Interrupt */
+#define TXGBE_I2C_CLR_ACTIVITY          0x1495C /* Clear ACTIVITY Interrupt */
+#define TXGBE_I2C_CLR_STOP_DET          0x14960 /* Clear STOP_DET Interrupt */
+#define TXGBE_I2C_CLR_START_DET         0x14964 /* Clear START_DET Interrupt */
+#define TXGBE_I2C_CLR_GEN_CALL          0x14968 /* Clear GEN_CALL Interrupt */
+#define TXGBE_I2C_ENABLE                0x1496C /* I2C Enable */
+#define TXGBE_I2C_STATUS                0x14970 /* I2C Status register */
+#define TXGBE_I2C_STATUS_MST_ACTIVITY   ((1U << 5))
+#define TXGBE_I2C_TXFLR                 0x14974 /* Transmit FIFO Level Reg */
+#define TXGBE_I2C_RXFLR                 0x14978 /* Receive FIFO Level Reg */
+#define TXGBE_I2C_SDA_HOLD              0x1497C /* SDA hold time length reg */
+#define TXGBE_I2C_TX_ABRT_SOURCE        0x14980 /* I2C TX Abort Status Reg */
+#define TXGBE_I2C_SDA_SETUP             0x14994 /* I2C SDA Setup Register */
+#define TXGBE_I2C_ENABLE_STATUS         0x1499C /* I2C Enable Status Register */
+#define TXGBE_I2C_FS_SPKLEN             0x149A0 /* ISS and FS spike suppression limit */
+#define TXGBE_I2C_HS_SPKLEN             0x149A4 /* HS spike suppression limit */
+#define TXGBE_I2C_SCL_STUCK_TIMEOUT     0x149AC /* I2C SCL stuck at low timeout register */
+#define TXGBE_I2C_SDA_STUCK_TIMEOUT     0x149B0 /* I2C SDA Stuck at Low Timeout */
+#define TXGBE_I2C_CLR_SCL_STUCK_DET     0x149B4 /* Clear SCL Stuck at Low Detect Interrupt */
+#define TXGBE_I2C_DEVICE_ID             0x149b8 /* I2C Device ID */
+#define TXGBE_I2C_COMP_PARAM_1          0x149f4 /* Component Parameter Reg */
+#define TXGBE_I2C_COMP_VERSION          0x149f8 /* Component Version ID */
+#define TXGBE_I2C_COMP_TYPE             0x149fc /* DesignWare Component Type Reg */
+
+#define TXGBE_I2C_SLAVE_ADDR            (0xA0 >> 1)
+#define TXGBE_I2C_THERMAL_SENSOR_ADDR   0xF8
+
+
+/* port cfg Registers */
+#define TXGBE_CFG_PORT_CTL              0x14400
+#define TXGBE_CFG_PORT_ST               0x14404
+#define TXGBE_CFG_EX_VTYPE              0x14408
+#define TXGBE_CFG_LED_CTL               0x14424
+#define TXGBE_CFG_VXLAN                 0x14410
+#define TXGBE_CFG_VXLAN_GPE             0x14414
+#define TXGBE_CFG_GENEVE                0x14418
+#define TXGBE_CFG_TEREDO                0x1441C
+#define TXGBE_CFG_TCP_TIME              0x14420
+#define TXGBE_CFG_TAG_TPID(_i)          (0x14430 + ((_i) * 4))
+/* port cfg bit */
+#define TXGBE_CFG_PORT_CTL_PFRSTD       0x00004000U /* Phy Function Reset Done */
+#define TXGBE_CFG_PORT_CTL_D_VLAN       0x00000001U /* double vlan */
+#define TXGBE_CFG_PORT_CTL_ETAG_ETYPE_VLD 0x00000002U
+#define TXGBE_CFG_PORT_CTL_QINQ         0x00000004U
+#define TXGBE_CFG_PORT_CTL_DRV_LOAD     0x00000008U
+#define TXGBE_CFG_PORT_CTL_FORCE_LKUP   0x00000010U /* force link up */
+#define TXGBE_CFG_PORT_CTL_DCB_EN       0x00000400U /* dcb enabled */
+#define TXGBE_CFG_PORT_CTL_NUM_TC_MASK  0x00000800U /* number of TCs */
+#define TXGBE_CFG_PORT_CTL_NUM_TC_4     0x00000000U
+#define TXGBE_CFG_PORT_CTL_NUM_TC_8     0x00000800U
+#define TXGBE_CFG_PORT_CTL_NUM_VT_MASK  0x00003000U /* number of TVs */
+#define TXGBE_CFG_PORT_CTL_NUM_VT_NONE  0x00000000U
+#define TXGBE_CFG_PORT_CTL_NUM_VT_16    0x00001000U
+#define TXGBE_CFG_PORT_CTL_NUM_VT_32    0x00002000U
+#define TXGBE_CFG_PORT_CTL_NUM_VT_64    0x00003000U
+/* Status Bit */
+#define TXGBE_CFG_PORT_ST_LINK_UP       0x00000001U
+#define TXGBE_CFG_PORT_ST_LINK_10G      0x00000002U
+#define TXGBE_CFG_PORT_ST_LINK_1G       0x00000004U
+#define TXGBE_CFG_PORT_ST_LINK_100M     0x00000008U
+#define TXGBE_CFG_PORT_ST_LAN_ID(_r)    ((0x00000100U & (_r)) >> 8)
+#define TXGBE_LINK_UP_TIME              90
+/* LED CTL Bit */
+#define TXGBE_CFG_LED_CTL_LINK_BSY_SEL  0x00000010U
+#define TXGBE_CFG_LED_CTL_LINK_100M_SEL 0x00000008U
+#define TXGBE_CFG_LED_CTL_LINK_1G_SEL   0x00000004U
+#define TXGBE_CFG_LED_CTL_LINK_10G_SEL  0x00000002U
+#define TXGBE_CFG_LED_CTL_LINK_UP_SEL   0x00000001U
+#define TXGBE_CFG_LED_CTL_LINK_OD_SHIFT 16
+/* LED modes */
+#define TXGBE_LED_LINK_UP               TXGBE_CFG_LED_CTL_LINK_UP_SEL
+#define TXGBE_LED_LINK_10G              TXGBE_CFG_LED_CTL_LINK_10G_SEL
+#define TXGBE_LED_LINK_ACTIVE           TXGBE_CFG_LED_CTL_LINK_BSY_SEL
+#define TXGBE_LED_LINK_1G               TXGBE_CFG_LED_CTL_LINK_1G_SEL
+#define TXGBE_LED_LINK_100M             TXGBE_CFG_LED_CTL_LINK_100M_SEL
+
+/* GPIO Registers */
+#define TXGBE_GPIO_DR                   0x14800
+#define TXGBE_GPIO_DDR                  0x14804
+#define TXGBE_GPIO_CTL                  0x14808
+#define TXGBE_GPIO_INTEN                0x14830
+#define TXGBE_GPIO_INTMASK              0x14834
+#define TXGBE_GPIO_INTTYPE_LEVEL        0x14838
+#define TXGBE_GPIO_INTSTATUS            0x14844
+#define TXGBE_GPIO_EOI                  0x1484C
+/* GPIO bit */
+#define TXGBE_GPIO_DR_0                 0x00000001U /* SDP0 Data Value */
+#define TXGBE_GPIO_DR_1                 0x00000002U /* SDP1 Data Value */
+#define TXGBE_GPIO_DR_2                 0x00000004U /* SDP2 Data Value */
+#define TXGBE_GPIO_DR_3                 0x00000008U /* SDP3 Data Value */
+#define TXGBE_GPIO_DR_4                 0x00000010U /* SDP4 Data Value */
+#define TXGBE_GPIO_DR_5                 0x00000020U /* SDP5 Data Value */
+#define TXGBE_GPIO_DR_6                 0x00000040U /* SDP6 Data Value */
+#define TXGBE_GPIO_DR_7                 0x00000080U /* SDP7 Data Value */
+#define TXGBE_GPIO_DDR_0                0x00000001U /* SDP0 IO direction */
+#define TXGBE_GPIO_DDR_1                0x00000002U /* SDP1 IO direction */
+#define TXGBE_GPIO_DDR_2                0x00000004U /* SDP2 IO direction */
+#define TXGBE_GPIO_DDR_3                0x00000008U /* SDP3 IO direction */
+#define TXGBE_GPIO_DDR_4                0x00000010U /* SDP4 IO direction */
+#define TXGBE_GPIO_DDR_5                0x00000020U /* SDP5 IO direction */
+#define TXGBE_GPIO_DDR_6                0x00000040U /* SDP6 IO direction */
+#define TXGBE_GPIO_DDR_7                0x00000080U /* SDP7 IO direction */
+#define TXGBE_GPIO_CTL_SW_MODE          0x00000000U /* SDP software mode */
+#define TXGBE_GPIO_INTEN_1              0x00000002U /* SDP1 interrupt enable */
+#define TXGBE_GPIO_INTEN_2              0x00000004U /* SDP2 interrupt enable */
+#define TXGBE_GPIO_INTEN_3              0x00000008U /* SDP3 interrupt enable */
+#define TXGBE_GPIO_INTEN_5              0x00000020U /* SDP5 interrupt enable */
+#define TXGBE_GPIO_INTEN_6              0x00000040U /* SDP6 interrupt enable */
+#define TXGBE_GPIO_INTTYPE_LEVEL_2      0x00000004U /* SDP2 interrupt type level */
+#define TXGBE_GPIO_INTTYPE_LEVEL_3      0x00000008U /* SDP3 interrupt type level */
+#define TXGBE_GPIO_INTTYPE_LEVEL_5      0x00000020U /* SDP5 interrupt type level */
+#define TXGBE_GPIO_INTTYPE_LEVEL_6      0x00000040U /* SDP6 interrupt type level */
+#define TXGBE_GPIO_INTSTATUS_1          0x00000002U /* SDP1 interrupt status */
+#define TXGBE_GPIO_INTSTATUS_2          0x00000004U /* SDP2 interrupt status */
+#define TXGBE_GPIO_INTSTATUS_3          0x00000008U /* SDP3 interrupt status */
+#define TXGBE_GPIO_INTSTATUS_5          0x00000020U /* SDP5 interrupt status */
+#define TXGBE_GPIO_INTSTATUS_6          0x00000040U /* SDP6 interrupt status */
+#define TXGBE_GPIO_EOI_2                0x00000004U /* SDP2 interrupt clear */
+#define TXGBE_GPIO_EOI_3                0x00000008U /* SDP3 interrupt clear */
+#define TXGBE_GPIO_EOI_5                0x00000020U /* SDP5 interrupt clear */
+#define TXGBE_GPIO_EOI_6                0x00000040U /* SDP6 interrupt clear */
+
+/* TPH registers */
+#define TXGBE_CFG_TPH_TDESC             0x14F00 /* TPH conf for Tx desc write back */
+#define TXGBE_CFG_TPH_RDESC             0x14F04 /* TPH conf for Rx desc write back */
+#define TXGBE_CFG_TPH_RHDR              0x14F08 /* TPH conf for writing Rx pkt header */
+#define TXGBE_CFG_TPH_RPL               0x14F0C /* TPH conf for payload write access */
+/* TPH bit */
+#define TXGBE_CFG_TPH_TDESC_EN          0x80000000U
+#define TXGBE_CFG_TPH_TDESC_PH_SHIFT    29
+#define TXGBE_CFG_TPH_TDESC_ST_SHIFT    16
+#define TXGBE_CFG_TPH_RDESC_EN          0x80000000U
+#define TXGBE_CFG_TPH_RDESC_PH_SHIFT    29
+#define TXGBE_CFG_TPH_RDESC_ST_SHIFT    16
+#define TXGBE_CFG_TPH_RHDR_EN           0x00008000U
+#define TXGBE_CFG_TPH_RHDR_PH_SHIFT     13
+#define TXGBE_CFG_TPH_RHDR_ST_SHIFT     0
+#define TXGBE_CFG_TPH_RPL_EN            0x80000000U
+#define TXGBE_CFG_TPH_RPL_PH_SHIFT      29
+#define TXGBE_CFG_TPH_RPL_ST_SHIFT      16
+
+/*********************** Transmit DMA registers **************************/
+/* transmit global control */
+#define TXGBE_TDM_CTL                   0x18000
+#define TXGBE_TDM_VF_TE(_i)             (0x18004 + ((_i) * 4))
+#define TXGBE_TDM_PB_THRE(_i)           (0x18020 + ((_i) * 4)) /* 8 of these 0 - 7 */
+#define TXGBE_TDM_LLQ(_i)               (0x18040 + ((_i) * 4)) /* 4 of these (0-3) */
+#define TXGBE_TDM_ETYPE_LB_L            0x18050
+#define TXGBE_TDM_ETYPE_LB_H            0x18054
+#define TXGBE_TDM_ETYPE_AS_L            0x18058
+#define TXGBE_TDM_ETYPE_AS_H            0x1805C
+#define TXGBE_TDM_MAC_AS_L              0x18060
+#define TXGBE_TDM_MAC_AS_H              0x18064
+#define TXGBE_TDM_VLAN_AS_L             0x18070
+#define TXGBE_TDM_VLAN_AS_H             0x18074
+#define TXGBE_TDM_TCP_FLG_L             0x18078
+#define TXGBE_TDM_TCP_FLG_H             0x1807C
+#define TXGBE_TDM_VLAN_INS(_i)          (0x18100 + ((_i) * 4)) /* 64 of these 0 - 63 */
+/* TDM CTL BIT */
+#define TXGBE_TDM_CTL_TE                0x1 /* Transmit Enable */
+#define TXGBE_TDM_CTL_PADDING           0x2 /* Padding byte number for ipsec ESP */
+#define TXGBE_TDM_CTL_VT_SHIFT          16  /* VLAN EtherType */
+/* Per VF Port VLAN insertion rules */
+#define TXGBE_TDM_VLAN_INS_VLANA_DEFAULT 0x40000000U /* Always use default VLAN */
+#define TXGBE_TDM_VLAN_INS_VLANA_NEVER  0x80000000U /* Never insert VLAN tag */
+
+#define TXGBE_TDM_RP_CTL                0x18400
+#define TXGBE_TDM_RP_CTL_RST            ((0x1) << 0)
+#define TXGBE_TDM_RP_CTL_RPEN           ((0x1) << 2)
+#define TXGBE_TDM_RP_CTL_RLEN           ((0x1) << 3)
+#define TXGBE_TDM_RP_IDX                0x1820C
+#define TXGBE_TDM_RP_RATE               0x18404
+#define TXGBE_TDM_RP_RATE_MIN(v)        ((0x3FFF & (v)))
+#define TXGBE_TDM_RP_RATE_MAX(v)        ((0x3FFF & (v)) << 16)
+
+/* qos */
+#define TXGBE_TDM_PBWARB_CTL            0x18200
+#define TXGBE_TDM_PBWARB_CFG(_i)        (0x18220 + ((_i) * 4)) /* 8 of these (0-7) */
+#define TXGBE_TDM_MMW                   0x18208
+#define TXGBE_TDM_VM_CREDIT(_i)         (0x18500 + ((_i) * 4))
+#define TXGBE_TDM_VM_CREDIT_VAL(v)      (0x3FF & (v))
+/* fcoe */
+#define TXGBE_TDM_FC_EOF                0x18384
+#define TXGBE_TDM_FC_SOF                0x18380
+/* etag */
+#define TXGBE_TDM_ETAG_INS(_i)          (0x18700 + ((_i) * 4)) /* 64 of these 0 - 63 */
+/* statistic */
+#define TXGBE_TDM_SEC_DRP               0x18304
+#define TXGBE_TDM_PKT_CNT               0x18308
+#define TXGBE_TDM_OS2BMC_CNT            0x18314
+
+/**************************** Receive DMA registers **************************/
+/* receive control */
+#define TXGBE_RDM_ARB_CTL               0x12000
+#define TXGBE_RDM_VF_RE(_i)             (0x12004 + ((_i) * 4))
+#define TXGBE_RDM_RSC_CTL               0x1200C
+#define TXGBE_RDM_ARB_CFG(_i)           (0x12040 + ((_i) * 4)) /* 8 of these (0-7) */
+#define TXGBE_RDM_PF_QDE(_i)            (0x12080 + ((_i) * 4))
+#define TXGBE_RDM_PF_HIDE(_i)           (0x12090 + ((_i) * 4))
+/* VFRE bitmask */
+#define TXGBE_RDM_VF_RE_ENABLE_ALL      0xFFFFFFFFU
+
+/* FCoE DMA Context Registers */
+#define TXGBE_RDM_FCPTRL                0x12410
+#define TXGBE_RDM_FCPTRH                0x12414
+#define TXGBE_RDM_FCBUF                 0x12418
+#define TXGBE_RDM_FCBUF_VALID           ((0x1)) /* DMA Context Valid */
+#define TXGBE_RDM_FCBUF_SIZE(_v)        (((_v) & 0x3) << 3) /* User Buffer Size */
+#define TXGBE_RDM_FCBUF_COUNT(_v)       (((_v) & 0xFF) << 8) /* Num of User Buf */
+#define TXGBE_RDM_FCBUF_OFFSET(_v)      (((_v) & 0xFFFF) << 16) /* User Buf Offset */
+#define TXGBE_RDM_FCRW                  0x12420
+#define TXGBE_RDM_FCRW_FCSEL(_v)        (((_v) & 0x1FF)) /* FC X_ID: 11 bits */
+#define TXGBE_RDM_FCRW_WE               ((0x1) << 14) /* Write enable */
+#define TXGBE_RDM_FCRW_RE               ((0x1) << 15) /* Read enable */
+#define TXGBE_RDM_FCRW_LASTSIZE(_v)     (((_v) & 0xFFFF) << 16)
+
+/* statistic */
+#define TXGBE_RDM_DRP_PKT               0x12500
+#define TXGBE_RDM_BMC2OS_CNT            0x12510
+
+/***************************** RDB registers *********************************/
+/* Flow Control Registers */
+#define TXGBE_RDB_RFCV(_i)              (0x19200 + ((_i) * 4)) /* 4 of these (0-3) */
+#define TXGBE_RDB_RFCL(_i)              (0x19220 + ((_i) * 4)) /* 8 of these (0-7) */
+#define TXGBE_RDB_RFCH(_i)              (0x19260 + ((_i) * 4)) /* 8 of these (0-7) */
+#define TXGBE_RDB_RFCRT                 0x192A0
+#define TXGBE_RDB_RFCC                  0x192A4
+/* receive packet buffer */
+#define TXGBE_RDB_PB_WRAP               0x19004
+#define TXGBE_RDB_PB_SZ(_i)             (0x19020 + ((_i) * 4))
+#define TXGBE_RDB_PB_CTL                0x19000
+#define TXGBE_RDB_UP2TC                 0x19008
+#define TXGBE_RDB_PB_SZ_SHIFT           10
+#define TXGBE_RDB_PB_SZ_MASK            0x000FFC00U
+/* lli interrupt */
+#define TXGBE_RDB_LLI_THRE              0x19080
+#define TXGBE_RDB_LLI_THRE_SZ(_v)       ((0xFFF & (_v)))
+#define TXGBE_RDB_LLI_THRE_UP(_v)       ((0x7 & (_v)) << 16)
+#define TXGBE_RDB_LLI_THRE_UP_SHIFT     16
+
+/* ring assignment */
+#define TXGBE_RDB_PL_CFG(_i)            (0x19300 + ((_i) * 4))
+#define TXGBE_RDB_RSSTBL(_i)            (0x19400 + ((_i) * 4))
+#define TXGBE_RDB_RSSRK(_i)             (0x19480 + ((_i) * 4))
+#define TXGBE_RDB_RSS_TC                0x194F0
+#define TXGBE_RDB_RA_CTL                0x194F4
+#define TXGBE_RDB_5T_SA(_i)             (0x19600 + ((_i) * 4)) /* Src Addr Q Filter */
+#define TXGBE_RDB_5T_DA(_i)             (0x19800 + ((_i) * 4)) /* Dst Addr Q Filter */
+#define TXGBE_RDB_5T_SDP(_i)            (0x19A00 + ((_i) * 4)) /* Src Dst Addr Q Filter */
+#define TXGBE_RDB_5T_CTL0(_i)           (0x19C00 + ((_i) * 4)) /* Five Tuple Q Filter */
+#define TXGBE_RDB_ETYPE_CLS(_i)         (0x19100 + ((_i) * 4)) /* EType Q Select */
+#define TXGBE_RDB_SYN_CLS               0x19130
+#define TXGBE_RDB_5T_CTL1(_i)           (0x19E00 + ((_i) * 4)) /* 128 of these (0-127) */
+/* Flow Director registers */
+#define TXGBE_RDB_FDIR_CTL              0x19500
+#define TXGBE_RDB_FDIR_HKEY             0x19568
+#define TXGBE_RDB_FDIR_SKEY             0x1956C
+#define TXGBE_RDB_FDIR_DA4_MSK          0x1953C
+#define TXGBE_RDB_FDIR_SA4_MSK          0x19540
+#define TXGBE_RDB_FDIR_TCP_MSK          0x19544
+#define TXGBE_RDB_FDIR_UDP_MSK          0x19548
+#define TXGBE_RDB_FDIR_SCTP_MSK         0x19560
+#define TXGBE_RDB_FDIR_IP6_MSK          0x19574
+#define TXGBE_RDB_FDIR_OTHER_MSK        0x19570
+#define TXGBE_RDB_FDIR_FLEX_CFG(_i)     (0x19580 + ((_i) * 4))
+/* Flow Director Stats registers */
+#define TXGBE_RDB_FDIR_FREE             0x19538
+#define TXGBE_RDB_FDIR_LEN              0x1954C
+#define TXGBE_RDB_FDIR_USE_ST           0x19550
+#define TXGBE_RDB_FDIR_FAIL_ST          0x19554
+#define TXGBE_RDB_FDIR_MATCH            0x19558
+#define TXGBE_RDB_FDIR_MISS             0x1955C
+/* Flow Director Programming registers */
+#define TXGBE_RDB_FDIR_IP6(_i)          (0x1950C + ((_i) * 4)) /* 3 of these (0-2) */
+#define TXGBE_RDB_FDIR_SA               0x19518
+#define TXGBE_RDB_FDIR_DA               0x1951C
+#define TXGBE_RDB_FDIR_PORT             0x19520
+#define TXGBE_RDB_FDIR_FLEX             0x19524
+#define TXGBE_RDB_FDIR_HASH             0x19528
+#define TXGBE_RDB_FDIR_CMD              0x1952C
+/* VM RSS */
+#define TXGBE_RDB_VMRSSRK(_i, _p)       (0x1A000 + ((_i) * 4) + ((_p) * 0x40))
+#define TXGBE_RDB_VMRSSTBL(_i, _p)      (0x1B000 + ((_i) * 4) + ((_p) * 0x40))
+/* FCoE Redirection */
+#define TXGBE_RDB_FCRE_TBL_SIZE         (8) /* Max entries in FCRETA */
+#define TXGBE_RDB_FCRE_CTL              0x19140
+#define TXGBE_RDB_FCRE_CTL_ENA          ((0x1)) /* FCoE Redir Table Enable */
+#define TXGBE_RDB_FCRE_TBL(_i)          (0x19160 + ((_i) * 4))
+#define TXGBE_RDB_FCRE_TBL_RING(_v)     (((_v) & 0x7F)) /* output queue number */
+/* statistic */
+#define TXGBE_RDB_MPCNT(_i)             (0x19040 + ((_i) * 4)) /* 8 of 3FA0-3FBC */
+#define TXGBE_RDB_LXONTXC               0x1921C
+#define TXGBE_RDB_LXOFFTXC              0x19218
+#define TXGBE_RDB_PXON2OFFCNT(_i)       (0x19280 + ((_i) * 4)) /* 8 of these */
+#define TXGBE_RDB_PXONTXC(_i)           (0x192E0 + ((_i) * 4)) /* 8 of 3F00-3F1C */
+#define TXGBE_RDB_PXOFFTXC(_i)          (0x192C0 + ((_i) * 4)) /* 8 of 3F20-3F3C */
+#define TXGBE_RDB_PFCMACDAL             0x19210
+#define TXGBE_RDB_PFCMACDAH             0x19214
+#define TXGBE_RDB_TXSWERR               0x1906C
+#define TXGBE_RDB_TXSWERR_TB_FREE       0x3FF
+/* rdb_pl_cfg reg mask */
+#define TXGBE_RDB_PL_CFG_L4HDR          0x2
+#define TXGBE_RDB_PL_CFG_L3HDR          0x4
+#define TXGBE_RDB_PL_CFG_L2HDR          0x8
+#define TXGBE_RDB_PL_CFG_TUN_OUTER_L2HDR 0x20
+#define TXGBE_RDB_PL_CFG_TUN_TUNHDR     0x10
+#define TXGBE_RDB_PL_CFG_RSS_PL_MASK    0x7
+#define TXGBE_RDB_PL_CFG_RSS_PL_SHIFT   29
+/* RQTC Bit Masks and Shifts */
+#define TXGBE_RDB_RSS_TC_SHIFT_TC(_i)   ((_i) * 4)
+#define TXGBE_RDB_RSS_TC_TC0_MASK       (0x7 << 0)
+#define TXGBE_RDB_RSS_TC_TC1_MASK       (0x7 << 4)
+#define TXGBE_RDB_RSS_TC_TC2_MASK       (0x7 << 8)
+#define TXGBE_RDB_RSS_TC_TC3_MASK       (0x7 << 12)
+#define TXGBE_RDB_RSS_TC_TC4_MASK       (0x7 << 16)
+#define TXGBE_RDB_RSS_TC_TC5_MASK       (0x7 << 20)
+#define TXGBE_RDB_RSS_TC_TC6_MASK       (0x7 << 24)
+#define TXGBE_RDB_RSS_TC_TC7_MASK       (0x7 << 28)
+/* Packet Buffer Initialization */
+#define TXGBE_MAX_PACKET_BUFFERS        8
+#define TXGBE_RDB_PB_SZ_48KB            0x00000030U /* 48KB Packet Buffer */
+#define TXGBE_RDB_PB_SZ_64KB            0x00000040U /* 64KB Packet Buffer */
+#define TXGBE_RDB_PB_SZ_80KB            0x00000050U /* 80KB Packet Buffer */
+#define TXGBE_RDB_PB_SZ_128KB           0x00000080U /* 128KB Packet Buffer */
+#define TXGBE_RDB_PB_SZ_MAX             0x00000200U /* 512KB Packet Buffer */
+
+
+/* Packet buffer allocation strategies */
+enum {
+	PBA_STRATEGY_EQUAL = 0, /* Distribute PB space equally */
+#define PBA_STRATEGY_EQUAL PBA_STRATEGY_EQUAL
+	PBA_STRATEGY_WEIGHTED = 1, /* Weight front half of TCs */
+#define PBA_STRATEGY_WEIGHTED PBA_STRATEGY_WEIGHTED
+};
+
+
+/* FCRTL Bit Masks */
+#define TXGBE_RDB_RFCL_XONE             0x80000000U /* XON enable */
+#define TXGBE_RDB_RFCH_XOFFE            0x80000000U /* Packet buffer fc enable */
+/* FCCFG Bit Masks */
+#define TXGBE_RDB_RFCC_RFCE_802_3X      0x00000008U /* Tx link FC enable */
+#define TXGBE_RDB_RFCC_RFCE_PRIORITY    0x00000010U /* Tx priority FC enable */
+
+/* Immediate Interrupt Rx (A.K.A. Low Latency Interrupt) */
+#define TXGBE_RDB_5T_CTL1_SIZE_BP       0x00001000U /* Packet size bypass */
+#define TXGBE_RDB_5T_CTL1_LLI           0x00100000U /* Enables low latency Int */
+#define TXGBE_RDB_LLI_THRE_PRIORITY_MASK 0x00070000U /* VLAN priority mask */
+#define TXGBE_RDB_LLI_THRE_PRIORITY_EN  0x00080000U /* VLAN priority enable */
+#define TXGBE_RDB_LLI_THRE_CMN_EN       0x00100000U /* cmn packet received */
+
+#define TXGBE_MAX_RDB_5T_CTL0_FILTERS   128
+#define TXGBE_RDB_5T_CTL0_PROTOCOL_MASK 0x00000003U
+#define TXGBE_RDB_5T_CTL0_PROTOCOL_TCP  0x00000000U
+#define TXGBE_RDB_5T_CTL0_PROTOCOL_UDP  0x00000001U
+#define TXGBE_RDB_5T_CTL0_PROTOCOL_SCTP 2
+#define TXGBE_RDB_5T_CTL0_PRIORITY_MASK 0x00000007U
+#define TXGBE_RDB_5T_CTL0_PRIORITY_SHIFT 2
+#define TXGBE_RDB_5T_CTL0_POOL_MASK     0x0000003FU
+#define TXGBE_RDB_5T_CTL0_POOL_SHIFT    8
+#define TXGBE_RDB_5T_CTL0_5TUPLE_MASK_MASK   0x0000001FU
+#define TXGBE_RDB_5T_CTL0_5TUPLE_MASK_SHIFT  25
+#define TXGBE_RDB_5T_CTL0_SOURCE_ADDR_MASK   0x1E
+#define TXGBE_RDB_5T_CTL0_DEST_ADDR_MASK     0x1D
+#define TXGBE_RDB_5T_CTL0_SOURCE_PORT_MASK   0x1B
+#define TXGBE_RDB_5T_CTL0_DEST_PORT_MASK     0x17
+#define TXGBE_RDB_5T_CTL0_PROTOCOL_COMP_MASK 0x0F
+#define TXGBE_RDB_5T_CTL0_POOL_MASK_EN  0x40000000U
+#define TXGBE_RDB_5T_CTL0_QUEUE_ENABLE  0x80000000U
+
+#define TXGBE_RDB_ETYPE_CLS_RX_QUEUE    0x007F0000U /* bits 22:16 */
+#define TXGBE_RDB_ETYPE_CLS_RX_QUEUE_SHIFT 16
+#define TXGBE_RDB_ETYPE_CLS_LLI         0x20000000U /* bit 29 */
+#define TXGBE_RDB_ETYPE_CLS_QUEUE_EN    0x80000000U /* bit 31 */
+
+/* Receive Config masks */
+#define TXGBE_RDB_PB_CTL_RXEN           (0x80000000) /* Enable Receiver */
+#define TXGBE_RDB_PB_CTL_DISABLED       0x1
+
+#define TXGBE_RDB_RA_CTL_RSS_EN         0x00000004U /* RSS Enable */
+#define TXGBE_RDB_RA_CTL_RSS_MASK       0xFFFF0000U
+#define TXGBE_RDB_RA_CTL_RSS_IPV4_TCP   0x00010000U
+#define TXGBE_RDB_RA_CTL_RSS_IPV4       0x00020000U
+#define TXGBE_RDB_RA_CTL_RSS_IPV6       0x00100000U
+#define TXGBE_RDB_RA_CTL_RSS_IPV6_TCP   0x00200000U
+#define TXGBE_RDB_RA_CTL_RSS_IPV4_UDP   0x00400000U
+#define TXGBE_RDB_RA_CTL_RSS_IPV6_UDP   0x00800000U
+
+enum txgbe_fdir_pballoc_type {
+	TXGBE_FDIR_PBALLOC_NONE = 0,
+	TXGBE_FDIR_PBALLOC_64K  = 1,
+	TXGBE_FDIR_PBALLOC_128K = 2,
+	TXGBE_FDIR_PBALLOC_256K = 3,
+};
+
+/* Flow Director register values */
+#define TXGBE_RDB_FDIR_CTL_PBALLOC_64K  0x00000001U
+#define TXGBE_RDB_FDIR_CTL_PBALLOC_128K 0x00000002U
+#define TXGBE_RDB_FDIR_CTL_PBALLOC_256K 0x00000003U
+#define TXGBE_RDB_FDIR_CTL_INIT_DONE    0x00000008U
+#define TXGBE_RDB_FDIR_CTL_PERFECT_MATCH 0x00000010U
+#define TXGBE_RDB_FDIR_CTL_REPORT_STATUS 0x00000020U
+#define TXGBE_RDB_FDIR_CTL_REPORT_STATUS_ALWAYS 0x00000080U
+#define TXGBE_RDB_FDIR_CTL_DROP_Q_SHIFT 8
+#define TXGBE_RDB_FDIR_CTL_FILTERMODE_SHIFT 21
+#define TXGBE_RDB_FDIR_CTL_MAX_LENGTH_SHIFT 24
+#define TXGBE_RDB_FDIR_CTL_HASH_BITS_SHIFT 20
+#define TXGBE_RDB_FDIR_CTL_FULL_THRESH_MASK 0xF0000000U
+#define TXGBE_RDB_FDIR_CTL_FULL_THRESH_SHIFT 28
+
+
+#define TXGBE_RDB_FDIR_TCP_MSK_DPORTM_SHIFT 16
+#define TXGBE_RDB_FDIR_UDP_MSK_DPORTM_SHIFT 16
+#define TXGBE_RDB_FDIR_IP6_MSK_DIPM_SHIFT   16
+#define TXGBE_RDB_FDIR_OTHER_MSK_POOL       0x00000004U
+#define TXGBE_RDB_FDIR_OTHER_MSK_L4P        0x00000008U
+#define TXGBE_RDB_FDIR_OTHER_MSK_L3P        0x00000010U
+#define TXGBE_RDB_FDIR_OTHER_MSK_TUN_TYPE   0x00000020U
+#define TXGBE_RDB_FDIR_OTHER_MSK_TUN_OUTIP  0x00000040U
+#define TXGBE_RDB_FDIR_OTHER_MSK_TUN        0x00000080U
+
+#define TXGBE_RDB_FDIR_FLEX_CFG_BASE_MAC        0x00000000U
+#define TXGBE_RDB_FDIR_FLEX_CFG_BASE_IP         0x00000001U
+#define TXGBE_RDB_FDIR_FLEX_CFG_BASE_L4_HDR     0x00000002U
+#define TXGBE_RDB_FDIR_FLEX_CFG_BASE_L4_PAYLOAD 0x00000003U
+#define TXGBE_RDB_FDIR_FLEX_CFG_BASE_MSK        0x00000003U
+#define TXGBE_RDB_FDIR_FLEX_CFG_MSK             0x00000004U
+#define TXGBE_RDB_FDIR_FLEX_CFG_OFST            0x000000F8U
+#define TXGBE_RDB_FDIR_FLEX_CFG_OFST_SHIFT      3
+#define TXGBE_RDB_FDIR_FLEX_CFG_VM_SHIFT        8
+
+#define TXGBE_RDB_FDIR_PORT_DESTINATION_SHIFT   16
+#define TXGBE_RDB_FDIR_FLEX_FLEX_SHIFT          16
+#define TXGBE_RDB_FDIR_HASH_BUCKET_VALID_SHIFT  15
+#define TXGBE_RDB_FDIR_HASH_SIG_SW_INDEX_SHIFT  16
+
+#define TXGBE_RDB_FDIR_CMD_CMD_MASK             0x00000003U
+#define TXGBE_RDB_FDIR_CMD_CMD_ADD_FLOW         0x00000001U
+#define TXGBE_RDB_FDIR_CMD_CMD_REMOVE_FLOW      0x00000002U
+#define TXGBE_RDB_FDIR_CMD_CMD_QUERY_REM_FILT   0x00000003U
+#define TXGBE_RDB_FDIR_CMD_FILTER_VALID         0x00000004U
+#define TXGBE_RDB_FDIR_CMD_FILTER_UPDATE        0x00000008U
+#define TXGBE_RDB_FDIR_CMD_IPv6DMATCH           0x00000010U
+#define TXGBE_RDB_FDIR_CMD_L4TYPE_UDP           0x00000020U
+#define TXGBE_RDB_FDIR_CMD_L4TYPE_TCP           0x00000040U
+#define TXGBE_RDB_FDIR_CMD_L4TYPE_SCTP          0x00000060U
+#define TXGBE_RDB_FDIR_CMD_IPV6                 0x00000080U
+#define TXGBE_RDB_FDIR_CMD_CLEARHT              0x00000100U
+#define TXGBE_RDB_FDIR_CMD_DROP                 0x00000200U
+#define TXGBE_RDB_FDIR_CMD_INT                  0x00000400U
+#define TXGBE_RDB_FDIR_CMD_LAST                 0x00000800U
+#define TXGBE_RDB_FDIR_CMD_COLLISION            0x00001000U
+#define TXGBE_RDB_FDIR_CMD_QUEUE_EN             0x00008000U
+#define TXGBE_RDB_FDIR_CMD_FLOW_TYPE_SHIFT      5
+#define TXGBE_RDB_FDIR_CMD_RX_QUEUE_SHIFT       16
+#define TXGBE_RDB_FDIR_CMD_TUNNEL_FILTER_SHIFT  23
+#define TXGBE_RDB_FDIR_CMD_VT_POOL_SHIFT        24
+#define TXGBE_RDB_FDIR_INIT_DONE_POLL           10
+#define TXGBE_RDB_FDIR_CMD_CMD_POLL             10
+#define TXGBE_RDB_FDIR_CMD_TUNNEL_FILTER        0x00800000U
+#define TXGBE_RDB_FDIR_DROP_QUEUE               127
+#define TXGBE_FDIR_INIT_DONE_POLL               10
+
+/******************************* PSR Registers *******************************/
+/* psr control */
+#define TXGBE_PSR_CTL                   0x15000
+#define TXGBE_PSR_VLAN_CTL              0x15088
+#define TXGBE_PSR_VM_CTL                0x151B0
+/* Header split receive */
+#define TXGBE_PSR_CTL_SW_EN             0x00040000U
+#define TXGBE_PSR_CTL_RSC_DIS           0x00010000U
+#define TXGBE_PSR_CTL_RSC_ACK           0x00020000U
+#define TXGBE_PSR_CTL_PCSD              0x00002000U
+#define TXGBE_PSR_CTL_IPPCSE            0x00001000U
+#define TXGBE_PSR_CTL_BAM               0x00000400U
+#define TXGBE_PSR_CTL_UPE               0x00000200U
+#define TXGBE_PSR_CTL_MPE               0x00000100U
+#define TXGBE_PSR_CTL_MFE               0x00000080U
+#define TXGBE_PSR_CTL_MO                0x00000060U
+#define TXGBE_PSR_CTL_TPE               0x00000010U
+#define TXGBE_PSR_CTL_MO_SHIFT          5
+/* VT_CTL bitmasks */
+#define TXGBE_PSR_VM_CTL_DIS_DEFPL      0x20000000U /* disable default pool */
+#define TXGBE_PSR_VM_CTL_REPLEN         0x40000000U /* replication enabled */
+#define TXGBE_PSR_VM_CTL_POOL_SHIFT     7
+#define TXGBE_PSR_VM_CTL_POOL_MASK      (0x3F << TXGBE_PSR_VM_CTL_POOL_SHIFT)
+/* VLAN Control Bit Masks */
+#define TXGBE_PSR_VLAN_CTL_VET          0x0000FFFFU /* bits 0-15 */
+#define TXGBE_PSR_VLAN_CTL_CFI          0x10000000U /* bit 28 */
+#define TXGBE_PSR_VLAN_CTL_CFIEN        0x20000000U /* bit 29 */
+#define TXGBE_PSR_VLAN_CTL_VFE          0x40000000U /* bit 30 */
+
+/* vm L2 control */
+#define TXGBE_PSR_VM_L2CTL(_i)          (0x15600 + ((_i) * 4))
+/* VMOLR bitmasks */
+#define TXGBE_PSR_VM_L2CTL_LBDIS        0x00000002U /* disable loopback */
+#define TXGBE_PSR_VM_L2CTL_LLB          0x00000004U /* local pool loopback */
+#define TXGBE_PSR_VM_L2CTL_UPE          0x00000010U /* unicast promiscuous */
+#define TXGBE_PSR_VM_L2CTL_TPE          0x00000020U /* ETAG promiscuous */
+#define TXGBE_PSR_VM_L2CTL_VACC         0x00000040U /* accept nomatched vlan */
+#define TXGBE_PSR_VM_L2CTL_VPE          0x00000080U /* vlan promiscuous mode */
+#define TXGBE_PSR_VM_L2CTL_AUPE         0x00000100U /* accept untagged packets */
+#define TXGBE_PSR_VM_L2CTL_ROMPE        0x00000200U /* accept packets in MTA tbl */
+#define TXGBE_PSR_VM_L2CTL_ROPE         0x00000400U /* accept packets in UC tbl */
+#define TXGBE_PSR_VM_L2CTL_BAM          0x00000800U /* accept broadcast packets */
+#define TXGBE_PSR_VM_L2CTL_MPE          0x00001000U /* multicast promiscuous */
+
+/* etype switcher 1st stage */
+#define TXGBE_PSR_ETYPE_SWC(_i)         (0x15128 + ((_i) * 4)) /* EType Queue Filter */
+/* ETYPE Queue Filter/Select Bit Masks */
+#define TXGBE_MAX_PSR_ETYPE_SWC_FILTERS 8
+#define TXGBE_PSR_ETYPE_SWC_FCOE        0x08000000U /* bit 27 */
+#define TXGBE_PSR_ETYPE_SWC_TX_ANTISPOOF 0x20000000U /* bit 29 */
+#define TXGBE_PSR_ETYPE_SWC_1588        0x40000000U /* bit 30 */
+#define TXGBE_PSR_ETYPE_SWC_FILTER_EN   0x80000000U /* bit 31 */
+#define TXGBE_PSR_ETYPE_SWC_POOL_ENABLE (1 << 26) /* bit 26 */
+#define TXGBE_PSR_ETYPE_SWC_POOL_SHIFT  20
+/*
+ * ETQF filter list: one static filter per filter consumer. This is
+ * to avoid filter collisions later. Add new filters
+ * here!!
+ *
+ * Current filters:
+ *    EAPOL 802.1x (0x888e): Filter 0
+ *    FCoE (0x8906):         Filter 2
+ *    1588 (0x88f7):         Filter 3
+ *    FIP  (0x8914):         Filter 4
+ *    LLDP (0x88CC):         Filter 5
+ *    LACP (0x8809):         Filter 6
+ *    FC   (0x8808):         Filter 7
+ */
+#define TXGBE_PSR_ETYPE_SWC_FILTER_EAPOL 0
+#define TXGBE_PSR_ETYPE_SWC_FILTER_FCOE  2
+#define TXGBE_PSR_ETYPE_SWC_FILTER_1588  3
+#define TXGBE_PSR_ETYPE_SWC_FILTER_FIP   4
+#define TXGBE_PSR_ETYPE_SWC_FILTER_LLDP  5
+#define TXGBE_PSR_ETYPE_SWC_FILTER_LACP  6
+#define TXGBE_PSR_ETYPE_SWC_FILTER_FC    7
+
+/* mcast/ucast overflow tbl */
+#define TXGBE_PSR_MC_TBL(_i)            (0x15200 + ((_i) * 4))
+#define TXGBE_PSR_UC_TBL(_i)            (0x15400 + ((_i) * 4))
+
+/* vlan tbl */
+#define TXGBE_PSR_VLAN_TBL(_i)          (0x16000 + ((_i) * 4))
+
+/* mac switcher */
+#define TXGBE_PSR_MAC_SWC_AD_L          0x16200
+#define TXGBE_PSR_MAC_SWC_AD_H          0x16204
+#define TXGBE_PSR_MAC_SWC_VM_L          0x16208
+#define TXGBE_PSR_MAC_SWC_VM_H          0x1620C
+#define TXGBE_PSR_MAC_SWC_IDX           0x16210
+/* RAH */
+#define TXGBE_PSR_MAC_SWC_AD_H_AD(v)
(((v) & 0xFFFF)) +#define TXGBE_PSR_MAC_SWC_AD_H_ADTYPE(v) (((v) & 0x1) << 30) +#define TXGBE_PSR_MAC_SWC_AD_H_AV 0x80000000U +#define TXGBE_CLEAR_VMDQ_ALL 0xFFFFFFFFU + +/* vlan switch */ +#define TXGBE_PSR_VLAN_SWC 0x16220 +#define TXGBE_PSR_VLAN_SWC_VM_L 0x16224 +#define TXGBE_PSR_VLAN_SWC_VM_H 0x16228 +#define TXGBE_PSR_VLAN_SWC_IDX 0x16230 /* 64 vlan entries */ +/* VLAN pool filtering masks */ +#define TXGBE_PSR_VLAN_SWC_VIEN 0x80000000U /* filter is valid */ +#define TXGBE_PSR_VLAN_SWC_ENTRIES 64 +#define TXGBE_PSR_VLAN_SWC_VLANID_MASK 0x00000FFFU +#define TXGBE_ETHERNET_IEEE_VLAN_TYPE 0x8100 /* 802.1q protocol */ + +/* cloud switch */ +#define TXGBE_PSR_CL_SWC_DST0 0x16240 +#define TXGBE_PSR_CL_SWC_DST1 0x16244 +#define TXGBE_PSR_CL_SWC_DST2 0x16248 +#define TXGBE_PSR_CL_SWC_DST3 0x1624c +#define TXGBE_PSR_CL_SWC_KEY 0x16250 +#define TXGBE_PSR_CL_SWC_CTL 0x16254 +#define TXGBE_PSR_CL_SWC_VM_L 0x16258 +#define TXGBE_PSR_CL_SWC_VM_H 0x1625c +#define TXGBE_PSR_CL_SWC_IDX 0x16260 + +#define TXGBE_PSR_CL_SWC_CTL_VLD 0x80000000U +#define TXGBE_PSR_CL_SWC_CTL_DST_MSK 0x00000002U +#define TXGBE_PSR_CL_SWC_CTL_KEY_MSK 0x00000001U + + +/* FCoE SOF/EOF */ +#define TXGBE_PSR_FC_EOF 0x15158 +#define TXGBE_PSR_FC_SOF 0x151F8 +/* FCoE Filter Context Registers */ +#define TXGBE_PSR_FC_FLT_CTXT 0x15108 +#define TXGBE_PSR_FC_FLT_CTXT_VALID ((0x1)) /* Filter Context Valid */ +#define TXGBE_PSR_FC_FLT_CTXT_FIRST ((0x1) << 1) /* Filter First */ +#define TXGBE_PSR_FC_FLT_CTXT_WR ((0x1) << 2) /* Write/Read Context */ +#define TXGBE_PSR_FC_FLT_CTXT_SEQID(_v) (((_v) & 0xFF) << 8) /* Sequence ID */ +#define TXGBE_PSR_FC_FLT_CTXT_SEQCNT(_v) (((_v) & 0xFFFF) << 16) /* Seq Count */ + +#define TXGBE_PSR_FC_FLT_RW 0x15110 +#define TXGBE_PSR_FC_FLT_RW_FCSEL(_v) (((_v) & 0x1FF)) /* FC OX_ID: 11 bits */ +#define TXGBE_PSR_FC_FLT_RW_RVALDT ((0x1) << 13) /* Fast Re-Validation */ +#define TXGBE_PSR_FC_FLT_RW_WE ((0x1) << 14) /* Write Enable */ +#define TXGBE_PSR_FC_FLT_RW_RE ((0x1) << 15) /* 
Read Enable */ + +#define TXGBE_PSR_FC_PARAM 0x151D8 + +/* FCoE Receive Control */ +#define TXGBE_PSR_FC_CTL 0x15100 +#define TXGBE_PSR_FC_CTL_FCOELLI ((0x1)) /* Low latency interrupt */ +#define TXGBE_PSR_FC_CTL_SAVBAD ((0x1) << 1) /* Save Bad Frames */ +#define TXGBE_PSR_FC_CTL_FRSTRDH ((0x1) << 2) /* EN 1st Read Header */ +#define TXGBE_PSR_FC_CTL_LASTSEQH ((0x1) << 3) /* EN Last Header in Seq */ +#define TXGBE_PSR_FC_CTL_ALLH ((0x1) << 4) /* EN All Headers */ +#define TXGBE_PSR_FC_CTL_FRSTSEQH ((0x1) << 5) /* EN 1st Seq. Header */ +#define TXGBE_PSR_FC_CTL_ICRC ((0x1) << 6) /* Ignore Bad FC CRC */ +#define TXGBE_PSR_FC_CTL_FCCRCBO ((0x1) << 7) /* FC CRC Byte Ordering */ +#define TXGBE_PSR_FC_CTL_FCOEVER(_v) (((_v) & 0xF) << 8) /* FCoE Version */ + +/* Management */ +#define TXGBE_PSR_MNG_FIT_CTL 0x15820 +/* Management Bit Fields and Masks */ +#define TXGBE_PSR_MNG_FIT_CTL_MPROXYE 0x40000000U /* Management Proxy Enable*/ +#define TXGBE_PSR_MNG_FIT_CTL_RCV_TCO_EN 0x00020000U /* Rcv TCO packet enable */ +#define TXGBE_PSR_MNG_FIT_CTL_EN_BMC2OS 0x10000000U /* Ena BMC2OS and OS2BMC + *traffic */ +#define TXGBE_PSR_MNG_FIT_CTL_EN_BMC2OS_SHIFT 28 + +#define TXGBE_PSR_MNG_FLEX_SEL 0x1582C +#define TXGBE_PSR_MNG_FLEX_DW_L(_i) (0x15A00 + ((_i) * 16)) +#define TXGBE_PSR_MNG_FLEX_DW_H(_i) (0x15A04 + ((_i) * 16)) +#define TXGBE_PSR_MNG_FLEX_MSK(_i) (0x15A08 + ((_i) * 16)) + +/* mirror */ +#define TXGBE_PSR_MR_CTL(_i) (0x15B00 + ((_i) * 4)) +#define TXGBE_PSR_MR_VLAN_L(_i) (0x15B10 + ((_i) * 8)) +#define TXGBE_PSR_MR_VLAN_H(_i) (0x15B14 + ((_i) * 8)) +#define TXGBE_PSR_MR_VM_L(_i) (0x15B30 + ((_i) * 8)) +#define TXGBE_PSR_MR_VM_H(_i) (0x15B34 + ((_i) * 8)) + +/* 1588 */ +#define TXGBE_PSR_1588_CTL 0x15188 /* Rx Time Sync Control register - RW */ +#define TXGBE_PSR_1588_STMPL 0x151E8 /* Rx timestamp Low - RO */ +#define TXGBE_PSR_1588_STMPH 0x151A4 /* Rx timestamp High - RO */ +#define TXGBE_PSR_1588_ATTRL 0x151A0 /* Rx timestamp attribute low - RO */ +#define 
TXGBE_PSR_1588_ATTRH 0x151A8 /* Rx timestamp attribute high - RO */ +#define TXGBE_PSR_1588_MSGTYPE 0x15120 /* RX message type register low - RW */ +/* 1588 CTL Bit */ +#define TXGBE_PSR_1588_CTL_VALID 0x00000001U /* Rx timestamp valid */ +#define TXGBE_PSR_1588_CTL_TYPE_MASK 0x0000000EU /* Rx type mask */ +#define TXGBE_PSR_1588_CTL_TYPE_L2_V2 0x00 +#define TXGBE_PSR_1588_CTL_TYPE_L4_V1 0x02 +#define TXGBE_PSR_1588_CTL_TYPE_L2_L4_V2 0x04 +#define TXGBE_PSR_1588_CTL_TYPE_EVENT_V2 0x0A +#define TXGBE_PSR_1588_CTL_ENABLED 0x00000010U /* Rx Timestamp enabled*/ +/* 1588 msg type bit */ +#define TXGBE_PSR_1588_MSGTYPE_V1_CTRLT_MASK 0x000000FFU +#define TXGBE_PSR_1588_MSGTYPE_V1_SYNC_MSG 0x00 +#define TXGBE_PSR_1588_MSGTYPE_V1_DELAY_REQ_MSG 0x01 +#define TXGBE_PSR_1588_MSGTYPE_V1_FOLLOWUP_MSG 0x02 +#define TXGBE_PSR_1588_MSGTYPE_V1_DELAY_RESP_MSG 0x03 +#define TXGBE_PSR_1588_MSGTYPE_V1_MGMT_MSG 0x04 +#define TXGBE_PSR_1588_MSGTYPE_V2_MSGID_MASK 0x0000FF00U +#define TXGBE_PSR_1588_MSGTYPE_V2_SYNC_MSG 0x0000 +#define TXGBE_PSR_1588_MSGTYPE_V2_DELAY_REQ_MSG 0x0100 +#define TXGBE_PSR_1588_MSGTYPE_V2_PDELAY_REQ_MSG 0x0200 +#define TXGBE_PSR_1588_MSGTYPE_V2_PDELAY_RESP_MSG 0x0300 +#define TXGBE_PSR_1588_MSGTYPE_V2_FOLLOWUP_MSG 0x0800 +#define TXGBE_PSR_1588_MSGTYPE_V2_DELAY_RESP_MSG 0x0900 +#define TXGBE_PSR_1588_MSGTYPE_V2_PDELAY_FOLLOWUP_MSG 0x0A00 +#define TXGBE_PSR_1588_MSGTYPE_V2_ANNOUNCE_MSG 0x0B00 +#define TXGBE_PSR_1588_MSGTYPE_V2_SIGNALLING_MSG 0x0C00 +#define TXGBE_PSR_1588_MSGTYPE_V2_MGMT_MSG 0x0D00 + +/* Wake up registers */ +#define TXGBE_PSR_WKUP_CTL 0x15B80 +#define TXGBE_PSR_WKUP_IPV 0x15B84 +#define TXGBE_PSR_LAN_FLEX_SEL 0x15B8C +#define TXGBE_PSR_WKUP_IP4TBL(_i) (0x15BC0 + ((_i) * 4)) +#define TXGBE_PSR_WKUP_IP6TBL(_i) (0x15BE0 + ((_i) * 4)) +#define TXGBE_PSR_LAN_FLEX_DW_L(_i) (0x15C00 + ((_i) * 16)) +#define TXGBE_PSR_LAN_FLEX_DW_H(_i) (0x15C04 + ((_i) * 16)) +#define TXGBE_PSR_LAN_FLEX_MSK(_i) (0x15C08 + ((_i) * 16)) +#define TXGBE_PSR_LAN_FLEX_CTL 
0x15CFC +/* Wake Up Filter Control Bit */ +#define TXGBE_PSR_WKUP_CTL_LNKC 0x00000001U /* Link Status Change Wakeup Enable*/ +#define TXGBE_PSR_WKUP_CTL_MAG 0x00000002U /* Magic Packet Wakeup Enable */ +#define TXGBE_PSR_WKUP_CTL_EX 0x00000004U /* Directed Exact Wakeup Enable */ +#define TXGBE_PSR_WKUP_CTL_MC 0x00000008U /* Directed Multicast Wakeup Enable*/ +#define TXGBE_PSR_WKUP_CTL_BC 0x00000010U /* Broadcast Wakeup Enable */ +#define TXGBE_PSR_WKUP_CTL_ARP 0x00000020U /* ARP Request Packet Wakeup Enable*/ +#define TXGBE_PSR_WKUP_CTL_IPV4 0x00000040U /* Directed IPv4 Pkt Wakeup Enable */ +#define TXGBE_PSR_WKUP_CTL_IPV6 0x00000080U /* Directed IPv6 Pkt Wakeup Enable */ +#define TXGBE_PSR_WKUP_CTL_IGNORE_TCO 0x00008000U /* Ignore WakeOn TCO pkts */ +#define TXGBE_PSR_WKUP_CTL_FLX0 0x00010000U /* Flexible Filter 0 Ena */ +#define TXGBE_PSR_WKUP_CTL_FLX1 0x00020000U /* Flexible Filter 1 Ena */ +#define TXGBE_PSR_WKUP_CTL_FLX2 0x00040000U /* Flexible Filter 2 Ena */ +#define TXGBE_PSR_WKUP_CTL_FLX3 0x00080000U /* Flexible Filter 3 Ena */ +#define TXGBE_PSR_WKUP_CTL_FLX4 0x00100000U /* Flexible Filter 4 Ena */ +#define TXGBE_PSR_WKUP_CTL_FLX5 0x00200000U /* Flexible Filter 5 Ena */ +#define TXGBE_PSR_WKUP_CTL_FLX_FILTERS 0x000F0000U /* Mask for 4 flex filters */ +#define TXGBE_PSR_WKUP_CTL_FLX_FILTERS_6 0x003F0000U /* Mask for 6 flex filters*/ +#define TXGBE_PSR_WKUP_CTL_FLX_FILTERS_8 0x00FF0000U /* Mask for 8 flex filters*/ +#define TXGBE_PSR_WKUP_CTL_FW_RST_WK 0x80000000U /* Ena wake on FW reset + * assertion */ +/* Mask for Ext. 
flex filters */ +#define TXGBE_PSR_WKUP_CTL_EXT_FLX_FILTERS 0x00300000U +#define TXGBE_PSR_WKUP_CTL_ALL_FILTERS 0x000F00FFU /* Mask all 4 flex filters*/ +#define TXGBE_PSR_WKUP_CTL_ALL_FILTERS_6 0x003F00FFU /* Mask all 6 flex filters*/ +#define TXGBE_PSR_WKUP_CTL_ALL_FILTERS_8 0x00FF00FFU /* Mask all 8 flex filters*/ +#define TXGBE_PSR_WKUP_CTL_FLX_OFFSET 16 /* Offset to the Flex Filters bits*/ + +#define TXGBE_PSR_MAX_SZ 0x15020 + +/****************************** TDB ******************************************/ +#define TXGBE_TDB_RFCS 0x1CE00 +#define TXGBE_TDB_PB_SZ(_i) (0x1CC00 + ((_i) * 4)) /* 8 of these */ +#define TXGBE_TDB_MNG_TC 0x1CD10 +#define TXGBE_TDB_PRB_CTL 0x17010 +#define TXGBE_TDB_PBRARB_CTL 0x1CD00 +#define TXGBE_TDB_UP2TC 0x1C800 +#define TXGBE_TDB_PBRARB_CFG(_i) (0x1CD20 + ((_i) * 4)) /* 8 of (0-7) */ + +#define TXGBE_TDB_PB_SZ_20KB 0x00005000U /* 20KB Packet Buffer */ +#define TXGBE_TDB_PB_SZ_40KB 0x0000A000U /* 40KB Packet Buffer */ +#define TXGBE_TDB_PB_SZ_MAX 0x00028000U /* 160KB Packet Buffer */ +#define TXGBE_TXPKT_SIZE_MAX 0xA /* Max Tx Packet size */ +#define TXGBE_MAX_PB 8 + +/****************************** TSEC *****************************************/ +/* Security Control Registers */ +#define TXGBE_TSC_CTL 0x1D000 +#define TXGBE_TSC_ST 0x1D004 +#define TXGBE_TSC_BUF_AF 0x1D008 +#define TXGBE_TSC_BUF_AE 0x1D00C +#define TXGBE_TSC_PRB_CTL 0x1D010 +#define TXGBE_TSC_MIN_IFG 0x1D020 +/* Security Bit Fields and Masks */ +#define TXGBE_TSC_CTL_SECTX_DIS 0x00000001U +#define TXGBE_TSC_CTL_TX_DIS 0x00000002U +#define TXGBE_TSC_CTL_STORE_FORWARD 0x00000004U +#define TXGBE_TSC_CTL_IV_MSK_EN 0x00000008U +#define TXGBE_TSC_ST_SECTX_RDY 0x00000001U +#define TXGBE_TSC_ST_OFF_DIS 0x00000002U +#define TXGBE_TSC_ST_ECC_TXERR 0x00000004U + +/* LinkSec (MacSec) Registers */ +#define TXGBE_TSC_LSEC_CAP 0x1D200 +#define TXGBE_TSC_LSEC_CTL 0x1D204 +#define TXGBE_TSC_LSEC_SCI_L 0x1D208 +#define TXGBE_TSC_LSEC_SCI_H 0x1D20C +#define TXGBE_TSC_LSEC_SA 
0x1D210 +#define TXGBE_TSC_LSEC_PKTNUM0 0x1D214 +#define TXGBE_TSC_LSEC_PKTNUM1 0x1D218 +#define TXGBE_TSC_LSEC_KEY0(_n) 0x1D21C +#define TXGBE_TSC_LSEC_KEY1(_n) 0x1D22C +#define TXGBE_TSC_LSEC_UNTAG_PKT 0x1D23C +#define TXGBE_TSC_LSEC_ENC_PKT 0x1D240 +#define TXGBE_TSC_LSEC_PROT_PKT 0x1D244 +#define TXGBE_TSC_LSEC_ENC_OCTET 0x1D248 +#define TXGBE_TSC_LSEC_PROT_OCTET 0x1D24C + +/* IpSec Registers */ +#define TXGBE_TSC_IPS_IDX 0x1D100 +#define TXGBE_TSC_IPS_IDX_WT 0x80000000U +#define TXGBE_TSC_IPS_IDX_RD 0x40000000U +#define TXGBE_TSC_IPS_IDX_SD_IDX 0x0U /* */ +#define TXGBE_TSC_IPS_IDX_EN 0x00000001U +#define TXGBE_TSC_IPS_SALT 0x1D104 +#define TXGBE_TSC_IPS_KEY(i) (0x1D108 + ((i) * 4)) + +/* 1588 */ +#define TXGBE_TSC_1588_CTL 0x1D400 /* Tx Time Sync Control reg */ +#define TXGBE_TSC_1588_STMPL 0x1D404 /* Tx timestamp value Low */ +#define TXGBE_TSC_1588_STMPH 0x1D408 /* Tx timestamp value High */ +#define TXGBE_TSC_1588_SYSTIML 0x1D40C /* System time register Low */ +#define TXGBE_TSC_1588_SYSTIMH 0x1D410 /* System time register High */ +#define TXGBE_TSC_1588_INC 0x1D414 /* Increment attributes reg */ +#define TXGBE_TSC_1588_INC_IV(v) (((v) & 0xFFFFFF)) +#define TXGBE_TSC_1588_INC_IP(v) (((v) & 0xFF) << 24) +#define TXGBE_TSC_1588_INC_IVP(v, p) \ + (((v) & 0xFFFFFF) | TXGBE_TSC_1588_INC_IP(p)) + +#define TXGBE_TSC_1588_ADJL 0x1D418 /* Time Adjustment Offset reg Low */ +#define TXGBE_TSC_1588_ADJH 0x1D41C /* Time Adjustment Offset reg High*/ +/* 1588 fields */ +#define TXGBE_TSC_1588_CTL_VALID 0x00000001U /* Tx timestamp valid */ +#define TXGBE_TSC_1588_CTL_ENABLED 0x00000010U /* Tx timestamping enabled */ + + +/********************************* RSEC **************************************/ +/* general rsec */ +#define TXGBE_RSC_CTL 0x17000 +#define TXGBE_RSC_ST 0x17004 +/* general rsec fields */ +#define TXGBE_RSC_CTL_SECRX_DIS 0x00000001U +#define TXGBE_RSC_CTL_RX_DIS 0x00000002U +#define TXGBE_RSC_CTL_CRC_STRIP 0x00000004U +#define TXGBE_RSC_CTL_IV_MSK_EN 
0x00000008U +#define TXGBE_RSC_CTL_SAVE_MAC_ERR 0x00000040U +#define TXGBE_RSC_ST_RSEC_RDY 0x00000001U +#define TXGBE_RSC_ST_RSEC_OFLD_DIS 0x00000002U +#define TXGBE_RSC_ST_ECC_RXERR 0x00000004U + +/* link sec */ +#define TXGBE_RSC_LSEC_CAP 0x17200 +#define TXGBE_RSC_LSEC_CTL 0x17204 +#define TXGBE_RSC_LSEC_SCI_L 0x17208 +#define TXGBE_RSC_LSEC_SCI_H 0x1720C +#define TXGBE_RSC_LSEC_SA0 0x17210 +#define TXGBE_RSC_LSEC_SA1 0x17214 +#define TXGBE_RSC_LSEC_PKNUM0 0x17218 +#define TXGBE_RSC_LSEC_PKNUM1 0x1721C +#define TXGBE_RSC_LSEC_KEY0(_n) 0x17220 +#define TXGBE_RSC_LSEC_KEY1(_n) 0x17230 +#define TXGBE_RSC_LSEC_UNTAG_PKT 0x17240 +#define TXGBE_RSC_LSEC_DEC_OCTET 0x17244 +#define TXGBE_RSC_LSEC_VLD_OCTET 0x17248 +#define TXGBE_RSC_LSEC_BAD_PKT 0x1724C +#define TXGBE_RSC_LSEC_NOSCI_PKT 0x17250 +#define TXGBE_RSC_LSEC_UNSCI_PKT 0x17254 +#define TXGBE_RSC_LSEC_UNCHK_PKT 0x17258 +#define TXGBE_RSC_LSEC_DLY_PKT 0x1725C +#define TXGBE_RSC_LSEC_LATE_PKT 0x17260 +#define TXGBE_RSC_LSEC_OK_PKT(_n) 0x17264 +#define TXGBE_RSC_LSEC_INV_PKT(_n) 0x17274 +#define TXGBE_RSC_LSEC_BADSA_PKT 0x1727C +#define TXGBE_RSC_LSEC_INVSA_PKT 0x17280 + +/* ipsec */ +#define TXGBE_RSC_IPS_IDX 0x17100 +#define TXGBE_RSC_IPS_IDX_WT 0x80000000U +#define TXGBE_RSC_IPS_IDX_RD 0x40000000U +#define TXGBE_RSC_IPS_IDX_TB_IDX 0x0U /* */ +#define TXGBE_RSC_IPS_IDX_TB_IP 0x00000002U +#define TXGBE_RSC_IPS_IDX_TB_SPI 0x00000004U +#define TXGBE_RSC_IPS_IDX_TB_KEY 0x00000006U +#define TXGBE_RSC_IPS_IDX_EN 0x00000001U +#define TXGBE_RSC_IPS_IP(i) (0x17104 + ((i) * 4)) +#define TXGBE_RSC_IPS_SPI 0x17114 +#define TXGBE_RSC_IPS_IP_IDX 0x17118 +#define TXGBE_RSC_IPS_KEY(i) (0x1711C + ((i) * 4)) +#define TXGBE_RSC_IPS_SALT 0x1712C +#define TXGBE_RSC_IPS_MODE 0x17130 +#define TXGBE_RSC_IPS_MODE_IPV6 0x00000010 +#define TXGBE_RSC_IPS_MODE_DEC 0x00000008 +#define TXGBE_RSC_IPS_MODE_ESP 0x00000004 +#define TXGBE_RSC_IPS_MODE_AH 0x00000002 +#define TXGBE_RSC_IPS_MODE_VALID 0x00000001 + 
+/************************************** ETH PHY ******************************/ +#define TXGBE_XPCS_IDA_ADDR 0x13000 +#define TXGBE_XPCS_IDA_DATA 0x13004 +#define TXGBE_ETHPHY_IDA_ADDR 0x13008 +#define TXGBE_ETHPHY_IDA_DATA 0x1300C + +/************************************** MNG ********************************/ +#define TXGBE_MNG_FW_SM 0x1E000 +#define TXGBE_MNG_SW_SM 0x1E004 +#define TXGBE_MNG_SWFW_SYNC 0x1E008 +#define TXGBE_MNG_MBOX 0x1E100 +#define TXGBE_MNG_MBOX_CTL 0x1E044 +#define TXGBE_MNG_OS2BMC_CNT 0x1E094 +#define TXGBE_MNG_BMC2OS_CNT 0x1E090 + +/* Firmware Semaphore Register */ +#define TXGBE_MNG_FW_SM_MODE_MASK 0xE +#define TXGBE_MNG_FW_SM_TS_ENABLED 0x1 +/* SW Semaphore Register bitmasks */ +#define TXGBE_MNG_SW_SM_SM 0x00000001U /* software Semaphore */ + +/* SW_FW_SYNC definitions */ +#define TXGBE_MNG_SWFW_SYNC_SW_PHY 0x0001 +#define TXGBE_MNG_SWFW_SYNC_SW_FLASH 0x0008 +#define TXGBE_MNG_SWFW_SYNC_SW_MB 0x0004 + +#define TXGBE_MNG_MBOX_CTL_SWRDY 0x1 +#define TXGBE_MNG_MBOX_CTL_SWACK 0x2 +#define TXGBE_MNG_MBOX_CTL_FWRDY 0x4 +#define TXGBE_MNG_MBOX_CTL_FWACK 0x8 + +/************************************* ETH MAC *****************************/ +#define TXGBE_MAC_TX_CFG 0x11000 +#define TXGBE_MAC_RX_CFG 0x11004 +#define TXGBE_MAC_PKT_FLT 0x11008 +#define TXGBE_MAC_PKT_FLT_PR (0x1) /* promiscuous mode */ +#define TXGBE_MAC_PKT_FLT_RA (0x80000000) /* receive all */ +#define TXGBE_MAC_WDG_TIMEOUT 0x1100C +#define TXGBE_MAC_RX_FLOW_CTRL 0x11090 +#define TXGBE_MAC_ADDRESS0_HIGH 0x11300 +#define TXGBE_MAC_ADDRESS0_LOW 0x11304 + +#define TXGBE_MAC_TX_CFG_TE 0x00000001U +#define TXGBE_MAC_TX_CFG_SPEED_MASK 0x60000000U +#define TXGBE_MAC_TX_CFG_SPEED_10G 0x00000000U +#define TXGBE_MAC_TX_CFG_SPEED_1G 0x60000000U +#define TXGBE_MAC_RX_CFG_RE 0x00000001U +#define TXGBE_MAC_RX_CFG_JE 0x00000100U +#define TXGBE_MAC_RX_CFG_LM 0x00000400U +#define TXGBE_MAC_WDG_TIMEOUT_PWE 0x00000100U +#define TXGBE_MAC_WDG_TIMEOUT_WTO_MASK 0x0000000FU +#define 
TXGBE_MAC_WDG_TIMEOUT_WTO_DELTA 2 + +#define TXGBE_MAC_RX_FLOW_CTRL_RFE 0x00000001U /* receive fc enable */ +#define TXGBE_MAC_RX_FLOW_CTRL_PFCE 0x00000100U /* pfc enable */ + +#define TXGBE_MSCA 0x11200 +#define TXGBE_MSCA_RA(v) ((0xFFFF & (v))) +#define TXGBE_MSCA_PA(v) ((0x1F & (v)) << 16) +#define TXGBE_MSCA_DA(v) ((0x1F & (v)) << 21) +#define TXGBE_MSCC 0x11204 +#define TXGBE_MSCC_DATA(v) ((0xFFFF & (v))) +#define TXGBE_MSCC_CMD(v) ((0x3 & (v)) << 16) +enum TXGBE_MSCA_CMD_value { + TXGBE_MSCA_CMD_RSV = 0, + TXGBE_MSCA_CMD_WRITE, + TXGBE_MSCA_CMD_POST_READ, + TXGBE_MSCA_CMD_READ, +}; +#define TXGBE_MSCC_SADDR ((0x1U) << 18) +#define TXGBE_MSCC_CR(v) ((0x8U & (v)) << 19) +#define TXGBE_MSCC_BUSY ((0x1U) << 22) + +/* EEE registers */ + +/* statistic */ +#define TXGBE_MAC_LXONRXC 0x11E0C +#define TXGBE_MAC_LXOFFRXC 0x11988 +#define TXGBE_MAC_PXONRXC(_i) (0x11E30 + ((_i) * 4)) /* 8 of these */ +#define TXGBE_MAC_PXOFFRXC 0x119DC +#define TXGBE_RX_BC_FRAMES_GOOD_LOW 0x11918 +#define TXGBE_RX_CRC_ERROR_FRAMES_LOW 0x11928 +#define TXGBE_RX_LEN_ERROR_FRAMES_LOW 0x11978 +#define TXGBE_RX_UNDERSIZE_FRAMES_GOOD 0x11938 +#define TXGBE_RX_OVERSIZE_FRAMES_GOOD 0x1193C +#define TXGBE_RX_FRAME_CNT_GOOD_BAD_LOW 0x11900 +#define TXGBE_TX_FRAME_CNT_GOOD_BAD_LOW 0x1181C +#define TXGBE_TX_MC_FRAMES_GOOD_LOW 0x1182C +#define TXGBE_TX_BC_FRAMES_GOOD_LOW 0x11824 +#define TXGBE_MMC_CONTROL 0x11800 +#define TXGBE_MMC_CONTROL_RSTONRD 0x4 /* reset on read */ +#define TXGBE_MMC_CONTROL_UP 0x700 + + +/********************************* BAR registers ***************************/ +/* Interrupt Registers */ +#define TXGBE_BME_CTL 0x12020 +#define TXGBE_PX_MISC_IC 0x100 +#define TXGBE_PX_MISC_ICS 0x104 +#define TXGBE_PX_MISC_IEN 0x108 +#define TXGBE_PX_MISC_IVAR 0x4FC +#define TXGBE_PX_GPIE 0x118 +#define TXGBE_PX_ISB_ADDR_L 0x160 +#define TXGBE_PX_ISB_ADDR_H 0x164 +#define TXGBE_PX_TCP_TIMER 0x170 +#define TXGBE_PX_ITRSEL 0x180 +#define TXGBE_PX_IC(_i) (0x120 + (_i) * 4) +#define 
TXGBE_PX_ICS(_i) (0x130 + (_i) * 4) +#define TXGBE_PX_IMS(_i) (0x140 + (_i) * 4) +#define TXGBE_PX_IMC(_i) (0x150 + (_i) * 4) +#define TXGBE_PX_IVAR(_i) (0x500 + (_i) * 4) +#define TXGBE_PX_ITR(_i) (0x200 + (_i) * 4) +#define TXGBE_PX_TRANSACTION_PENDING 0x168 +#define TXGBE_PX_INTA 0x110 + +/* Interrupt register bitmasks */ +/* Extended Interrupt Cause Read */ +#define TXGBE_PX_MISC_IC_ETH_LKDN 0x00000100U /* eth link down */ +#define TXGBE_PX_MISC_IC_DEV_RST 0x00000400U /* device reset event */ +#define TXGBE_PX_MISC_IC_TIMESYNC 0x00000800U /* time sync */ +#define TXGBE_PX_MISC_IC_STALL 0x00001000U /* trans or recv path is + * stalled */ +#define TXGBE_PX_MISC_IC_LINKSEC 0x00002000U /* Tx LinkSec require key + * exchange */ +#define TXGBE_PX_MISC_IC_RX_MISS 0x00004000U /* Packet Buffer Overrun */ +#define TXGBE_PX_MISC_IC_FLOW_DIR 0x00008000U /* FDir Exception */ +#define TXGBE_PX_MISC_IC_I2C 0x00010000U /* I2C interrupt */ +#define TXGBE_PX_MISC_IC_ETH_EVENT 0x00020000U /* err reported by MAC except + * eth link down */ +#define TXGBE_PX_MISC_IC_ETH_LK 0x00040000U /* link up */ +#define TXGBE_PX_MISC_IC_ETH_AN 0x00080000U /* link auto-nego done */ +#define TXGBE_PX_MISC_IC_INT_ERR 0x00100000U /* integrity error */ +#define TXGBE_PX_MISC_IC_SPI 0x00200000U /* SPI interface */ +#define TXGBE_PX_MISC_IC_VF_MBOX 0x00800000U /* VF-PF message box */ +#define TXGBE_PX_MISC_IC_GPIO 0x04000000U /* GPIO interrupt */ +#define TXGBE_PX_MISC_IC_PCIE_REQ_ERR 0x08000000U /* pcie request error int */ +#define TXGBE_PX_MISC_IC_OVER_HEAT 0x10000000U /* overheat detection */ +#define TXGBE_PX_MISC_IC_PROBE_MATCH 0x20000000U /* probe match */ +#define TXGBE_PX_MISC_IC_MNG_HOST_MBOX 0x40000000U /* mng mailbox */ +#define TXGBE_PX_MISC_IC_TIMER 0x80000000U /* tcp timer */ + +/* Extended Interrupt Cause Set */ +#define TXGBE_PX_MISC_ICS_ETH_LKDN 0x00000100U +#define TXGBE_PX_MISC_ICS_DEV_RST 0x00000400U +#define TXGBE_PX_MISC_ICS_TIMESYNC 0x00000800U +#define TXGBE_PX_MISC_ICS_STALL 
0x00001000U +#define TXGBE_PX_MISC_ICS_LINKSEC 0x00002000U +#define TXGBE_PX_MISC_ICS_RX_MISS 0x00004000U +#define TXGBE_PX_MISC_ICS_FLOW_DIR 0x00008000U +#define TXGBE_PX_MISC_ICS_I2C 0x00010000U +#define TXGBE_PX_MISC_ICS_ETH_EVENT 0x00020000U +#define TXGBE_PX_MISC_ICS_ETH_LK 0x00040000U +#define TXGBE_PX_MISC_ICS_ETH_AN 0x00080000U +#define TXGBE_PX_MISC_ICS_INT_ERR 0x00100000U +#define TXGBE_PX_MISC_ICS_SPI 0x00200000U +#define TXGBE_PX_MISC_ICS_VF_MBOX 0x00800000U +#define TXGBE_PX_MISC_ICS_GPIO 0x04000000U +#define TXGBE_PX_MISC_ICS_PCIE_REQ_ERR 0x08000000U +#define TXGBE_PX_MISC_ICS_OVER_HEAT 0x10000000U +#define TXGBE_PX_MISC_ICS_PROBE_MATCH 0x20000000U +#define TXGBE_PX_MISC_ICS_MNG_HOST_MBOX 0x40000000U +#define TXGBE_PX_MISC_ICS_TIMER 0x80000000U + +/* Extended Interrupt Enable Set */ +#define TXGBE_PX_MISC_IEN_ETH_LKDN 0x00000100U +#define TXGBE_PX_MISC_IEN_DEV_RST 0x00000400U +#define TXGBE_PX_MISC_IEN_TIMESYNC 0x00000800U +#define TXGBE_PX_MISC_IEN_STALL 0x00001000U +#define TXGBE_PX_MISC_IEN_LINKSEC 0x00002000U +#define TXGBE_PX_MISC_IEN_RX_MISS 0x00004000U +#define TXGBE_PX_MISC_IEN_FLOW_DIR 0x00008000U +#define TXGBE_PX_MISC_IEN_I2C 0x00010000U +#define TXGBE_PX_MISC_IEN_ETH_EVENT 0x00020000U +#define TXGBE_PX_MISC_IEN_ETH_LK 0x00040000U +#define TXGBE_PX_MISC_IEN_ETH_AN 0x00080000U +#define TXGBE_PX_MISC_IEN_INT_ERR 0x00100000U +#define TXGBE_PX_MISC_IEN_SPI 0x00200000U +#define TXGBE_PX_MISC_IEN_VF_MBOX 0x00800000U +#define TXGBE_PX_MISC_IEN_GPIO 0x04000000U +#define TXGBE_PX_MISC_IEN_PCIE_REQ_ERR 0x08000000U +#define TXGBE_PX_MISC_IEN_OVER_HEAT 0x10000000U +#define TXGBE_PX_MISC_IEN_PROBE_MATCH 0x20000000U +#define TXGBE_PX_MISC_IEN_MNG_HOST_MBOX 0x40000000U +#define TXGBE_PX_MISC_IEN_TIMER 0x80000000U + +#define TXGBE_PX_MISC_IEN_MASK ( \ + TXGBE_PX_MISC_IEN_ETH_LKDN| \ + TXGBE_PX_MISC_IEN_DEV_RST | \ + TXGBE_PX_MISC_IEN_ETH_EVENT | \ + TXGBE_PX_MISC_IEN_ETH_LK | \ + TXGBE_PX_MISC_IEN_ETH_AN | \ + TXGBE_PX_MISC_IEN_INT_ERR | \ + 
TXGBE_PX_MISC_IEN_VF_MBOX | \ + TXGBE_PX_MISC_IEN_GPIO | \ + TXGBE_PX_MISC_IEN_MNG_HOST_MBOX | \ + TXGBE_PX_MISC_IEN_STALL | \ + TXGBE_PX_MISC_IEN_PCIE_REQ_ERR | \ + TXGBE_PX_MISC_IEN_TIMER) + +/* General purpose Interrupt Enable */ +#define TXGBE_PX_GPIE_MODEL 0x00000001U +#define TXGBE_PX_GPIE_IMEN 0x00000002U +#define TXGBE_PX_GPIE_LL_INTERVAL 0x000000F0U +#define TXGBE_PX_GPIE_RSC_DELAY 0x00000700U + +/* Interrupt Vector Allocation Registers */ +#define TXGBE_PX_IVAR_REG_NUM 64 +#define TXGBE_PX_IVAR_ALLOC_VAL 0x80 /* Interrupt Allocation valid */ + +#define TXGBE_MAX_INT_RATE 500000 +#define TXGBE_MIN_INT_RATE 980 +#define TXGBE_MAX_EITR 0x00000FF8U +#define TXGBE_MIN_EITR 8 +#define TXGBE_PX_ITR_ITR_INT_MASK 0x00000FF8U +#define TXGBE_PX_ITR_LLI_CREDIT 0x001f0000U +#define TXGBE_PX_ITR_LLI_MOD 0x00008000U +#define TXGBE_PX_ITR_CNT_WDIS 0x80000000U +#define TXGBE_PX_ITR_ITR_CNT 0x0FE00000U + +/* transmit DMA Registers */ +#define TXGBE_PX_TR_BAL(_i) (0x03000 + ((_i) * 0x40)) +#define TXGBE_PX_TR_BAH(_i) (0x03004 + ((_i) * 0x40)) +#define TXGBE_PX_TR_WP(_i) (0x03008 + ((_i) * 0x40)) +#define TXGBE_PX_TR_RP(_i) (0x0300C + ((_i) * 0x40)) +#define TXGBE_PX_TR_CFG(_i) (0x03010 + ((_i) * 0x40)) +/* Transmit Config masks */ +#define TXGBE_PX_TR_CFG_ENABLE (1) /* Ena specific Tx Queue */ +#define TXGBE_PX_TR_CFG_TR_SIZE_SHIFT 1 /* tx desc number per ring */ +#define TXGBE_PX_TR_CFG_SWFLSH (1 << 26) /* Tx Desc. 
wr-bk flushing */ +#define TXGBE_PX_TR_CFG_WTHRESH_SHIFT 16 /* shift to WTHRESH bits */ +#define TXGBE_PX_TR_CFG_THRE_SHIFT 8 + + +#define TXGBE_PX_TR_RPn(q_per_pool, vf_number, vf_q_index) \ + (TXGBE_PX_TR_RP((q_per_pool)*(vf_number) + (vf_q_index))) +#define TXGBE_PX_TR_WPn(q_per_pool, vf_number, vf_q_index) \ + (TXGBE_PX_TR_WP((q_per_pool)*(vf_number) + (vf_q_index))) + +/* Receive DMA Registers */ +#define TXGBE_PX_RR_BAL(_i) (0x01000 + ((_i) * 0x40)) +#define TXGBE_PX_RR_BAH(_i) (0x01004 + ((_i) * 0x40)) +#define TXGBE_PX_RR_WP(_i) (0x01008 + ((_i) * 0x40)) +#define TXGBE_PX_RR_RP(_i) (0x0100C + ((_i) * 0x40)) +#define TXGBE_PX_RR_CFG(_i) (0x01010 + ((_i) * 0x40)) +/* PX_RR_CFG bit definitions */ +#define TXGBE_PX_RR_CFG_RR_SIZE_SHIFT 1 +#define TXGBE_PX_RR_CFG_BSIZEPKT_SHIFT 2 /* so many KBs */ +#define TXGBE_PX_RR_CFG_BSIZEHDRSIZE_SHIFT 6 /* 64byte resolution (>> 6) + * + at bit 8 offset (<< 12) + * = (<< 6) + */ +#define TXGBE_PX_RR_CFG_DROP_EN 0x40000000U +#define TXGBE_PX_RR_CFG_VLAN 0x80000000U +#define TXGBE_PX_RR_CFG_RSC 0x20000000U +#define TXGBE_PX_RR_CFG_CNTAG 0x10000000U +#define TXGBE_PX_RR_CFG_RSC_CNT_MD 0x08000000U +#define TXGBE_PX_RR_CFG_SPLIT_MODE 0x04000000U +#define TXGBE_PX_RR_CFG_STALL 0x02000000U +#define TXGBE_PX_RR_CFG_MAX_RSCBUF_1 0x00000000U +#define TXGBE_PX_RR_CFG_MAX_RSCBUF_4 0x00800000U +#define TXGBE_PX_RR_CFG_MAX_RSCBUF_8 0x01000000U +#define TXGBE_PX_RR_CFG_MAX_RSCBUF_16 0x01800000U +#define TXGBE_PX_RR_CFG_RR_THER 0x00070000U +#define TXGBE_PX_RR_CFG_RR_THER_SHIFT 16 + +#define TXGBE_PX_RR_CFG_RR_HDR_SZ 0x0000F000U +#define TXGBE_PX_RR_CFG_RR_BUF_SZ 0x00000F00U +#define TXGBE_PX_RR_CFG_RR_SZ 0x0000007EU +#define TXGBE_PX_RR_CFG_RR_EN 0x00000001U + +/* statistic */ +#define TXGBE_PX_MPRC(_i) (0x1020 + ((_i) * 64)) +#define TXGBE_VX_GPRC(_i) (0x01014 + (0x40 * (_i))) +#define TXGBE_VX_GPTC(_i) (0x03014 + (0x40 * (_i))) +#define TXGBE_VX_GORC_LSB(_i) (0x01018 + (0x40 * (_i))) +#define TXGBE_VX_GORC_MSB(_i) (0x0101C + (0x40 * 
(_i))) +#define TXGBE_VX_GOTC_LSB(_i) (0x03018 + (0x40 * (_i))) +#define TXGBE_VX_GOTC_MSB(_i) (0x0301C + (0x40 * (_i))) +#define TXGBE_VX_MPRC(_i) (0x01020 + (0x40 * (_i))) + +#define TXGBE_PX_GPRC 0x12504 +#define TXGBE_PX_GPTC 0x18308 + +#define TXGBE_PX_GORC_LSB 0x12508 +#define TXGBE_PX_GORC_MSB 0x1250C + +#define TXGBE_PX_GOTC_LSB 0x1830C +#define TXGBE_PX_GOTC_MSB 0x18310 + +/************************************* Stats registers ************************/ +#define TXGBE_FCCRC 0x15160 /* Num of Good Eth CRC w/ Bad FC CRC */ +#define TXGBE_FCOERPDC 0x12514 /* FCoE Rx Packets Dropped Count */ +#define TXGBE_FCLAST 0x12518 /* FCoE Last Error Count */ +#define TXGBE_FCOEPRC 0x15164 /* Number of FCoE Packets Received */ +#define TXGBE_FCOEDWRC 0x15168 /* Number of FCoE DWords Received */ +#define TXGBE_FCOEPTC 0x18318 /* Number of FCoE Packets Transmitted */ +#define TXGBE_FCOEDWTC 0x1831C /* Number of FCoE DWords Transmitted */ + +/*************************** Flash region definition *************************/ +/* EEC Register */ +#define TXGBE_EEC_SK 0x00000001U /* EEPROM Clock */ +#define TXGBE_EEC_CS 0x00000002U /* EEPROM Chip Select */ +#define TXGBE_EEC_DI 0x00000004U /* EEPROM Data In */ +#define TXGBE_EEC_DO 0x00000008U /* EEPROM Data Out */ +#define TXGBE_EEC_FWE_MASK 0x00000030U /* FLASH Write Enable */ +#define TXGBE_EEC_FWE_DIS 0x00000010U /* Disable FLASH writes */ +#define TXGBE_EEC_FWE_EN 0x00000020U /* Enable FLASH writes */ +#define TXGBE_EEC_FWE_SHIFT 4 +#define TXGBE_EEC_REQ 0x00000040U /* EEPROM Access Request */ +#define TXGBE_EEC_GNT 0x00000080U /* EEPROM Access Grant */ +#define TXGBE_EEC_PRES 0x00000100U /* EEPROM Present */ +#define TXGBE_EEC_ARD 0x00000200U /* EEPROM Auto Read Done */ +#define TXGBE_EEC_FLUP 0x00800000U /* Flash update command */ +#define TXGBE_EEC_SEC1VAL 0x02000000U /* Sector 1 Valid */ +#define TXGBE_EEC_FLUDONE 0x04000000U /* Flash update done */ +/* EEPROM Addressing bits based on type (0-small, 1-large) */ +#define 
TXGBE_EEC_ADDR_SIZE 0x00000400U +#define TXGBE_EEC_SIZE 0x00007800U /* EEPROM Size */ +#define TXGBE_EERD_MAX_ADDR 0x00003FFFU /* EERD alows 14 bits for addr. */ + +#define TXGBE_EEC_SIZE_SHIFT 11 +#define TXGBE_EEPROM_WORD_SIZE_SHIFT 6 +#define TXGBE_EEPROM_OPCODE_BITS 8 + +/* FLA Register */ +#define TXGBE_FLA_LOCKED 0x00000040U + +/* Part Number String Length */ +#define TXGBE_PBANUM_LENGTH 32 + +/* Checksum and EEPROM pointers */ +#define TXGBE_PBANUM_PTR_GUARD 0xFAFA +#define TXGBE_EEPROM_CHECKSUM 0x2F +#define TXGBE_EEPROM_SUM 0xBABA +#define TXGBE_ATLAS0_CONFIG_PTR 0x04 +#define TXGBE_PHY_PTR 0x04 +#define TXGBE_ATLAS1_CONFIG_PTR 0x05 +#define TXGBE_OPTION_ROM_PTR 0x05 +#define TXGBE_PCIE_GENERAL_PTR 0x06 +#define TXGBE_PCIE_CONFIG0_PTR 0x07 +#define TXGBE_PCIE_CONFIG1_PTR 0x08 +#define TXGBE_CORE0_PTR 0x09 +#define TXGBE_CORE1_PTR 0x0A +#define TXGBE_MAC0_PTR 0x0B +#define TXGBE_MAC1_PTR 0x0C +#define TXGBE_CSR0_CONFIG_PTR 0x0D +#define TXGBE_CSR1_CONFIG_PTR 0x0E +#define TXGBE_PCIE_ANALOG_PTR 0x02 +#define TXGBE_SHADOW_RAM_SIZE 0x4000 +#define TXGBE_TXGBE_PCIE_GENERAL_SIZE 0x24 +#define TXGBE_PCIE_CONFIG_SIZE 0x08 +#define TXGBE_EEPROM_LAST_WORD 0x800 +#define TXGBE_FW_PTR 0x0F +#define TXGBE_PBANUM0_PTR 0x05 +#define TXGBE_PBANUM1_PTR 0x06 +#define TXGBE_ALT_MAC_ADDR_PTR 0x37 +#define TXGBE_FREE_SPACE_PTR 0x3E +#define TXGBE_SW_REGION_PTR 0x1C + +#define TXGBE_SAN_MAC_ADDR_PTR 0x18 +#define TXGBE_DEVICE_CAPS 0x1C +#define TXGBE_EEPROM_VERSION_L 0x1D +#define TXGBE_EEPROM_VERSION_H 0x1E +#define TXGBE_ISCSI_BOOT_CONFIG 0x07 + +#define TXGBE_SERIAL_NUMBER_MAC_ADDR 0x11 +#define TXGBE_MAX_MSIX_VECTORS_SAPPHIRE 0x40 + +/* MSI-X capability fields masks */ +#define TXGBE_PCIE_MSIX_TBL_SZ_MASK 0x7FF + +/* Legacy EEPROM word offsets */ +#define TXGBE_ISCSI_BOOT_CAPS 0x0033 +#define TXGBE_ISCSI_SETUP_PORT_0 0x0030 +#define TXGBE_ISCSI_SETUP_PORT_1 0x0034 + +/* EEPROM Commands - SPI */ +#define TXGBE_EEPROM_MAX_RETRY_SPI 5000 /* Max wait 5ms for RDY signal */ 
+#define TXGBE_EEPROM_STATUS_RDY_SPI 0x01 +#define TXGBE_EEPROM_READ_OPCODE_SPI 0x03 /* EEPROM read opcode */ +#define TXGBE_EEPROM_WRITE_OPCODE_SPI 0x02 /* EEPROM write opcode */ +#define TXGBE_EEPROM_A8_OPCODE_SPI 0x08 /* opcode bit-3 = addr bit-8 */ +#define TXGBE_EEPROM_WREN_OPCODE_SPI 0x06 /* EEPROM set Write Ena latch */ +/* EEPROM reset Write Enable latch */ +#define TXGBE_EEPROM_WRDI_OPCODE_SPI 0x04 +#define TXGBE_EEPROM_RDSR_OPCODE_SPI 0x05 /* EEPROM read Status reg */ +#define TXGBE_EEPROM_WRSR_OPCODE_SPI 0x01 /* EEPROM write Status reg */ +#define TXGBE_EEPROM_ERASE4K_OPCODE_SPI 0x20 /* EEPROM ERASE 4KB */ +#define TXGBE_EEPROM_ERASE64K_OPCODE_SPI 0xD8 /* EEPROM ERASE 64KB */ +#define TXGBE_EEPROM_ERASE256_OPCODE_SPI 0xDB /* EEPROM ERASE 256B */ + +/* EEPROM Read Register */ +#define TXGBE_EEPROM_RW_REG_DATA 16 /* data offset in EEPROM read reg */ +#define TXGBE_EEPROM_RW_REG_DONE 2 /* Offset to READ done bit */ +#define TXGBE_EEPROM_RW_REG_START 1 /* First bit to start operation */ +#define TXGBE_EEPROM_RW_ADDR_SHIFT 2 /* Shift to the address bits */ +#define TXGBE_NVM_POLL_WRITE 1 /* Flag for polling for wr complete */ +#define TXGBE_NVM_POLL_READ 0 /* Flag for polling for rd complete */ + +#define NVM_INIT_CTRL_3 0x38 +#define NVM_INIT_CTRL_3_LPLU 0x8 +#define NVM_INIT_CTRL_3_D10GMP_PORT0 0x40 +#define NVM_INIT_CTRL_3_D10GMP_PORT1 0x100 + +#define TXGBE_ETH_LENGTH_OF_ADDRESS 6 + +#define TXGBE_EEPROM_PAGE_SIZE_MAX 128 +#define TXGBE_EEPROM_RD_BUFFER_MAX_COUNT 256 /* words rd in burst */ +#define TXGBE_EEPROM_WR_BUFFER_MAX_COUNT 256 /* words wr in burst */ +#define TXGBE_EEPROM_CTRL_2 1 /* EEPROM CTRL word 2 */ +#define TXGBE_EEPROM_CCD_BIT 2 + +#ifndef TXGBE_EEPROM_GRANT_ATTEMPTS +#define TXGBE_EEPROM_GRANT_ATTEMPTS 1000 /* EEPROM attempts to gain grant */ +#endif + +#ifndef TXGBE_EERD_EEWR_ATTEMPTS +/* Number of 5-microsecond intervals we wait for EERD read and + * EEWR write to complete */ +#define TXGBE_EERD_EEWR_ATTEMPTS 100000 +#endif + +#ifndef 
TXGBE_FLUDONE_ATTEMPTS +/* # attempts we wait for flush update to complete */ +#define TXGBE_FLUDONE_ATTEMPTS 20000 +#endif + +#define TXGBE_PCIE_CTRL2 0x5 /* PCIe Control 2 Offset */ +#define TXGBE_PCIE_CTRL2_DUMMY_ENABLE 0x8 /* Dummy Function Enable */ +#define TXGBE_PCIE_CTRL2_LAN_DISABLE 0x2 /* LAN PCI Disable */ +#define TXGBE_PCIE_CTRL2_DISABLE_SELECT 0x1 /* LAN Disable Select */ + +#define TXGBE_SAN_MAC_ADDR_PORT0_OFFSET 0x0 +#define TXGBE_SAN_MAC_ADDR_PORT1_OFFSET 0x3 +#define TXGBE_DEVICE_CAPS_ALLOW_ANY_SFP 0x1 +#define TXGBE_DEVICE_CAPS_FCOE_OFFLOADS 0x2 +#define TXGBE_FW_LESM_PARAMETERS_PTR 0x2 +#define TXGBE_FW_LESM_STATE_1 0x1 +#define TXGBE_FW_LESM_STATE_ENABLED 0x8000 /* LESM Enable bit */ +#define TXGBE_FW_PASSTHROUGH_PATCH_CONFIG_PTR 0x4 +#define TXGBE_FW_PATCH_VERSION_4 0x7 +#define TXGBE_FCOE_IBA_CAPS_BLK_PTR 0x33 /* iSCSI/FCOE block */ +#define TXGBE_FCOE_IBA_CAPS_FCOE 0x20 /* FCOE flags */ +#define TXGBE_ISCSI_FCOE_BLK_PTR 0x17 /* iSCSI/FCOE block */ +#define TXGBE_ISCSI_FCOE_FLAGS_OFFSET 0x0 /* FCOE flags */ +#define TXGBE_ISCSI_FCOE_FLAGS_ENABLE 0x1 /* FCOE flags enable bit */ +#define TXGBE_ALT_SAN_MAC_ADDR_BLK_PTR 0x17 /* Alt. 
SAN MAC block */ +#define TXGBE_ALT_SAN_MAC_ADDR_CAPS_OFFSET 0x0 /* Alt SAN MAC capability */ +#define TXGBE_ALT_SAN_MAC_ADDR_PORT0_OFFSET 0x1 /* Alt SAN MAC 0 offset */ +#define TXGBE_ALT_SAN_MAC_ADDR_PORT1_OFFSET 0x4 /* Alt SAN MAC 1 offset */ +#define TXGBE_ALT_SAN_MAC_ADDR_WWNN_OFFSET 0x7 /* Alt WWNN prefix offset */ +#define TXGBE_ALT_SAN_MAC_ADDR_WWPN_OFFSET 0x8 /* Alt WWPN prefix offset */ +#define TXGBE_ALT_SAN_MAC_ADDR_CAPS_SANMAC 0x0 /* Alt SAN MAC exists */ +#define TXGBE_ALT_SAN_MAC_ADDR_CAPS_ALTWWN 0x1 /* Alt WWN base exists */ +#define TXGBE_DEVICE_CAPS_WOL_PORT0_1 0x4 /* WoL supported on ports 0 & 1 */ +#define TXGBE_DEVICE_CAPS_WOL_PORT0 0x8 /* WoL supported on port 0 */ +#define TXGBE_DEVICE_CAPS_WOL_MASK 0xC /* Mask for WoL capabilities */ + +/******************************** PCI Bus Info *******************************/ +#define TXGBE_PCI_DEVICE_STATUS 0xAA +#define TXGBE_PCI_DEVICE_STATUS_TRANSACTION_PENDING 0x0020 +#define TXGBE_PCI_LINK_STATUS 0xB2 +#define TXGBE_PCI_DEVICE_CONTROL2 0xC8 +#define TXGBE_PCI_LINK_WIDTH 0x3F0 +#define TXGBE_PCI_LINK_WIDTH_1 0x10 +#define TXGBE_PCI_LINK_WIDTH_2 0x20 +#define TXGBE_PCI_LINK_WIDTH_4 0x40 +#define TXGBE_PCI_LINK_WIDTH_8 0x80 +#define TXGBE_PCI_LINK_SPEED 0xF +#define TXGBE_PCI_LINK_SPEED_2500 0x1 +#define TXGBE_PCI_LINK_SPEED_5000 0x2 +#define TXGBE_PCI_LINK_SPEED_8000 0x3 +#define TXGBE_PCI_HEADER_TYPE_REGISTER 0x0E +#define TXGBE_PCI_HEADER_TYPE_MULTIFUNC 0x80 +#define TXGBE_PCI_DEVICE_CONTROL2_16ms 0x0005 + +#define TXGBE_PCIDEVCTRL2_RELAX_ORDER_OFFSET 4 +#define TXGBE_PCIDEVCTRL2_RELAX_ORDER_MASK \ + (0x0001 << TXGBE_PCIDEVCTRL2_RELAX_ORDER_OFFSET) +#define TXGBE_PCIDEVCTRL2_RELAX_ORDER_ENABLE \ + (0x01 << TXGBE_PCIDEVCTRL2_RELAX_ORDER_OFFSET) + +#define TXGBE_PCIDEVCTRL2_TIMEO_MASK 0xf +#define TXGBE_PCIDEVCTRL2_16_32ms_def 0x0 +#define TXGBE_PCIDEVCTRL2_50_100us 0x1 +#define TXGBE_PCIDEVCTRL2_1_2ms 0x2 +#define TXGBE_PCIDEVCTRL2_16_32ms 0x5 +#define TXGBE_PCIDEVCTRL2_65_130ms 0x6 +#define 
TXGBE_PCIDEVCTRL2_260_520ms 0x9 +#define TXGBE_PCIDEVCTRL2_1_2s 0xa +#define TXGBE_PCIDEVCTRL2_4_8s 0xd +#define TXGBE_PCIDEVCTRL2_17_34s 0xe + + +/******************* Receive Descriptor bit definitions **********************/ +#define TXGBE_RXD_IPSEC_STATUS_SECP 0x00020000U +#define TXGBE_RXD_IPSEC_ERROR_INVALID_PROTOCOL 0x08000000U +#define TXGBE_RXD_IPSEC_ERROR_INVALID_LENGTH 0x10000000U +#define TXGBE_RXD_IPSEC_ERROR_AUTH_FAILED 0x18000000U +#define TXGBE_RXD_IPSEC_ERROR_BIT_MASK 0x18000000U + +#define TXGBE_RXD_NEXTP_MASK 0x000FFFF0U /* Next Descriptor Index */ +#define TXGBE_RXD_NEXTP_SHIFT 0x00000004U +#define TXGBE_RXD_STAT_MASK 0x000fffffU /* Stat/NEXTP: bit 0-19 */ +#define TXGBE_RXD_STAT_DD 0x00000001U /* Done */ +#define TXGBE_RXD_STAT_EOP 0x00000002U /* End of Packet */ +#define TXGBE_RXD_STAT_CLASS_ID_MASK 0x0000001CU +#define TXGBE_RXD_STAT_CLASS_ID_TC_RSS 0x00000000U +#define TXGBE_RXD_STAT_CLASS_ID_FLM 0x00000004U /* FDir Match */ +#define TXGBE_RXD_STAT_CLASS_ID_SYN 0x00000008U +#define TXGBE_RXD_STAT_CLASS_ID_5_TUPLE 0x0000000CU +#define TXGBE_RXD_STAT_CLASS_ID_L2_ETYPE 0x00000010U +#define TXGBE_RXD_STAT_VP 0x00000020U /* IEEE VLAN Pkt */ +#define TXGBE_RXD_STAT_UDPCS 0x00000040U /* UDP xsum calculated */ +#define TXGBE_RXD_STAT_L4CS 0x00000080U /* L4 xsum calculated */ +#define TXGBE_RXD_STAT_IPCS 0x00000100U /* IP xsum calculated */ +#define TXGBE_RXD_STAT_PIF 0x00000200U /* passed in-exact filter */ +#define TXGBE_RXD_STAT_OUTERIPCS 0x00000400U /* Cloud IP xsum calculated*/ +#define TXGBE_RXD_STAT_VEXT 0x00000800U /* 1st VLAN found */ +#define TXGBE_RXD_STAT_LLINT 0x00002000U /* Pkt caused Low Latency + * Int */ +#define TXGBE_RXD_STAT_TS 0x00004000U /* IEEE1588 Time Stamp */ +#define TXGBE_RXD_STAT_SECP 0x00008000U /* Security Processing */ +#define TXGBE_RXD_STAT_LB 0x00010000U /* Loopback Status */ +#define TXGBE_RXD_STAT_FCEOFS 0x00020000U /* FCoE EOF/SOF Stat */ +#define TXGBE_RXD_STAT_FCSTAT 0x000C0000U /* FCoE Pkt Stat */ +#define 
TXGBE_RXD_STAT_FCSTAT_NOMTCH 0x00000000U /* 00: No Ctxt Match */ +#define TXGBE_RXD_STAT_FCSTAT_NODDP 0x00040000U /* 01: Ctxt w/o DDP */ +#define TXGBE_RXD_STAT_FCSTAT_FCPRSP 0x00080000U /* 10: Recv. FCP_RSP */ +#define TXGBE_RXD_STAT_FCSTAT_DDP 0x000C0000U /* 11: Ctxt w/ DDP */ + +#define TXGBE_RXD_ERR_MASK 0xfff00000U /* RDESC.ERRORS mask */ +#define TXGBE_RXD_ERR_SHIFT 20 /* RDESC.ERRORS shift */ +#define TXGBE_RXD_ERR_FCEOFE 0x80000000U /* FCEOFe/IPE */ +#define TXGBE_RXD_ERR_FCERR 0x00700000U /* FCERR/FDIRERR */ +#define TXGBE_RXD_ERR_FDIR_LEN 0x00100000U /* FDIR Length error */ +#define TXGBE_RXD_ERR_FDIR_DROP 0x00200000U /* FDIR Drop error */ +#define TXGBE_RXD_ERR_FDIR_COLL 0x00400000U /* FDIR Collision error */ +#define TXGBE_RXD_ERR_HBO 0x00800000U /*Header Buffer Overflow */ +#define TXGBE_RXD_ERR_OUTERIPER 0x04000000U /* CRC IP Header error */ +#define TXGBE_RXD_ERR_SECERR_MASK 0x18000000U +#define TXGBE_RXD_ERR_RXE 0x20000000U /* Any MAC Error */ +#define TXGBE_RXD_ERR_TCPE 0x40000000U /* TCP/UDP Checksum Error */ +#define TXGBE_RXD_ERR_IPE 0x80000000U /* IP Checksum Error */ + +#define TXGBE_RXDPS_HDRSTAT_HDRSP 0x00008000U +#define TXGBE_RXDPS_HDRSTAT_HDRLEN_MASK 0x000003FFU + +#define TXGBE_RXD_RSSTYPE_MASK 0x0000000FU +#define TXGBE_RXD_TPID_MASK 0x000001C0U +#define TXGBE_RXD_TPID_SHIFT 6 +#define TXGBE_RXD_HDRBUFLEN_MASK 0x00007FE0U +#define TXGBE_RXD_RSCCNT_MASK 0x001E0000U +#define TXGBE_RXD_RSCCNT_SHIFT 17 +#define TXGBE_RXD_HDRBUFLEN_SHIFT 5 +#define TXGBE_RXD_SPLITHEADER_EN 0x00001000U +#define TXGBE_RXD_SPH 0x8000 + +/* RSS Hash results */ +#define TXGBE_RXD_RSSTYPE_NONE 0x00000000U +#define TXGBE_RXD_RSSTYPE_IPV4_TCP 0x00000001U +#define TXGBE_RXD_RSSTYPE_IPV4 0x00000002U +#define TXGBE_RXD_RSSTYPE_IPV6_TCP 0x00000003U +#define TXGBE_RXD_RSSTYPE_IPV4_SCTP 0x00000004U +#define TXGBE_RXD_RSSTYPE_IPV6 0x00000005U +#define TXGBE_RXD_RSSTYPE_IPV6_SCTP 0x00000006U +#define TXGBE_RXD_RSSTYPE_IPV4_UDP 0x00000007U +#define TXGBE_RXD_RSSTYPE_IPV6_UDP 
0x00000008U + +/** + * receive packet type + * PTYPE:8 = TUN:2 + PKT:2 + TYP:4 + **/ +/* TUN */ +#define TXGBE_PTYPE_TUN_IPV4 (0x80) +#define TXGBE_PTYPE_TUN_IPV6 (0xC0) + +/* PKT for TUN */ +#define TXGBE_PTYPE_PKT_IPIP (0x00) /* IP+IP */ +#define TXGBE_PTYPE_PKT_IG (0x10) /* IP+GRE */ +#define TXGBE_PTYPE_PKT_IGM (0x20) /* IP+GRE+MAC */ +#define TXGBE_PTYPE_PKT_IGMV (0x30) /* IP+GRE+MAC+VLAN */ +/* PKT for !TUN */ +#define TXGBE_PTYPE_PKT_MAC (0x10) +#define TXGBE_PTYPE_PKT_IP (0x20) +#define TXGBE_PTYPE_PKT_FCOE (0x30) + +/* TYP for PKT=mac */ +#define TXGBE_PTYPE_TYP_MAC (0x01) +#define TXGBE_PTYPE_TYP_TS (0x02) /* time sync */ +#define TXGBE_PTYPE_TYP_FIP (0x03) +#define TXGBE_PTYPE_TYP_LLDP (0x04) +#define TXGBE_PTYPE_TYP_CNM (0x05) +#define TXGBE_PTYPE_TYP_EAPOL (0x06) +#define TXGBE_PTYPE_TYP_ARP (0x07) +/* TYP for PKT=ip */ +#define TXGBE_PTYPE_PKT_IPV6 (0x08) +#define TXGBE_PTYPE_TYP_IPFRAG (0x01) +#define TXGBE_PTYPE_TYP_IP (0x02) +#define TXGBE_PTYPE_TYP_UDP (0x03) +#define TXGBE_PTYPE_TYP_TCP (0x04) +#define TXGBE_PTYPE_TYP_SCTP (0x05) +/* TYP for PKT=fcoe */ +#define TXGBE_PTYPE_PKT_VFT (0x08) +#define TXGBE_PTYPE_TYP_FCOE (0x00) +#define TXGBE_PTYPE_TYP_FCDATA (0x01) +#define TXGBE_PTYPE_TYP_FCRDY (0x02) +#define TXGBE_PTYPE_TYP_FCRSP (0x03) +#define TXGBE_PTYPE_TYP_FCOTHER (0x04) + +/* Packet type non-ip values */ +enum txgbe_l2_ptypes { + TXGBE_PTYPE_L2_ABORTED = (TXGBE_PTYPE_PKT_MAC), + TXGBE_PTYPE_L2_MAC = (TXGBE_PTYPE_PKT_MAC | TXGBE_PTYPE_TYP_MAC), + TXGBE_PTYPE_L2_TS = (TXGBE_PTYPE_PKT_MAC | TXGBE_PTYPE_TYP_TS), + TXGBE_PTYPE_L2_FIP = (TXGBE_PTYPE_PKT_MAC | TXGBE_PTYPE_TYP_FIP), + TXGBE_PTYPE_L2_LLDP = (TXGBE_PTYPE_PKT_MAC | TXGBE_PTYPE_TYP_LLDP), + TXGBE_PTYPE_L2_CNM = (TXGBE_PTYPE_PKT_MAC | TXGBE_PTYPE_TYP_CNM), + TXGBE_PTYPE_L2_EAPOL = (TXGBE_PTYPE_PKT_MAC | TXGBE_PTYPE_TYP_EAPOL), + TXGBE_PTYPE_L2_ARP = (TXGBE_PTYPE_PKT_MAC | TXGBE_PTYPE_TYP_ARP), + + TXGBE_PTYPE_L2_IPV4_FRAG = (TXGBE_PTYPE_PKT_IP | + TXGBE_PTYPE_TYP_IPFRAG), + 
TXGBE_PTYPE_L2_IPV4 = (TXGBE_PTYPE_PKT_IP | TXGBE_PTYPE_TYP_IP), + TXGBE_PTYPE_L2_IPV4_UDP = (TXGBE_PTYPE_PKT_IP | TXGBE_PTYPE_TYP_UDP), + TXGBE_PTYPE_L2_IPV4_TCP = (TXGBE_PTYPE_PKT_IP | TXGBE_PTYPE_TYP_TCP), + TXGBE_PTYPE_L2_IPV4_SCTP = (TXGBE_PTYPE_PKT_IP | TXGBE_PTYPE_TYP_SCTP), + TXGBE_PTYPE_L2_IPV6_FRAG = (TXGBE_PTYPE_PKT_IP | TXGBE_PTYPE_PKT_IPV6 | + TXGBE_PTYPE_TYP_IPFRAG), + TXGBE_PTYPE_L2_IPV6 = (TXGBE_PTYPE_PKT_IP | TXGBE_PTYPE_PKT_IPV6 | + TXGBE_PTYPE_TYP_IP), + TXGBE_PTYPE_L2_IPV6_UDP = (TXGBE_PTYPE_PKT_IP | TXGBE_PTYPE_PKT_IPV6 | + TXGBE_PTYPE_TYP_UDP), + TXGBE_PTYPE_L2_IPV6_TCP = (TXGBE_PTYPE_PKT_IP | TXGBE_PTYPE_PKT_IPV6 | + TXGBE_PTYPE_TYP_TCP), + TXGBE_PTYPE_L2_IPV6_SCTP = (TXGBE_PTYPE_PKT_IP | TXGBE_PTYPE_PKT_IPV6 | + TXGBE_PTYPE_TYP_SCTP), + + TXGBE_PTYPE_L2_FCOE = (TXGBE_PTYPE_PKT_FCOE | TXGBE_PTYPE_TYP_FCOE), + TXGBE_PTYPE_L2_FCOE_FCDATA = (TXGBE_PTYPE_PKT_FCOE | + TXGBE_PTYPE_TYP_FCDATA), + TXGBE_PTYPE_L2_FCOE_FCRDY = (TXGBE_PTYPE_PKT_FCOE | + TXGBE_PTYPE_TYP_FCRDY), + TXGBE_PTYPE_L2_FCOE_FCRSP = (TXGBE_PTYPE_PKT_FCOE | + TXGBE_PTYPE_TYP_FCRSP), + TXGBE_PTYPE_L2_FCOE_FCOTHER = (TXGBE_PTYPE_PKT_FCOE | + TXGBE_PTYPE_TYP_FCOTHER), + TXGBE_PTYPE_L2_FCOE_VFT = (TXGBE_PTYPE_PKT_FCOE | TXGBE_PTYPE_PKT_VFT), + TXGBE_PTYPE_L2_FCOE_VFT_FCDATA = (TXGBE_PTYPE_PKT_FCOE | + TXGBE_PTYPE_PKT_VFT | TXGBE_PTYPE_TYP_FCDATA), + TXGBE_PTYPE_L2_FCOE_VFT_FCRDY = (TXGBE_PTYPE_PKT_FCOE | + TXGBE_PTYPE_PKT_VFT | TXGBE_PTYPE_TYP_FCRDY), + TXGBE_PTYPE_L2_FCOE_VFT_FCRSP = (TXGBE_PTYPE_PKT_FCOE | + TXGBE_PTYPE_PKT_VFT | TXGBE_PTYPE_TYP_FCRSP), + TXGBE_PTYPE_L2_FCOE_VFT_FCOTHER = (TXGBE_PTYPE_PKT_FCOE | + TXGBE_PTYPE_PKT_VFT | TXGBE_PTYPE_TYP_FCOTHER), + + TXGBE_PTYPE_L2_TUN4_MAC = (TXGBE_PTYPE_TUN_IPV4 | TXGBE_PTYPE_PKT_IGM), + TXGBE_PTYPE_L2_TUN6_MAC = (TXGBE_PTYPE_TUN_IPV6 | TXGBE_PTYPE_PKT_IGM), +}; + +#define TXGBE_RXD_PKTTYPE(_rxd) \ + ((le32_to_cpu((_rxd)->wb.lower.lo_dword.data) >> 9) & 0xFF) +#define TXGBE_PTYPE_TUN(_pt) ((_pt) & 0xC0) +#define TXGBE_PTYPE_PKT(_pt) 
((_pt) & 0x30) +#define TXGBE_PTYPE_TYP(_pt) ((_pt) & 0x0F) +#define TXGBE_PTYPE_TYPL4(_pt) ((_pt) & 0x07) + +#define TXGBE_RXD_IPV6EX(_rxd) \ + ((le32_to_cpu((_rxd)->wb.lower.lo_dword.data) >> 6) & 0x1) + +/* Security Processing bit Indication */ +#define TXGBE_RXD_LNKSEC_STATUS_SECP 0x00020000U +#define TXGBE_RXD_LNKSEC_ERROR_NO_SA_MATCH 0x08000000U +#define TXGBE_RXD_LNKSEC_ERROR_REPLAY_ERROR 0x10000000U +#define TXGBE_RXD_LNKSEC_ERROR_BIT_MASK 0x18000000U +#define TXGBE_RXD_LNKSEC_ERROR_BAD_SIG 0x18000000U + +/* Masks to determine if packets should be dropped due to frame errors */ +#define TXGBE_RXD_ERR_FRAME_ERR_MASK TXGBE_RXD_ERR_RXE + +/*********************** Adv Transmit Descriptor Config Masks ****************/ +#define TXGBE_TXD_DTALEN_MASK 0x0000FFFFU /* Data buf length(bytes) */ +#define TXGBE_TXD_MAC_LINKSEC 0x00040000U /* Insert LinkSec */ +#define TXGBE_TXD_MAC_TSTAMP 0x00080000U /* IEEE1588 time stamp */ +#define TXGBE_TXD_IPSEC_SA_INDEX_MASK 0x000003FFU /* IPSec SA index */ +#define TXGBE_TXD_IPSEC_ESP_LEN_MASK 0x000001FFU /* IPSec ESP length */ +#define TXGBE_TXD_DTYP_MASK 0x00F00000U /* DTYP mask */ +#define TXGBE_TXD_DTYP_CTXT 0x00100000U /* Adv Context Desc */ +#define TXGBE_TXD_DTYP_DATA 0x00000000U /* Adv Data Descriptor */ +#define TXGBE_TXD_EOP 0x01000000U /* End of Packet */ +#define TXGBE_TXD_IFCS 0x02000000U /* Insert FCS */ +#define TXGBE_TXD_LINKSEC 0x04000000U /* enable linksec */ +#define TXGBE_TXD_RS 0x08000000U /* Report Status */ +#define TXGBE_TXD_ECU 0x10000000U /* DDP hdr type or iSCSI */ +#define TXGBE_TXD_QCN 0x20000000U /* cntag insertion enable */ +#define TXGBE_TXD_VLE 0x40000000U /* VLAN pkt enable */ +#define TXGBE_TXD_TSE 0x80000000U /* TCP Seg enable */ +#define TXGBE_TXD_STAT_DD 0x00000001U /* Descriptor Done */ +#define TXGBE_TXD_IDX_SHIFT 4 /* Adv desc Index shift */ +#define TXGBE_TXD_CC 0x00000080U /* Check Context */ +#define TXGBE_TXD_IPSEC 0x00000100U /* enable ipsec esp */ +#define TXGBE_TXD_IIPCS 
0x00000400U +#define TXGBE_TXD_EIPCS 0x00000800U +#define TXGBE_TXD_L4CS 0x00000200U +#define TXGBE_TXD_PAYLEN_SHIFT 13 /* Adv desc PAYLEN shift */ +#define TXGBE_TXD_MACLEN_SHIFT 9 /* Adv ctxt desc mac len shift */ +#define TXGBE_TXD_VLAN_SHIFT 16 /* Adv ctxt vlan tag shift */ +#define TXGBE_TXD_TAG_TPID_SEL_SHIFT 11 +#define TXGBE_TXD_IPSEC_TYPE_SHIFT 14 +#define TXGBE_TXD_ENC_SHIFT 15 + +#define TXGBE_TXD_TUCMD_IPSEC_TYPE_ESP 0x00004000U /* IPSec Type ESP */ +#define TXGBE_TXD_TUCMD_IPSEC_ENCRYPT_EN 0x00008000U /* ESP Encrypt Enable */ +#define TXGBE_TXD_TUCMD_FCOE 0x00010000U /* FCoE Frame Type */ +#define TXGBE_TXD_FCOEF_EOF_MASK (0x3 << 10) /* FC EOF index */ +#define TXGBE_TXD_FCOEF_SOF ((1 << 2) << 10) /* FC SOF index */ +#define TXGBE_TXD_FCOEF_PARINC ((1 << 3) << 10) /* Rel_Off in F_CTL */ +#define TXGBE_TXD_FCOEF_ORIE ((1 << 4) << 10) /* Orientation End */ +#define TXGBE_TXD_FCOEF_ORIS ((1 << 5) << 10) /* Orientation Start */ +#define TXGBE_TXD_FCOEF_EOF_N (0x0 << 10) /* 00: EOFn */ +#define TXGBE_TXD_FCOEF_EOF_T (0x1 << 10) /* 01: EOFt */ +#define TXGBE_TXD_FCOEF_EOF_NI (0x2 << 10) /* 10: EOFni */ +#define TXGBE_TXD_FCOEF_EOF_A (0x3 << 10) /* 11: EOFa */ +#define TXGBE_TXD_L4LEN_SHIFT 8 /* Adv ctxt L4LEN shift */ +#define TXGBE_TXD_MSS_SHIFT 16 /* Adv ctxt MSS shift */ + +#define TXGBE_TXD_OUTER_IPLEN_SHIFT 12 /* Adv ctxt OUTERIPLEN shift */ +#define TXGBE_TXD_TUNNEL_LEN_SHIFT 21 /* Adv ctxt TUNNELLEN shift */ +#define TXGBE_TXD_TUNNEL_TYPE_SHIFT 11 /* Adv Tx Desc Tunnel Type shift */ +#define TXGBE_TXD_TUNNEL_DECTTL_SHIFT 27 /* Adv ctxt DECTTL shift */ +#define TXGBE_TXD_TUNNEL_UDP (0x0ULL << TXGBE_TXD_TUNNEL_TYPE_SHIFT) +#define TXGBE_TXD_TUNNEL_GRE (0x1ULL << TXGBE_TXD_TUNNEL_TYPE_SHIFT) + +/************ txgbe_type.h ************/ +/* Number of Transmit and Receive Descriptors must be a multiple of 8 */ +#define TXGBE_REQ_TX_DESCRIPTOR_MULTIPLE 8 +#define TXGBE_REQ_RX_DESCRIPTOR_MULTIPLE 8 +#define TXGBE_REQ_TX_BUFFER_GRANULARITY 1024 + +/* 
Vlan-specific macros */ +#define TXGBE_RX_DESC_SPECIAL_VLAN_MASK 0x0FFF /* VLAN ID in lower 12 bits */ +#define TXGBE_RX_DESC_SPECIAL_PRI_MASK 0xE000 /* Priority in upper 3 bits */ +#define TXGBE_RX_DESC_SPECIAL_PRI_SHIFT 0x000D /* Priority in upper 3 of 16 */ +#define TXGBE_TX_DESC_SPECIAL_PRI_SHIFT TXGBE_RX_DESC_SPECIAL_PRI_SHIFT + +/* Transmit Descriptor */ +union txgbe_tx_desc { + struct { + __le64 buffer_addr; /* Address of descriptor's data buf */ + __le32 cmd_type_len; + __le32 olinfo_status; + } read; + struct { + __le64 rsvd; /* Reserved */ + __le32 nxtseq_seed; + __le32 status; + } wb; +}; + +/* Receive Descriptor */ +union txgbe_rx_desc { + struct { + __le64 pkt_addr; /* Packet buffer address */ + __le64 hdr_addr; /* Header buffer address */ + } read; + struct { + struct { + union { + __le32 data; + struct { + __le16 pkt_info; /* RSS, Pkt type */ + __le16 hdr_info; /* Splithdr, hdrlen */ + } hs_rss; + } lo_dword; + union { + __le32 rss; /* RSS Hash */ + struct { + __le16 ip_id; /* IP id */ + __le16 csum; /* Packet Checksum */ + } csum_ip; + } hi_dword; + } lower; + struct { + __le32 status_error; /* ext status/error */ + __le16 length; /* Packet length */ + __le16 vlan; /* VLAN tag */ + } upper; + } wb; /* writeback */ +}; + +/* Context descriptors */ +struct txgbe_tx_context_desc { + __le32 vlan_macip_lens; + __le32 seqnum_seed; + __le32 type_tucmd_mlhl; + __le32 mss_l4len_idx; +}; + +/************************* Flow Directory HASH *******************************/ +/* Software ATR hash keys */ +#define TXGBE_ATR_BUCKET_HASH_KEY 0x3DAD14E2 +#define TXGBE_ATR_SIGNATURE_HASH_KEY 0x174D3614 + +/* Software ATR input stream values and masks */ +#define TXGBE_ATR_HASH_MASK 0x7fff +#define TXGBE_ATR_L4TYPE_MASK 0x3 +#define TXGBE_ATR_L4TYPE_UDP 0x1 +#define TXGBE_ATR_L4TYPE_TCP 0x2 +#define TXGBE_ATR_L4TYPE_SCTP 0x3 +#define TXGBE_ATR_L4TYPE_IPV6_MASK 0x4 +#define TXGBE_ATR_L4TYPE_TUNNEL_MASK 0x10 +enum txgbe_atr_flow_type { + TXGBE_ATR_FLOW_TYPE_IPV4 = 0x0, + 
TXGBE_ATR_FLOW_TYPE_UDPV4 = 0x1, + TXGBE_ATR_FLOW_TYPE_TCPV4 = 0x2, + TXGBE_ATR_FLOW_TYPE_SCTPV4 = 0x3, + TXGBE_ATR_FLOW_TYPE_IPV6 = 0x4, + TXGBE_ATR_FLOW_TYPE_UDPV6 = 0x5, + TXGBE_ATR_FLOW_TYPE_TCPV6 = 0x6, + TXGBE_ATR_FLOW_TYPE_SCTPV6 = 0x7, + TXGBE_ATR_FLOW_TYPE_TUNNELED_IPV4 = 0x10, + TXGBE_ATR_FLOW_TYPE_TUNNELED_UDPV4 = 0x11, + TXGBE_ATR_FLOW_TYPE_TUNNELED_TCPV4 = 0x12, + TXGBE_ATR_FLOW_TYPE_TUNNELED_SCTPV4 = 0x13, + TXGBE_ATR_FLOW_TYPE_TUNNELED_IPV6 = 0x14, + TXGBE_ATR_FLOW_TYPE_TUNNELED_UDPV6 = 0x15, + TXGBE_ATR_FLOW_TYPE_TUNNELED_TCPV6 = 0x16, + TXGBE_ATR_FLOW_TYPE_TUNNELED_SCTPV6 = 0x17, +}; + +/* Flow Director ATR input struct. */ +union txgbe_atr_input { + /* + * Byte layout in order, all values with MSB first: + * + * vm_pool - 1 byte + * flow_type - 1 byte + * vlan_id - 2 bytes + * dst_ip - 16 bytes + * src_ip - 16 bytes + * src_port - 2 bytes + * dst_port - 2 bytes + * flex_bytes - 2 bytes + * bkt_hash - 2 bytes + */ + struct { + u8 vm_pool; + u8 flow_type; + __be16 vlan_id; + __be32 dst_ip[4]; + __be32 src_ip[4]; + __be16 src_port; + __be16 dst_port; + __be16 flex_bytes; + __be16 bkt_hash; + } formatted; + __be32 dword_stream[11]; +}; + +/* Flow Director compressed ATR hash input struct */ +union txgbe_atr_hash_dword { + struct { + u8 vm_pool; + u8 flow_type; + __be16 vlan_id; + } formatted; + __be32 ip; + struct { + __be16 src; + __be16 dst; + } port; + __be16 flex_bytes; + __be32 dword; +}; + + +/****************** Manageability Host Interface defines ********************/ +#define TXGBE_HI_MAX_BLOCK_BYTE_LENGTH 256 /* Num of bytes in range */ +#define TXGBE_HI_MAX_BLOCK_DWORD_LENGTH 64 /* Num of dwords in range */ +#define TXGBE_HI_COMMAND_TIMEOUT 5000 /* Process HI command limit */ +#define TXGBE_HI_FLASH_ERASE_TIMEOUT 5000 /* Process Erase command limit */ +#define TXGBE_HI_FLASH_UPDATE_TIMEOUT 5000 /* Process Update command limit */ +#define TXGBE_HI_FLASH_VERIFY_TIMEOUT 
60000 /* Process Apply command limit */ +#define TXGBE_HI_PHY_MGMT_REQ_TIMEOUT 2000 /* Wait up to 2 seconds */ + +/* CEM Support */ +#define FW_CEM_HDR_LEN 0x4 +#define FW_CEM_CMD_DRIVER_INFO 0xDD +#define FW_CEM_CMD_DRIVER_INFO_LEN 0x5 +#define FW_CEM_CMD_RESERVED 0x0 +#define FW_CEM_UNUSED_VER 0x0 +#define FW_CEM_MAX_RETRIES 3 +#define FW_CEM_RESP_STATUS_SUCCESS 0x1 +#define FW_READ_SHADOW_RAM_CMD 0x31 +#define FW_READ_SHADOW_RAM_LEN 0x6 +#define FW_WRITE_SHADOW_RAM_CMD 0x33 +#define FW_WRITE_SHADOW_RAM_LEN 0xA /* 8 plus 1 WORD to write */ +#define FW_SHADOW_RAM_DUMP_CMD 0x36 +#define FW_SHADOW_RAM_DUMP_LEN 0 +#define FW_DEFAULT_CHECKSUM 0xFF /* checksum always 0xFF */ +#define FW_NVM_DATA_OFFSET 3 +#define FW_MAX_READ_BUFFER_SIZE 244 +#define FW_DISABLE_RXEN_CMD 0xDE +#define FW_DISABLE_RXEN_LEN 0x1 +#define FW_PHY_MGMT_REQ_CMD 0x20 +#define FW_RESET_CMD 0xDF +#define FW_RESET_LEN 0x2 +#define FW_SETUP_MAC_LINK_CMD 0xE0 +#define FW_SETUP_MAC_LINK_LEN 0x2 +#define FW_FLASH_UPGRADE_START_CMD 0xE3 +#define FW_FLASH_UPGRADE_START_LEN 0x1 +#define FW_FLASH_UPGRADE_WRITE_CMD 0xE4 +#define FW_FLASH_UPGRADE_VERIFY_CMD 0xE5 +#define FW_FLASH_UPGRADE_VERIFY_LEN 0x4 + +/* Host Interface Command Structures */ +struct txgbe_hic_hdr { + u8 cmd; + u8 buf_len; + union { + u8 cmd_resv; + u8 ret_status; + } cmd_or_resp; + u8 checksum; +}; + +struct txgbe_hic_hdr2_req { + u8 cmd; + u8 buf_lenh; + u8 buf_lenl; + u8 checksum; +}; + +struct txgbe_hic_hdr2_rsp { + u8 cmd; + u8 buf_lenl; + u8 buf_lenh_status; /* 7-5: high bits of buf_len, 4-0: status */ + u8 checksum; +}; + +union txgbe_hic_hdr2 { + struct txgbe_hic_hdr2_req req; + struct txgbe_hic_hdr2_rsp rsp; +}; + +struct txgbe_hic_drv_info { + struct txgbe_hic_hdr hdr; + u8 port_num; + u8 ver_sub; + u8 ver_build; + u8 ver_min; + u8 ver_maj; + u8 pad; /* end spacing to ensure length is mult. of dword */ + u16 pad2; /* end spacing to ensure length is mult. 
of dword2 */ +}; + +/* These need to be dword aligned */ +struct txgbe_hic_read_shadow_ram { + union txgbe_hic_hdr2 hdr; + u32 address; + u16 length; + u16 pad2; + u16 data; + u16 pad3; +}; + +struct txgbe_hic_write_shadow_ram { + union txgbe_hic_hdr2 hdr; + u32 address; + u16 length; + u16 pad2; + u16 data; + u16 pad3; +}; + +struct txgbe_hic_disable_rxen { + struct txgbe_hic_hdr hdr; + u8 port_number; + u8 pad2; + u16 pad3; +}; + +struct txgbe_hic_reset { + struct txgbe_hic_hdr hdr; + u16 lan_id; + u16 reset_type; +}; + +struct txgbe_hic_phy_cfg { + struct txgbe_hic_hdr hdr; + u8 lan_id; + u8 phy_mode; + u16 phy_speed; +}; + +enum txgbe_module_id { + TXGBE_MODULE_EEPROM = 0, + TXGBE_MODULE_FIRMWARE, + TXGBE_MODULE_HARDWARE, + TXGBE_MODULE_PCIE +}; + +struct txgbe_hic_upg_start { + struct txgbe_hic_hdr hdr; + u8 module_id; + u8 pad2; + u16 pad3; +}; + +struct txgbe_hic_upg_write { + struct txgbe_hic_hdr hdr; + u8 data_len; + u8 eof_flag; + u16 check_sum; + u32 data[62]; +}; + +enum txgbe_upg_flag { + TXGBE_RESET_NONE = 0, + TXGBE_RESET_FIRMWARE, + TXGBE_RELOAD_EEPROM, + TXGBE_RESET_LAN +}; + +struct txgbe_hic_upg_verify { + struct txgbe_hic_hdr hdr; + u32 action_flag; +}; + +/* Number of 100-microsecond intervals we wait for PCI Express master disable */ +#define TXGBE_PCI_MASTER_DISABLE_TIMEOUT 800 + +/* Check whether an address is multicast. This check is little-endian specific. */ +#define TXGBE_IS_MULTICAST(Address) \ + (bool)(((u8 *)(Address))[0] & ((u8)0x01)) + +/* Check whether an address is broadcast. 
*/ +#define TXGBE_IS_BROADCAST(Address) \ + ((((u8 *)(Address))[0] == ((u8)0xff)) && \ + (((u8 *)(Address))[1] == ((u8)0xff))) + +/* DCB registers */ +#define TXGBE_DCB_MAX_TRAFFIC_CLASS 8 + +/* Power Management */ +/* DMA Coalescing configuration */ +struct txgbe_dmac_config { + u16 watchdog_timer; /* usec units */ + bool fcoe_en; + u32 link_speed; + u8 fcoe_tc; + u8 num_tcs; +}; + + +/* Autonegotiation advertised speeds */ +typedef u32 txgbe_autoneg_advertised; +/* Link speed */ +#define TXGBE_LINK_SPEED_UNKNOWN 0 +#define TXGBE_LINK_SPEED_100_FULL 1 +#define TXGBE_LINK_SPEED_1GB_FULL 2 +#define TXGBE_LINK_SPEED_10GB_FULL 4 +#define TXGBE_LINK_SPEED_10_FULL 8 +#define TXGBE_LINK_SPEED_AUTONEG (TXGBE_LINK_SPEED_100_FULL | \ + TXGBE_LINK_SPEED_1GB_FULL | \ + TXGBE_LINK_SPEED_10GB_FULL | \ + TXGBE_LINK_SPEED_10_FULL) + +/* Physical layer type */ +typedef u32 txgbe_physical_layer; +#define TXGBE_PHYSICAL_LAYER_UNKNOWN 0 +#define TXGBE_PHYSICAL_LAYER_10GBASE_T 0x0001 +#define TXGBE_PHYSICAL_LAYER_1000BASE_T 0x0002 +#define TXGBE_PHYSICAL_LAYER_100BASE_TX 0x0004 +#define TXGBE_PHYSICAL_LAYER_SFP_PLUS_CU 0x0008 +#define TXGBE_PHYSICAL_LAYER_10GBASE_LR 0x0010 +#define TXGBE_PHYSICAL_LAYER_10GBASE_LRM 0x0020 +#define TXGBE_PHYSICAL_LAYER_10GBASE_SR 0x0040 +#define TXGBE_PHYSICAL_LAYER_10GBASE_KX4 0x0080 +#define TXGBE_PHYSICAL_LAYER_1000BASE_KX 0x0200 +#define TXGBE_PHYSICAL_LAYER_1000BASE_BX 0x0400 +#define TXGBE_PHYSICAL_LAYER_10GBASE_KR 0x0800 +#define TXGBE_PHYSICAL_LAYER_10GBASE_XAUI 0x1000 +#define TXGBE_PHYSICAL_LAYER_SFP_ACTIVE_DA 0x2000 +#define TXGBE_PHYSICAL_LAYER_1000BASE_SX 0x4000 + + +/* Special PHY Init Routine */ +#define TXGBE_PHY_INIT_OFFSET_NL 0x002B +#define TXGBE_PHY_INIT_END_NL 0xFFFF +#define TXGBE_CONTROL_MASK_NL 0xF000 +#define TXGBE_DATA_MASK_NL 0x0FFF +#define TXGBE_CONTROL_SHIFT_NL 12 +#define TXGBE_DELAY_NL 0 +#define TXGBE_DATA_NL 1 +#define TXGBE_CONTROL_NL 0x000F +#define TXGBE_CONTROL_EOL_NL 0x0FFF +#define TXGBE_CONTROL_SOL_NL 0x0000 + 
+/* Flow Control Data Sheet defined values + * Calculation and defines taken from 802.1bb Annex O + */ + +/* BitTimes (BT) conversion */ +#define TXGBE_BT2KB(BT) ((BT + (8 * 1024 - 1)) / (8 * 1024)) +#define TXGBE_B2BT(BT) (BT * 8) + +/* Calculate Delay to respond to PFC */ +#define TXGBE_PFC_D 672 + +/* Calculate Cable Delay */ +#define TXGBE_CABLE_DC 5556 /* Delay Copper */ +#define TXGBE_CABLE_DO 5000 /* Delay Optical */ + +/* Calculate Interface Delay X540 */ +#define TXGBE_PHY_DC 25600 /* Delay 10G BASET */ +#define TXGBE_MAC_DC 8192 /* Delay Copper XAUI interface */ +#define TXGBE_XAUI_DC (2 * 2048) /* Delay Copper Phy */ + +#define TXGBE_ID_X540 (TXGBE_MAC_DC + TXGBE_XAUI_DC + TXGBE_PHY_DC) + +/* Calculate Interface Delay */ +#define TXGBE_PHY_D 12800 +#define TXGBE_MAC_D 4096 +#define TXGBE_XAUI_D (2 * 1024) + +#define TXGBE_ID (TXGBE_MAC_D + TXGBE_XAUI_D + TXGBE_PHY_D) + +/* Calculate Delay incurred from higher layer */ +#define TXGBE_HD 6144 + +/* Calculate PCI Bus delay for low thresholds */ +#define TXGBE_PCI_DELAY 10000 + +/* Calculate X540 delay value in bit times */ +#define TXGBE_DV_X540(_max_frame_link, _max_frame_tc) \ + ((36 * \ + (TXGBE_B2BT(_max_frame_link) + \ + TXGBE_PFC_D + \ + (2 * TXGBE_CABLE_DC) + \ + (2 * TXGBE_ID_X540) + \ + TXGBE_HD) / 25 + 1) + \ + 2 * TXGBE_B2BT(_max_frame_tc)) + + +/* Calculate delay value in bit times */ +#define TXGBE_DV(_max_frame_link, _max_frame_tc) \ + ((36 * \ + (TXGBE_B2BT(_max_frame_link) + \ + TXGBE_PFC_D + \ + (2 * TXGBE_CABLE_DC) + \ + (2 * TXGBE_ID) + \ + TXGBE_HD) / 25 + 1) + \ + 2 * TXGBE_B2BT(_max_frame_tc)) + +/* Calculate low threshold delay values */ +#define TXGBE_LOW_DV_X540(_max_frame_tc) \ + (2 * TXGBE_B2BT(_max_frame_tc) + \ + (36 * TXGBE_PCI_DELAY / 25) + 1) + +#define TXGBE_LOW_DV(_max_frame_tc) \ + (2 * TXGBE_LOW_DV_X540(_max_frame_tc)) + + +/* + * Unavailable: The FCoE Boot Option ROM is not present in the flash. + * Disabled: Present; boot order is not set for any targets on the port. 
+ * Enabled: Present; boot order is set for at least one target on the port. + */ +enum txgbe_fcoe_boot_status { + txgbe_fcoe_bootstatus_disabled = 0, + txgbe_fcoe_bootstatus_enabled = 1, + txgbe_fcoe_bootstatus_unavailable = 0xFFFF +}; + +enum txgbe_eeprom_type { + txgbe_eeprom_uninitialized = 0, + txgbe_eeprom_spi, + txgbe_flash, + txgbe_eeprom_none /* No NVM support */ +}; + +enum txgbe_phy_type { + txgbe_phy_unknown = 0, + txgbe_phy_none, + txgbe_phy_tn, + txgbe_phy_aq, + txgbe_phy_cu_unknown, + txgbe_phy_qt, + txgbe_phy_xaui, + txgbe_phy_nl, + txgbe_phy_sfp_passive_tyco, + txgbe_phy_sfp_passive_unknown, + txgbe_phy_sfp_active_unknown, + txgbe_phy_sfp_avago, + txgbe_phy_sfp_ftl, + txgbe_phy_sfp_ftl_active, + txgbe_phy_sfp_unknown, + txgbe_phy_sfp_intel, + txgbe_phy_sfp_unsupported, /*Enforce bit set with unsupported module*/ + txgbe_phy_generic +}; + +/* + * SFP+ module type IDs: + * + * ID Module Type + * ============= + * 0 SFP_DA_CU + * 1 SFP_SR + * 2 SFP_LR + * 3 SFP_DA_CU_CORE0 + * 4 SFP_DA_CU_CORE1 + * 5 SFP_SR/LR_CORE0 + * 6 SFP_SR/LR_CORE1 + */ +enum txgbe_sfp_type { + txgbe_sfp_type_da_cu = 0, + txgbe_sfp_type_sr = 1, + txgbe_sfp_type_lr = 2, + txgbe_sfp_type_da_cu_core0 = 3, + txgbe_sfp_type_da_cu_core1 = 4, + txgbe_sfp_type_srlr_core0 = 5, + txgbe_sfp_type_srlr_core1 = 6, + txgbe_sfp_type_da_act_lmt_core0 = 7, + txgbe_sfp_type_da_act_lmt_core1 = 8, + txgbe_sfp_type_1g_cu_core0 = 9, + txgbe_sfp_type_1g_cu_core1 = 10, + txgbe_sfp_type_1g_sx_core0 = 11, + txgbe_sfp_type_1g_sx_core1 = 12, + txgbe_sfp_type_1g_lx_core0 = 13, + txgbe_sfp_type_1g_lx_core1 = 14, + txgbe_sfp_type_not_present = 0xFFFE, + txgbe_sfp_type_unknown = 0xFFFF +}; + +enum txgbe_media_type { + txgbe_media_type_unknown = 0, + txgbe_media_type_fiber, + txgbe_media_type_copper, + txgbe_media_type_backplane, + txgbe_media_type_virtual +}; + +/* Flow Control Settings */ +enum txgbe_fc_mode { + txgbe_fc_none = 0, + txgbe_fc_rx_pause, + txgbe_fc_tx_pause, + txgbe_fc_full, + txgbe_fc_default 
+}; + +/* Smart Speed Settings */ +#define TXGBE_SMARTSPEED_MAX_RETRIES 3 +enum txgbe_smart_speed { + txgbe_smart_speed_auto = 0, + txgbe_smart_speed_on, + txgbe_smart_speed_off +}; + +/* PCI bus types */ +enum txgbe_bus_type { + txgbe_bus_type_unknown = 0, + txgbe_bus_type_pci, + txgbe_bus_type_pcix, + txgbe_bus_type_pci_express, + txgbe_bus_type_internal, + txgbe_bus_type_reserved +}; + +/* PCI bus speeds */ +enum txgbe_bus_speed { + txgbe_bus_speed_unknown = 0, + txgbe_bus_speed_33 = 33, + txgbe_bus_speed_66 = 66, + txgbe_bus_speed_100 = 100, + txgbe_bus_speed_120 = 120, + txgbe_bus_speed_133 = 133, + txgbe_bus_speed_2500 = 2500, + txgbe_bus_speed_5000 = 5000, + txgbe_bus_speed_8000 = 8000, + txgbe_bus_speed_reserved +}; + +/* PCI bus widths */ +enum txgbe_bus_width { + txgbe_bus_width_unknown = 0, + txgbe_bus_width_pcie_x1 = 1, + txgbe_bus_width_pcie_x2 = 2, + txgbe_bus_width_pcie_x4 = 4, + txgbe_bus_width_pcie_x8 = 8, + txgbe_bus_width_32 = 32, + txgbe_bus_width_64 = 64, + txgbe_bus_width_reserved +}; + +struct txgbe_addr_filter_info { + u32 num_mc_addrs; + u32 rar_used_count; + u32 mta_in_use; + u32 overflow_promisc; + bool user_set_promisc; +}; + +/* Bus parameters */ +struct txgbe_bus_info { + enum txgbe_bus_speed speed; + enum txgbe_bus_width width; + enum txgbe_bus_type type; + + u16 func; + u16 lan_id; +}; + +/* Flow control parameters */ +struct txgbe_fc_info { + u32 high_water[TXGBE_DCB_MAX_TRAFFIC_CLASS]; /* Flow Ctrl High-water */ + u32 low_water[TXGBE_DCB_MAX_TRAFFIC_CLASS]; /* Flow Ctrl Low-water */ + u16 pause_time; /* Flow Control Pause timer */ + bool send_xon; /* Flow control send XON */ + bool strict_ieee; /* Strict IEEE mode */ + bool disable_fc_autoneg; /* Do not autonegotiate FC */ + bool fc_was_autonegged; /* Is current_mode the result of autonegging? 
*/ + enum txgbe_fc_mode current_mode; /* FC mode in effect */ + enum txgbe_fc_mode requested_mode; /* FC mode requested by caller */ +}; + +/* Statistics counters collected by the MAC */ +struct txgbe_hw_stats { + u64 crcerrs; + u64 illerrc; + u64 errbc; + u64 mspdc; + u64 mpctotal; + u64 mpc[8]; + u64 mlfc; + u64 mrfc; + u64 rlec; + u64 lxontxc; + u64 lxonrxc; + u64 lxofftxc; + u64 lxoffrxc; + u64 pxontxc[8]; + u64 pxonrxc[8]; + u64 pxofftxc[8]; + u64 pxoffrxc[8]; + u64 prc64; + u64 prc127; + u64 prc255; + u64 prc511; + u64 prc1023; + u64 prc1522; + u64 gprc; + u64 bprc; + u64 mprc; + u64 gptc; + u64 gorc; + u64 gotc; + u64 rnbc[8]; + u64 ruc; + u64 rfc; + u64 roc; + u64 rjc; + u64 mngprc; + u64 mngpdc; + u64 mngptc; + u64 tor; + u64 tpr; + u64 tpt; + u64 ptc64; + u64 ptc127; + u64 ptc255; + u64 ptc511; + u64 ptc1023; + u64 ptc1522; + u64 mptc; + u64 bptc; + u64 xec; + u64 qprc[16]; + u64 qptc[16]; + u64 qbrc[16]; + u64 qbtc[16]; + u64 qprdc[16]; + u64 pxon2offc[8]; + u64 fdirustat_add; + u64 fdirustat_remove; + u64 fdirfstat_fadd; + u64 fdirfstat_fremove; + u64 fdirmatch; + u64 fdirmiss; + u64 fccrc; + u64 fclast; + u64 fcoerpdc; + u64 fcoeprc; + u64 fcoeptc; + u64 fcoedwrc; + u64 fcoedwtc; + u64 fcoe_noddp; + u64 fcoe_noddp_ext_buff; + u64 ldpcec; + u64 pcrc8ec; + u64 b2ospc; + u64 b2ogprc; + u64 o2bgptc; + u64 o2bspc; +}; + +/* forward declaration */ +struct txgbe_hw; + +/* iterator type for walking multicast address lists */ +typedef u8* (*txgbe_mc_addr_itr) (struct txgbe_hw *hw, u8 **mc_addr_ptr, + u32 *vmdq); + +/* Function pointer table */ +struct txgbe_eeprom_operations { + s32 (*init_params)(struct txgbe_hw *); + s32 (*read)(struct txgbe_hw *, u16, u16 *); + s32 (*read_buffer)(struct txgbe_hw *, u16, u16, u16 *); + s32 (*write)(struct txgbe_hw *, u16, u16); + s32 (*write_buffer)(struct txgbe_hw *, u16, u16, u16 *); + s32 (*validate_checksum)(struct txgbe_hw *, u16 *); + s32 (*update_checksum)(struct txgbe_hw *); + s32 (*calc_checksum)(struct txgbe_hw *); 
+}; + +struct txgbe_flash_operations { + s32 (*init_params)(struct txgbe_hw *); + s32 (*read_buffer)(struct txgbe_hw *, u32, u32, u32 *); + s32 (*write_buffer)(struct txgbe_hw *, u32, u32, u32 *); +}; + +struct txgbe_mac_operations { + s32 (*init_hw)(struct txgbe_hw *); + s32 (*reset_hw)(struct txgbe_hw *); + s32 (*start_hw)(struct txgbe_hw *); + s32 (*clear_hw_cntrs)(struct txgbe_hw *); + enum txgbe_media_type (*get_media_type)(struct txgbe_hw *); + s32 (*get_mac_addr)(struct txgbe_hw *, u8 *); + s32 (*get_san_mac_addr)(struct txgbe_hw *, u8 *); + s32 (*set_san_mac_addr)(struct txgbe_hw *, u8 *); + s32 (*get_device_caps)(struct txgbe_hw *, u16 *); + s32 (*get_wwn_prefix)(struct txgbe_hw *, u16 *, u16 *); + s32 (*stop_adapter)(struct txgbe_hw *); + s32 (*get_bus_info)(struct txgbe_hw *); + void (*set_lan_id)(struct txgbe_hw *); + s32 (*enable_rx_dma)(struct txgbe_hw *, u32); + s32 (*disable_sec_rx_path)(struct txgbe_hw *); + s32 (*enable_sec_rx_path)(struct txgbe_hw *); + s32 (*acquire_swfw_sync)(struct txgbe_hw *, u32); + void (*release_swfw_sync)(struct txgbe_hw *, u32); + + /* Link */ + void (*disable_tx_laser)(struct txgbe_hw *); + void (*enable_tx_laser)(struct txgbe_hw *); + void (*flap_tx_laser)(struct txgbe_hw *); + s32 (*setup_link)(struct txgbe_hw *, u32, bool); + s32 (*setup_mac_link)(struct txgbe_hw *, u32, bool); + s32 (*check_link)(struct txgbe_hw *, u32 *, bool *, bool); + s32 (*get_link_capabilities)(struct txgbe_hw *, u32 *, + bool *); + void (*set_rate_select_speed)(struct txgbe_hw *, u32); + + /* Packet Buffer manipulation */ + void (*setup_rxpba)(struct txgbe_hw *, int, u32, int); + + /* LED */ + s32 (*led_on)(struct txgbe_hw *, u32); + s32 (*led_off)(struct txgbe_hw *, u32); + + /* RAR, Multicast, VLAN */ + s32 (*set_rar)(struct txgbe_hw *, u32, u8 *, u64, u32); + s32 (*clear_rar)(struct txgbe_hw *, u32); + s32 (*insert_mac_addr)(struct txgbe_hw *, u8 *, u32); + s32 (*set_vmdq)(struct txgbe_hw *, u32, u32); + s32 (*set_vmdq_san_mac)(struct 
txgbe_hw *, u32); + s32 (*clear_vmdq)(struct txgbe_hw *, u32, u32); + s32 (*init_rx_addrs)(struct txgbe_hw *); + s32 (*update_uc_addr_list)(struct txgbe_hw *, u8 *, u32, + txgbe_mc_addr_itr); + s32 (*update_mc_addr_list)(struct txgbe_hw *, u8 *, u32, + txgbe_mc_addr_itr, bool clear); + s32 (*enable_mc)(struct txgbe_hw *); + s32 (*disable_mc)(struct txgbe_hw *); + s32 (*clear_vfta)(struct txgbe_hw *); + s32 (*set_vfta)(struct txgbe_hw *, u32, u32, bool); + s32 (*set_vlvf)(struct txgbe_hw *, u32, u32, bool, bool *); + s32 (*init_uta_tables)(struct txgbe_hw *); + void (*set_mac_anti_spoofing)(struct txgbe_hw *, bool, int); + void (*set_vlan_anti_spoofing)(struct txgbe_hw *, bool, int); + + /* Flow Control */ + s32 (*fc_enable)(struct txgbe_hw *); + s32 (*setup_fc)(struct txgbe_hw *); + + /* Manageability interface */ + s32 (*set_fw_drv_ver)(struct txgbe_hw *, u8, u8, u8, u8); + s32 (*get_thermal_sensor_data)(struct txgbe_hw *); + s32 (*init_thermal_sensor_thresh)(struct txgbe_hw *hw); + void (*get_rtrup2tc)(struct txgbe_hw *hw, u8 *map); + void (*disable_rx)(struct txgbe_hw *hw); + void (*enable_rx)(struct txgbe_hw *hw); + void (*set_source_address_pruning)(struct txgbe_hw *, bool, + unsigned int); + void (*set_ethertype_anti_spoofing)(struct txgbe_hw *, bool, int); + s32 (*dmac_config)(struct txgbe_hw *hw); + s32 (*setup_eee)(struct txgbe_hw *hw, bool enable_eee); +}; + +struct txgbe_phy_operations { + s32 (*identify)(struct txgbe_hw *); + s32 (*identify_sfp)(struct txgbe_hw *); + s32 (*init)(struct txgbe_hw *); + s32 (*reset)(struct txgbe_hw *); + s32 (*read_reg)(struct txgbe_hw *, u32, u32, u16 *); + s32 (*write_reg)(struct txgbe_hw *, u32, u32, u16); + s32 (*read_reg_mdi)(struct txgbe_hw *, u32, u32, u16 *); + s32 (*write_reg_mdi)(struct txgbe_hw *, u32, u32, u16); + u32 (*setup_link)(struct txgbe_hw *, u32, bool); + s32 (*setup_internal_link)(struct txgbe_hw *); + u32 (*setup_link_speed)(struct txgbe_hw *, u32, bool); + s32 (*check_link)(struct txgbe_hw *, u32 *, 
bool *); + s32 (*get_firmware_version)(struct txgbe_hw *, u16 *); + s32 (*read_i2c_byte)(struct txgbe_hw *, u8, u8, u8 *); + s32 (*write_i2c_byte)(struct txgbe_hw *, u8, u8, u8); + s32 (*read_i2c_sff8472)(struct txgbe_hw *, u8, u8 *); + s32 (*read_i2c_eeprom)(struct txgbe_hw *, u8, u8 *); + s32 (*write_i2c_eeprom)(struct txgbe_hw *, u8, u8); + s32 (*check_overtemp)(struct txgbe_hw *); +}; + +struct txgbe_eeprom_info { + struct txgbe_eeprom_operations ops; + enum txgbe_eeprom_type type; + u32 semaphore_delay; + u16 word_size; + u16 address_bits; + u16 word_page_size; + u16 ctrl_word_3; + u16 sw_region_offset; +}; + +struct txgbe_flash_info { + struct txgbe_flash_operations ops; + u32 semaphore_delay; + u32 dword_size; + u16 address_bits; +}; + + +#define TXGBE_FLAGS_DOUBLE_RESET_REQUIRED 0x01 +struct txgbe_mac_info { + struct txgbe_mac_operations ops; + u8 addr[TXGBE_ETH_LENGTH_OF_ADDRESS]; + u8 perm_addr[TXGBE_ETH_LENGTH_OF_ADDRESS]; + u8 san_addr[TXGBE_ETH_LENGTH_OF_ADDRESS]; + /* prefix for World Wide Node Name (WWNN) */ + u16 wwnn_prefix; + /* prefix for World Wide Port Name (WWPN) */ + u16 wwpn_prefix; +#define TXGBE_MAX_MTA 128 +#define TXGBE_MAX_VFTA_ENTRIES 128 + u32 mta_shadow[TXGBE_MAX_MTA]; + s32 mc_filter_type; + u32 mcft_size; + u32 vft_shadow[TXGBE_MAX_VFTA_ENTRIES]; + u32 vft_size; + u32 num_rar_entries; + u32 rar_highwater; + u32 rx_pb_size; + u32 max_tx_queues; + u32 max_rx_queues; + u32 orig_sr_pcs_ctl2; + u32 orig_sr_pma_mmd_ctl1; + u32 orig_sr_an_mmd_ctl; + u32 orig_sr_an_mmd_adv_reg2; + u32 orig_vr_xs_or_pcs_mmd_digi_ctl1; + u8 san_mac_rar_index; + bool get_link_status; + u16 max_msix_vectors; + bool arc_subsystem_valid; + bool orig_link_settings_stored; + bool autotry_restart; + u8 flags; + struct txgbe_thermal_sensor_data thermal_sensor_data; + bool thermal_sensor_enabled; + struct txgbe_dmac_config dmac_config; + bool set_lben; +}; + +struct txgbe_phy_info { + struct txgbe_phy_operations ops; + enum txgbe_phy_type type; + u32 addr; + u32 id; 
+ enum txgbe_sfp_type sfp_type; + bool sfp_setup_needed; + u32 revision; + enum txgbe_media_type media_type; + u32 phy_semaphore_mask; + u8 lan_id; /* to be delete */ + txgbe_autoneg_advertised autoneg_advertised; + enum txgbe_smart_speed smart_speed; + bool smart_speed_active; + bool multispeed_fiber; + bool reset_if_overtemp; + txgbe_physical_layer link_mode; +}; + +#include "txgbe_mbx.h" + +struct txgbe_mbx_operations { + void (*init_params)(struct txgbe_hw *hw); + s32 (*read)(struct txgbe_hw *, u32 *, u16, u16); + s32 (*write)(struct txgbe_hw *, u32 *, u16, u16); + s32 (*read_posted)(struct txgbe_hw *, u32 *, u16, u16); + s32 (*write_posted)(struct txgbe_hw *, u32 *, u16, u16); + s32 (*check_for_msg)(struct txgbe_hw *, u16); + s32 (*check_for_ack)(struct txgbe_hw *, u16); + s32 (*check_for_rst)(struct txgbe_hw *, u16); +}; + +struct txgbe_mbx_stats { + u32 msgs_tx; + u32 msgs_rx; + + u32 acks; + u32 reqs; + u32 rsts; +}; + +struct txgbe_mbx_info { + struct txgbe_mbx_operations ops; + struct txgbe_mbx_stats stats; + u32 timeout; + u32 udelay; + u32 v2p_mailbox; + u16 size; +}; + +enum txgbe_reset_type { + TXGBE_LAN_RESET = 0, + TXGBE_SW_RESET, + TXGBE_GLOBAL_RESET +}; + +enum txgbe_link_status { + TXGBE_LINK_STATUS_NONE = 0, + TXGBE_LINK_STATUS_KX, + TXGBE_LINK_STATUS_KX4 +}; + +struct txgbe_hw { + u8 __iomem *hw_addr; + void *back; + struct txgbe_mac_info mac; + struct txgbe_addr_filter_info addr_ctrl; + struct txgbe_fc_info fc; + struct txgbe_phy_info phy; + struct txgbe_eeprom_info eeprom; + struct txgbe_flash_info flash; + struct txgbe_bus_info bus; + struct txgbe_mbx_info mbx; + u16 device_id; + u16 vendor_id; + u16 subsystem_device_id; + u16 subsystem_vendor_id; + u8 revision_id; + bool adapter_stopped; + int api_version; + enum txgbe_reset_type reset_type; + bool force_full_reset; + bool allow_unsupported_sfp; + bool wol_enabled; +#if defined(TXGBE_SUPPORT_KYLIN_FT) + bool Fdir_enabled; +#endif + MTD_DEV phy_dev; + enum txgbe_link_status link_status; + 
u16 subsystem_id; + u16 tpid[8]; +}; + +#define TCALL(hw, func, args...) (((hw)->func != NULL) \ + ? (hw)->func((hw), ##args) : TXGBE_NOT_IMPLEMENTED) + +/* Error Codes */ +#define TXGBE_ERR 100 +#define TXGBE_NOT_IMPLEMENTED 0x7FFFFFFF +/* (-TXGBE_ERR, TXGBE_ERR): reserved for non-txgbe defined error code */ +#define TXGBE_ERR_NOSUPP -(TXGBE_ERR+0) +#define TXGBE_ERR_EEPROM -(TXGBE_ERR+1) +#define TXGBE_ERR_EEPROM_CHECKSUM -(TXGBE_ERR+2) +#define TXGBE_ERR_PHY -(TXGBE_ERR+3) +#define TXGBE_ERR_CONFIG -(TXGBE_ERR+4) +#define TXGBE_ERR_PARAM -(TXGBE_ERR+5) +#define TXGBE_ERR_MAC_TYPE -(TXGBE_ERR+6) +#define TXGBE_ERR_UNKNOWN_PHY -(TXGBE_ERR+7) +#define TXGBE_ERR_LINK_SETUP -(TXGBE_ERR+8) +#define TXGBE_ERR_ADAPTER_STOPPED -(TXGBE_ERR+9) +#define TXGBE_ERR_INVALID_MAC_ADDR -(TXGBE_ERR+10) +#define TXGBE_ERR_DEVICE_NOT_SUPPORTED -(TXGBE_ERR+11) +#define TXGBE_ERR_MASTER_REQUESTS_PENDING -(TXGBE_ERR+12) +#define TXGBE_ERR_INVALID_LINK_SETTINGS -(TXGBE_ERR+13) +#define TXGBE_ERR_AUTONEG_NOT_COMPLETE -(TXGBE_ERR+14) +#define TXGBE_ERR_RESET_FAILED -(TXGBE_ERR+15) +#define TXGBE_ERR_SWFW_SYNC -(TXGBE_ERR+16) +#define TXGBE_ERR_PHY_ADDR_INVALID -(TXGBE_ERR+17) +#define TXGBE_ERR_I2C -(TXGBE_ERR+18) +#define TXGBE_ERR_SFP_NOT_SUPPORTED -(TXGBE_ERR+19) +#define TXGBE_ERR_SFP_NOT_PRESENT -(TXGBE_ERR+20) +#define TXGBE_ERR_SFP_NO_INIT_SEQ_PRESENT -(TXGBE_ERR+21) +#define TXGBE_ERR_NO_SAN_ADDR_PTR -(TXGBE_ERR+22) +#define TXGBE_ERR_FDIR_REINIT_FAILED -(TXGBE_ERR+23) +#define TXGBE_ERR_EEPROM_VERSION -(TXGBE_ERR+24) +#define TXGBE_ERR_NO_SPACE -(TXGBE_ERR+25) +#define TXGBE_ERR_OVERTEMP -(TXGBE_ERR+26) +#define TXGBE_ERR_UNDERTEMP -(TXGBE_ERR+27) +#define TXGBE_ERR_FC_NOT_NEGOTIATED -(TXGBE_ERR+28) +#define TXGBE_ERR_FC_NOT_SUPPORTED -(TXGBE_ERR+29) +#define TXGBE_ERR_SFP_SETUP_NOT_COMPLETE -(TXGBE_ERR+30) +#define TXGBE_ERR_PBA_SECTION -(TXGBE_ERR+31) +#define TXGBE_ERR_INVALID_ARGUMENT -(TXGBE_ERR+32) +#define TXGBE_ERR_HOST_INTERFACE_COMMAND -(TXGBE_ERR+33) +#define
TXGBE_ERR_OUT_OF_MEM -(TXGBE_ERR+34) +#define TXGBE_ERR_FEATURE_NOT_SUPPORTED -(TXGBE_ERR+36) +#define TXGBE_ERR_EEPROM_PROTECTED_REGION -(TXGBE_ERR+37) +#define TXGBE_ERR_FDIR_CMD_INCOMPLETE -(TXGBE_ERR+38) +#define TXGBE_ERR_FLASH_LOADING_FAILED -(TXGBE_ERR+39) +#define TXGBE_ERR_XPCS_POWER_UP_FAILED -(TXGBE_ERR+40) +#define TXGBE_ERR_FW_RESP_INVALID -(TXGBE_ERR+41) +#define TXGBE_ERR_PHY_INIT_NOT_DONE -(TXGBE_ERR+42) +#define TXGBE_ERR_TIMEOUT -(TXGBE_ERR+43) +#define TXGBE_ERR_TOKEN_RETRY -(TXGBE_ERR+44) +#define TXGBE_ERR_REGISTER -(TXGBE_ERR+45) +#define TXGBE_ERR_MBX -(TXGBE_ERR+46) +#define TXGBE_ERR_MNG_ACCESS_FAILED -(TXGBE_ERR+47) + +/** + * register operations + **/ +/* read register */ +#define TXGBE_DEAD_READ_RETRIES 10 +#define TXGBE_DEAD_READ_REG 0xdeadbeefU +#define TXGBE_DEAD_READ_REG64 0xdeadbeefdeadbeefULL +#define TXGBE_FAILED_READ_REG 0xffffffffU +#define TXGBE_FAILED_READ_REG64 0xffffffffffffffffULL + +static inline bool TXGBE_REMOVED(void __iomem *addr) +{ + return unlikely(!addr); +} + +static inline u32 +txgbe_rd32(u8 __iomem *base) +{ + return readl(base); +} + +static inline u32 +rd32(struct txgbe_hw *hw, u32 reg) +{ + u8 __iomem *base = READ_ONCE(hw->hw_addr); + u32 val = TXGBE_FAILED_READ_REG; + + if (unlikely(!base)) + return val; + + val = txgbe_rd32(base + reg); + + return val; +} +#define rd32a(a, reg, offset) ( \ + rd32((a), (reg) + ((offset) << 2))) + +static inline u32 +rd32m(struct txgbe_hw *hw, u32 reg, u32 mask) +{ + u8 __iomem *base = READ_ONCE(hw->hw_addr); + u32 val = TXGBE_FAILED_READ_REG; + + if (unlikely(!base)) + return val; + + val = txgbe_rd32(base + reg); + if (unlikely(val == TXGBE_FAILED_READ_REG)) + return val; + + return val & mask; +} + +/* write register */ +static inline void +txgbe_wr32(u8 __iomem *base, u32 val) +{ + writel(val, base); +} + +static inline void +wr32(struct txgbe_hw *hw, u32 reg, u32 val) +{ + u8 __iomem *base = READ_ONCE(hw->hw_addr); + + if (unlikely(!base)) + return; + + txgbe_wr32(base + 
reg, val); +} +#define wr32a(a, reg, off, val) \ + wr32((a), (reg) + ((off) << 2), (val)) + +static inline void +wr32m(struct txgbe_hw *hw, u32 reg, u32 mask, u32 field) +{ + u8 __iomem *base = READ_ONCE(hw->hw_addr); + u32 val; + + if (unlikely(!base)) + return; + + val = txgbe_rd32(base + reg); + if (unlikely(val == TXGBE_FAILED_READ_REG)) + return; + + val = ((val & ~mask) | (field & mask)); + txgbe_wr32(base + reg, val); +} + +/* poll register */ +#define TXGBE_MDIO_TIMEOUT 1000 +#define TXGBE_I2C_TIMEOUT 1000 +#define TXGBE_SPI_TIMEOUT 1000 +static inline s32 +po32m(struct txgbe_hw *hw, u32 reg, + u32 mask, u32 field, int usecs, int count) +{ + int loop; + + loop = (count ? count : (usecs + 9) / 10); + usecs = (loop ? (usecs + loop - 1) / loop : 0); + + count = loop; + do { + u32 value = rd32(hw, reg); + if ((value & mask) == (field & mask)) { + break; + } + + if (loop-- <= 0) + break; + + udelay(usecs); + } while (true); + + return (count - loop <= count ? 0 : TXGBE_ERR_TIMEOUT); +} + +#define TXGBE_WRITE_FLUSH(H) rd32(H, TXGBE_MIS_PWR) + +#endif /* _TXGBE_TYPE_H_ */
From: zhenpengzheng <zhenpengzheng@net-swift.com>
driver inclusion
category: feature
bugzilla: 50777
CVE: NA
-------------------------------------------------------------------------

Set CONFIG_TXGBE=m so that the txgbe .ko module can be distributed in the ISO on x86.
Signed-off-by: zhenpengzheng <zhenpengzheng@net-swift.com>
Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
---
 arch/x86/configs/openeuler_defconfig | 2 ++
 1 file changed, 2 insertions(+)
diff --git a/arch/x86/configs/openeuler_defconfig b/arch/x86/configs/openeuler_defconfig
index 36f41978fd8c..208c408c358a 100644
--- a/arch/x86/configs/openeuler_defconfig
+++ b/arch/x86/configs/openeuler_defconfig
@@ -2506,6 +2506,8 @@ CONFIG_BE2NET_SKYHAWK=y
 # CONFIG_NET_VENDOR_EZCHIP is not set
 # CONFIG_NET_VENDOR_HP is not set
 # CONFIG_NET_VENDOR_I825XX is not set
+CONFIG_NET_VENDOR_NETSWIFT=y
+CONFIG_TXGBE=m
 CONFIG_NET_VENDOR_INTEL=y
 # CONFIG_E100 is not set
 CONFIG_E1000=m
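The hunk above only adds two lines: the vendor gate and the module symbol. After regenerating a config it is still worth confirming both survived, since olddefconfig silently drops an option whose dependencies are unmet. A minimal sanity check, run here against a stand-in fragment rather than a real build tree (the file path is illustrative):

```shell
# Stand-in for the generated config; in a real tree this would be
# arch/x86/configs/openeuler_defconfig or the build's .config.
cat > /tmp/defconfig.sample <<'EOF'
CONFIG_NET_VENDOR_NETSWIFT=y
CONFIG_TXGBE=m
EOF

# Both symbols must match exactly once; prints 2 when the fragment landed.
grep -c -E '^CONFIG_(NET_VENDOR_NETSWIFT=y|TXGBE=m)$' /tmp/defconfig.sample
```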
From: Yang Yingliang <yangyingliang@huawei.com>
driver inclusion
category: feature
bugzilla: 50777
CVE: NA
-------------------------------------------------------------------------

Enable CONFIG_TXGBE by default on arm64 for compile testing.
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
---
 arch/arm64/configs/hulk_defconfig | 2 ++
 1 file changed, 2 insertions(+)
diff --git a/arch/arm64/configs/hulk_defconfig b/arch/arm64/configs/hulk_defconfig
index 9467ed0c3ca0..951c56437ac1 100644
--- a/arch/arm64/configs/hulk_defconfig
+++ b/arch/arm64/configs/hulk_defconfig
@@ -2453,6 +2453,8 @@ CONFIG_I40E=m
 CONFIG_I40EVF=m
 CONFIG_ICE=m
 CONFIG_FM10K=m
+CONFIG_NET_VENDOR_NETSWIFT=y
+CONFIG_TXGBE=m
 # CONFIG_JME is not set
 # CONFIG_NET_VENDOR_MARVELL is not set
 CONFIG_NET_VENDOR_MELLANOX=y
From: Li ZhiGang <lizhigang@kylinos.cn>
driver inclusion
category: feature
bugzilla: 50797
CVE: NA
-------------------------------------------------------------------------
Nationz Tech TCM chips are used for trusted computing; the chip is attached via SPI or LPC. We did a brief verify/test of this driver on a KunPeng920 + openEuler system, with an externally compiled module.
Signed-off-by: Li ZhiGang <lizhigang@kylinos.cn>
Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
---
 drivers/staging/Kconfig               |   2 +
 drivers/staging/Makefile              |   1 +
 drivers/staging/gmjstcm/Kconfig       |  21 +
 drivers/staging/gmjstcm/Makefile      |   3 +
 drivers/staging/gmjstcm/tcm.c         | 949 ++++++++++++++++++++++++++
 drivers/staging/gmjstcm/tcm.h         | 122 ++++
 drivers/staging/gmjstcm/tcm_tis_spi.c | 847 +++++++++++++++++++++++
 7 files changed, 1945 insertions(+)
 create mode 100644 drivers/staging/gmjstcm/Kconfig
 create mode 100644 drivers/staging/gmjstcm/Makefile
 create mode 100644 drivers/staging/gmjstcm/tcm.c
 create mode 100644 drivers/staging/gmjstcm/tcm.h
 create mode 100644 drivers/staging/gmjstcm/tcm_tis_spi.c
diff --git a/drivers/staging/Kconfig b/drivers/staging/Kconfig
index 1abf76be2aa8..d51fa4f4e7ca 100644
--- a/drivers/staging/Kconfig
+++ b/drivers/staging/Kconfig
@@ -126,4 +126,6 @@ source "drivers/staging/axis-fifo/Kconfig"
 
 source "drivers/staging/erofs/Kconfig"
 
+source "drivers/staging/gmjstcm/Kconfig"
+
 endif # STAGING
diff --git a/drivers/staging/Makefile b/drivers/staging/Makefile
index ab0cbe8815b1..1562b51985d0 100644
--- a/drivers/staging/Makefile
+++ b/drivers/staging/Makefile
@@ -53,3 +53,4 @@ obj-$(CONFIG_SOC_MT7621) += mt7621-dts/
 obj-$(CONFIG_STAGING_GASKET_FRAMEWORK) += gasket/
 obj-$(CONFIG_XIL_AXIS_FIFO) += axis-fifo/
 obj-$(CONFIG_EROFS_FS) += erofs/
+obj-$(CONFIG_GMJS_TCM) += gmjstcm/
diff --git a/drivers/staging/gmjstcm/Kconfig b/drivers/staging/gmjstcm/Kconfig
new file mode 100644
index 000000000000..5b5397ae1832
--- /dev/null
+++ b/drivers/staging/gmjstcm/Kconfig
@@ -0,0 +1,21 @@
+menu "GMJS TCM support"
+
+config GMJS_TCM
+	bool
+
+config GMJS_TCM_CORE
+	tristate "GMJS TCM core support"
+	depends on ARM64 || MIPS
+	default m
+	select GMJS_TCM
+	help
+	  GMJS TCM core support.
+
+config GMJS_TCM_SPI
+	tristate "GMJS TCM support on SPI interface"
+	depends on GMJS_TCM_CORE && SPI_MASTER
+	default m
+	help
+	  GMJS TCM support on SPI interface.
+
+endmenu
diff --git a/drivers/staging/gmjstcm/Makefile b/drivers/staging/gmjstcm/Makefile
new file mode 100644
index 000000000000..369f01119372
--- /dev/null
+++ b/drivers/staging/gmjstcm/Makefile
@@ -0,0 +1,3 @@
+obj-$(CONFIG_GMJS_TCM_CORE) += tcm_core.o
+tcm_core-objs := tcm.o
+obj-$(CONFIG_GMJS_TCM_SPI) += tcm_tis_spi.o
diff --git a/drivers/staging/gmjstcm/tcm.c b/drivers/staging/gmjstcm/tcm.c
new file mode 100644
index 000000000000..5c41bfa8b423
--- /dev/null
+++ b/drivers/staging/gmjstcm/tcm.c
@@ -0,0 +1,949 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2009 Nationz Technologies Inc.
+ *
+ * Description: Export symbol for tcm_tis module
+ *
+ * Major Function: public write read register function etc.
+ * + */ + +#include <linux/sched.h> +#include <linux/poll.h> +#include <linux/spinlock.h> +#include <linux/timer.h> +#include "tcm.h" + +/* + * const var + */ +enum tcm_const { + TCM_MINOR = 224, /* officially assigned */ + TCM_BUFSIZE = 2048, /* Buffer Size */ + TCM_NUM_DEVICES = 256, /* Max supporting tcm device number */ +}; + +/* + * CMD duration + */ +enum tcm_duration { + TCM_SHORT = 0, + TCM_MEDIUM = 1, + TCM_LONG = 2, + TCM_UNDEFINED, +}; + +/* Max Total of Command Number */ +#define TCM_MAX_ORDINAL 88 /*243*/ + +static LIST_HEAD(tcm_chip_list); +static DEFINE_SPINLOCK(driver_lock); /* spin lock */ +static DECLARE_BITMAP(dev_mask, TCM_NUM_DEVICES); + +typedef struct tagTCM_Command { + u8 ordinal; + u8 DURATION; +} TCM_Command; + +static const TCM_Command TCM_Command_List[TCM_MAX_ORDINAL + 1] = { + {/*TCM_ORD_ActivateIdentity, */122, 1}, + {/*TCM_ORD_CertifyKey, */50, 1}, + {/*TCM_ORD_CertifyKeyM, */51, 1}, + {/*TCM_ORD_ChangeAuth, */12, 1}, + {/*TCM_ORD_ChangeAuthOwner, */16, 0}, + {/*TCM_ORD_ContinueSelfTeSt, */83, 2}, + {/*TCM_ORD_CreateCounter, */220, 0}, + {/*TCM_ORD_CreateWrapKey, */31, 2}, + {/*TCM_ORD_DiSableForceClear, */94, 0}, + {/*TCM_ORD_DiSableOwnerClear, */92, 0}, + {/*TCM_ORD_EStabliShTranSport, */230, 0}, + {/*TCM_ORD_ExecuteTranSport, */231, 2}, + {/*TCM_ORD_Extend, */20, 0}, + {/*TCM_ORD_FieldUpgrade, */170, 2}, + {/*TCM_ORD_FluShSpecific, */186, 0}, + {/*TCM_ORD_ForceClear, */93, 0}, + {/*TCM_ORD_GetAuditDigeSt, */133, 0}, + {/*TCM_ORD_GetAuditDigeStSigned, */134, 1}, + {/*TCM_ORD_GetCapability, */101, 0}, + {/*TCM_ORD_GetPubKey, */33, 0}, + {/*TCM_ORD_GetRandoM, */70, 0}, + {/*TCM_ORD_GetTeStReSult, */84, 0}, + {/*TCM_ORD_GetTickS, */241, 0}, + {/*TCM_ORD_IncreMentCounter, */221, 0}, + {/*TCM_ORD_LoadContext, */185, 1}, + {/*TCM_ORD_MakeIdentity, */121, 2}, + {/*TCM_ORD_NV_DefineSpace, */204, 0}, + {/*TCM_ORD_NV_ReadValue, */207, 0}, + {/*TCM_ORD_NV_ReadValueAuth, */208, 0}, + {/*TCM_ORD_NV_WriteValue, */205, 0}, + 
{/*TCM_ORD_NV_WriteValueAuth, */206, 0}, + {/*TCM_ORD_OwnerClear, */91, 0}, + {/*TCM_ORD_OwnerReadInternalPub, */129, 0}, + {/*TCM_ORD_OwnerSetDiSable, */110, 0}, + {/*TCM_ORD_PCR_ReSet, */200, 0}, + {/*TCM_ORD_PcrRead, */21, 0}, + {/*TCM_ORD_PhySicalDiSable, */112, 0}, + {/*TCM_ORD_PhySicalEnable, */111, 0}, + {/*TCM_ORD_PhySicalSetDeactivated, */114, 0}, + {/*TCM_ORD_Quote, */22, 1}, + {/*TCM_ORD_QuoteM, */62, 1}, + {/*TCM_ORD_ReadCounter, */222, 0}, + {/*TCM_ORD_ReadPubek, */124, 0}, + {/*TCM_ORD_ReleaSeCounter, */223, 0}, + {/*TCM_ORD_ReleaSeCounterOwner, */224, 0}, + {/*TCM_ORD_ReleaSeTranSportSigned, */232, 1}, + {/*TCM_ORD_ReSetLockValue, */64, 0}, + {/*TCM_ORD_RevokeTruSt, */128, 0}, + {/*TCM_ORD_SaveContext, */184, 1}, + {/*TCM_ORD_SaveState, */152, 1}, + {/*TCM_ORD_Seal, */23, 1}, + {/*TCM_ORD_Sealx, */61, 1}, + {/*TCM_ORD_SelfTeStFull, */80, 2}, + {/*TCM_ORD_SetCapability, */63, 0}, + {/*TCM_ORD_SetOperatorAuth, */116, 0}, + {/*TCM_ORD_SetOrdinalAuditStatuS, */141, 0}, + {/*TCM_ORD_SetOwnerInStall, */113, 0}, + {/*TCM_ORD_SetTeMpDeactivated, */115, 0}, + {/*TCM_ORD_Sign, */60, 1}, + {/*TCM_ORD_Startup, */153, 0}, + {/*TCM_ORD_TakeOwnerShip, */13, 1}, + {/*TCM_ORD_TickStaMpBlob, */242, 1}, + {/*TCM_ORD_UnSeal, */24, 1}, + {/*TSC_ORD_PhySicalPreSence, */10, 0}, + {/*TSC_ORD_ReSetEStabliShMentBit, */11, 0}, + {/*TCM_ORD_WrapKey, */189, 2}, + {/*TCM_ORD_APcreate, */191, 0}, + {/*TCM_ORD_APTerMinate, */192, 0}, + {/*TCM_ORD_CreateMigratedBlob, */193, 1}, + {/*TCM_ORD_ConvertMigratedBlob, */194, 1}, + {/*TCM_ORD_AuthorizeMigrationKey, */195, 0}, + {/*TCM_ORD_SMS4Encrypt, */197, 1}, + {/*TCM_ORD_SMS4Decrypt, */198, 1}, + {/*TCM_ORD_ReadEKCert, */199, 1}, + {/*TCM_ORD_WriteEKCert, */233, 1}, + {/*TCM_ORD_SCHStart, */234, 0}, + {/*TCM_ORD_SCHUpdata, */235, 0}, + {/*TCM_ORD_SCHCoMplete, */236, 0}, + {/*TCM_ORD_SCHCoMpleteExtend, */237, 0}, + {/*TCM_ORD_ECCDecrypt, */238, 1}, + {/*TCM_ORD_LoadKey, */239, 1}, + {/*TCM_ORD_CreateEndorSeMentKeyPair, */120, 2}, + 
{/*TCM_ORD_CreateRevocableEK, */127, 2}, + {/*TCM_ORD_ReleaSeECCExchangeSeSSion, */174, 1}, + {/*TCM_ORD_CreateECCExchangeSeSSion, */175, 1}, + {/*TCM_ORD_GetKeyECCExchangeSeSSion, */176, 1}, + {/*TCM_ORD_ActivatePEK, */217, 1}, + {/*TCM_ORD_ActivatePEKCert, */218, 1}, + {0, 0} +}; + +static void user_reader_timeout(struct timer_list *t) +{ + struct tcm_chip *chip = from_timer(chip, t, user_read_timer); + + schedule_work(&chip->work); +} + +static void timeout_work(struct work_struct *work) +{ + struct tcm_chip *chip = container_of(work, struct tcm_chip, work); + + mutex_lock(&chip->buffer_mutex); + atomic_set(&chip->data_pending, 0); + memset(chip->data_buffer, 0, TCM_BUFSIZE); + mutex_unlock(&chip->buffer_mutex); +} + +unsigned long tcm_calc_ordinal_duration(struct tcm_chip *chip, + u32 ordinal) +{ + int duration_idx = TCM_UNDEFINED; + int duration = 0; + int i = 0; + + for (i = 0; i < TCM_MAX_ORDINAL; i++) { + if (ordinal == TCM_Command_List[i].ordinal) { + duration_idx = TCM_Command_List[i].DURATION; + break; + } + } + + if (duration_idx != TCM_UNDEFINED) + duration = chip->vendor.duration[duration_idx]; + if (duration <= 0) + return 2 * 60 * HZ; + else + return duration; +} +EXPORT_SYMBOL_GPL(tcm_calc_ordinal_duration); + +/* + * Internal kernel interface to transmit TCM commands + * buff format: TAG(2 bytes) + Total Size(4 bytes ) + + * Command Ordinal(4 bytes ) + ...... 
+ */ +static ssize_t tcm_transmit(struct tcm_chip *chip, const char *buf, + size_t bufsiz) +{ + ssize_t rc = 0; + u32 count = 0, ordinal = 0; + unsigned long stop = 0; + + count = be32_to_cpu(*((__be32 *)(buf + 2))); /* buff size */ + ordinal = be32_to_cpu(*((__be32 *)(buf + 6))); /* command ordinal */ + + if (count == 0) + return -ENODATA; + if (count > bufsiz) { /* buff size err ,invalid buff stream */ + dev_err(chip->dev, "invalid count value %x, %zx\n", + count, bufsiz); + return -E2BIG; + } + + mutex_lock(&chip->tcm_mutex); /* enter mutex */ + + rc = chip->vendor.send(chip, (u8 *)buf, count); + if (rc < 0) { + dev_err(chip->dev, "%s: tcm_send: error %zd\n", + __func__, rc); + goto out; + } + + if (chip->vendor.irq) + goto out_recv; + + stop = jiffies + tcm_calc_ordinal_duration(chip, + ordinal); /* cmd duration */ + do { + u8 status = chip->vendor.status(chip); + + if ((status & chip->vendor.req_complete_mask) == + chip->vendor.req_complete_val) + goto out_recv; + + if ((status == chip->vendor.req_canceled)) { + dev_err(chip->dev, "Operation Canceled\n"); + rc = -ECANCELED; + goto out; + } + + msleep(TCM_TIMEOUT); /* CHECK */ + rmb(); + } while (time_before(jiffies, stop)); + /* time out */ + chip->vendor.cancel(chip); + dev_err(chip->dev, "Operation Timed out\n"); + rc = -ETIME; + goto out; + +out_recv: + rc = chip->vendor.recv(chip, (u8 *)buf, bufsiz); + if (rc < 0) + dev_err(chip->dev, "%s: tcm_recv: error %zd\n", + __func__, rc); +out: + mutex_unlock(&chip->tcm_mutex); + return rc; +} + +#define TCM_DIGEST_SIZE 32 +#define TCM_ERROR_SIZE 10 +#define TCM_RET_CODE_IDX 6 +#define TCM_GET_CAP_RET_SIZE_IDX 10 +#define TCM_GET_CAP_RET_UINT32_1_IDX 14 +#define TCM_GET_CAP_RET_UINT32_2_IDX 18 +#define TCM_GET_CAP_RET_UINT32_3_IDX 22 +#define TCM_GET_CAP_RET_UINT32_4_IDX 26 +#define TCM_GET_CAP_PERM_DISABLE_IDX 16 +#define TCM_GET_CAP_PERM_INACTIVE_IDX 18 +#define TCM_GET_CAP_RET_BOOL_1_IDX 14 +#define TCM_GET_CAP_TEMP_INACTIVE_IDX 16 + +#define TCM_CAP_IDX 13 
+#define TCM_CAP_SUBCAP_IDX 21 + +enum tcm_capabilities { + TCM_CAP_FLAG = 4, + TCM_CAP_PROP = 5, +}; + +enum tcm_sub_capabilities { + TCM_CAP_PROP_PCR = 0x1, /* tcm 0x101 */ + TCM_CAP_PROP_MANUFACTURER = 0x3, /* tcm 0x103 */ + TCM_CAP_FLAG_PERM = 0x8, /* tcm 0x108 */ + TCM_CAP_FLAG_VOL = 0x9, /* tcm 0x109 */ + TCM_CAP_PROP_OWNER = 0x11, /* tcm 0x101 */ + TCM_CAP_PROP_TIS_TIMEOUT = 0x15, /* tcm 0x115 */ + TCM_CAP_PROP_TIS_DURATION = 0x20, /* tcm 0x120 */ +}; + +/* + * This is a semi generic GetCapability command for use + * with the capability type TCM_CAP_PROP or TCM_CAP_FLAG + * and their associated sub_capabilities. + */ + +static const u8 tcm_cap[] = { + 0, 193, /* TCM_TAG_RQU_COMMAND 0xc1*/ + 0, 0, 0, 22, /* length */ + 0, 0, 128, 101, /* TCM_ORD_GetCapability */ + 0, 0, 0, 0, /* TCM_CAP_<TYPE> */ + 0, 0, 0, 4, /* TCM_CAP_SUB_<TYPE> size */ + 0, 0, 1, 0 /* TCM_CAP_SUB_<TYPE> */ +}; + +static ssize_t transmit_cmd(struct tcm_chip *chip, u8 *data, int len, + char *desc) +{ + int err = 0; + + len = tcm_transmit(chip, data, len); + if (len < 0) + return len; + if (len == TCM_ERROR_SIZE) { + err = be32_to_cpu(*((__be32 *)(data + TCM_RET_CODE_IDX))); + dev_dbg(chip->dev, "A TCM error (%d) occurred %s\n", err, desc); + return err; + } + return 0; +} + +/* + * Get default timeouts value form tcm by GetCapability with TCM_CAP_PROP_TIS_TIMEOUT prop + */ +void tcm_get_timeouts(struct tcm_chip *chip) +{ + u8 data[max_t(int, ARRAY_SIZE(tcm_cap), 30)]; + ssize_t rc = 0; + u32 timeout = 0; + + memcpy(data, tcm_cap, sizeof(tcm_cap)); + data[TCM_CAP_IDX] = TCM_CAP_PROP; + data[TCM_CAP_SUBCAP_IDX] = TCM_CAP_PROP_TIS_TIMEOUT; + + rc = transmit_cmd(chip, data, sizeof(data), + "attempting to determine the timeouts"); + if (rc) + goto duration; + + if (be32_to_cpu(*((__be32 *)(data + TCM_GET_CAP_RET_SIZE_IDX))) != + 4 * sizeof(u32)) + goto duration; + + /* Don't overwrite default if value is 0 */ + timeout = be32_to_cpu(*((__be32 *)(data + TCM_GET_CAP_RET_UINT32_1_IDX))); + if 
(timeout) + chip->vendor.timeout_a = msecs_to_jiffies(timeout); + timeout = be32_to_cpu(*((__be32 *)(data + TCM_GET_CAP_RET_UINT32_2_IDX))); + if (timeout) + chip->vendor.timeout_b = msecs_to_jiffies(timeout); + timeout = be32_to_cpu(*((__be32 *)(data + TCM_GET_CAP_RET_UINT32_3_IDX))); + if (timeout) + chip->vendor.timeout_c = msecs_to_jiffies(timeout); + timeout = be32_to_cpu(*((__be32 *)(data + TCM_GET_CAP_RET_UINT32_4_IDX))); + if (timeout) + chip->vendor.timeout_d = msecs_to_jiffies(timeout); + +duration: + memcpy(data, tcm_cap, sizeof(tcm_cap)); + data[TCM_CAP_IDX] = TCM_CAP_PROP; + data[TCM_CAP_SUBCAP_IDX] = TCM_CAP_PROP_TIS_DURATION; + + rc = transmit_cmd(chip, data, sizeof(data), + "attempting to determine the durations"); + if (rc) + return; + + if (be32_to_cpu(*((__be32 *)(data + TCM_GET_CAP_RET_SIZE_IDX))) != + 3 * sizeof(u32)) + return; + + chip->vendor.duration[TCM_SHORT] = + msecs_to_jiffies(be32_to_cpu(*((__be32 *)(data + + TCM_GET_CAP_RET_UINT32_1_IDX)))); + chip->vendor.duration[TCM_MEDIUM] = + msecs_to_jiffies(be32_to_cpu(*((__be32 *)(data + + TCM_GET_CAP_RET_UINT32_2_IDX)))); + chip->vendor.duration[TCM_LONG] = + msecs_to_jiffies(be32_to_cpu(*((__be32 *)(data + + TCM_GET_CAP_RET_UINT32_3_IDX)))); +} +EXPORT_SYMBOL_GPL(tcm_get_timeouts); + +ssize_t tcm_show_enabled(struct device *dev, struct device_attribute *attr, + char *buf) +{ + u8 data[max_t(int, ARRAY_SIZE(tcm_cap), 35)]; + ssize_t rc = 0; + struct tcm_chip *chip = dev_get_drvdata(dev); + + if (chip == NULL) + return -ENODEV; + + memcpy(data, tcm_cap, sizeof(tcm_cap)); + data[TCM_CAP_IDX] = TCM_CAP_FLAG; + data[TCM_CAP_SUBCAP_IDX] = TCM_CAP_FLAG_PERM; + + rc = transmit_cmd(chip, data, sizeof(data), + "attemtping to determine the permanent state"); + if (rc) + return 0; + if (data[TCM_GET_CAP_PERM_DISABLE_IDX]) + return sprintf(buf, "disable\n"); + else + return sprintf(buf, "enable\n"); +} +EXPORT_SYMBOL_GPL(tcm_show_enabled); + +ssize_t tcm_show_active(struct device *dev, struct 
device_attribute *attr, + char *buf) +{ + u8 data[max_t(int, ARRAY_SIZE(tcm_cap), 35)]; + ssize_t rc = 0; + struct tcm_chip *chip = dev_get_drvdata(dev); + + if (chip == NULL) + return -ENODEV; + + memcpy(data, tcm_cap, sizeof(tcm_cap)); + data[TCM_CAP_IDX] = TCM_CAP_FLAG; + data[TCM_CAP_SUBCAP_IDX] = TCM_CAP_FLAG_PERM; + + rc = transmit_cmd(chip, data, sizeof(data), + "attemtping to determine the permanent state"); + if (rc) + return 0; + if (data[TCM_GET_CAP_PERM_INACTIVE_IDX]) + return sprintf(buf, "deactivated\n"); + else + return sprintf(buf, "activated\n"); +} +EXPORT_SYMBOL_GPL(tcm_show_active); + +ssize_t tcm_show_owned(struct device *dev, struct device_attribute *attr, + char *buf) +{ + u8 data[sizeof(tcm_cap)]; + ssize_t rc = 0; + struct tcm_chip *chip = dev_get_drvdata(dev); + + if (chip == NULL) + return -ENODEV; + + memcpy(data, tcm_cap, sizeof(tcm_cap)); + data[TCM_CAP_IDX] = TCM_CAP_PROP; + data[TCM_CAP_SUBCAP_IDX] = TCM_CAP_PROP_OWNER; + + rc = transmit_cmd(chip, data, sizeof(data), + "attempting to determine the owner state"); + if (rc) + return 0; + if (data[TCM_GET_CAP_RET_BOOL_1_IDX]) + return sprintf(buf, "Owner installed\n"); + else + return sprintf(buf, "Owner have not installed\n"); +} +EXPORT_SYMBOL_GPL(tcm_show_owned); + +ssize_t tcm_show_temp_deactivated(struct device *dev, + struct device_attribute *attr, char *buf) +{ + u8 data[sizeof(tcm_cap)]; + ssize_t rc = 0; + struct tcm_chip *chip = dev_get_drvdata(dev); + + if (chip == NULL) + return -ENODEV; + + memcpy(data, tcm_cap, sizeof(tcm_cap)); + data[TCM_CAP_IDX] = TCM_CAP_FLAG; + data[TCM_CAP_SUBCAP_IDX] = TCM_CAP_FLAG_VOL; + + rc = transmit_cmd(chip, data, sizeof(data), + "attempting to determine the temporary state"); + if (rc) + return 0; + if (data[TCM_GET_CAP_TEMP_INACTIVE_IDX]) + return sprintf(buf, "Temp deactivated\n"); + else + return sprintf(buf, "activated\n"); +} +EXPORT_SYMBOL_GPL(tcm_show_temp_deactivated); + +static const u8 pcrread[] = { + 0, 193, /* TCM_TAG_RQU_COMMAND 
*/ + 0, 0, 0, 14, /* length */ + 0, 0, 128, 21, /* TCM_ORD_PcrRead */ + 0, 0, 0, 0 /* PCR index */ +}; + +ssize_t tcm_show_pcrs(struct device *dev, struct device_attribute *attr, + char *buf) +{ + u8 data[1024]; + ssize_t rc = 0; + int i = 0, j = 0, num_pcrs = 0; + __be32 index = 0; + char *str = buf; + struct tcm_chip *chip = dev_get_drvdata(dev); + + if (chip == NULL) + return -ENODEV; + + memcpy(data, tcm_cap, sizeof(tcm_cap)); + data[TCM_CAP_IDX] = TCM_CAP_PROP; + data[TCM_CAP_SUBCAP_IDX] = TCM_CAP_PROP_PCR; + + rc = transmit_cmd(chip, data, sizeof(data), + "attempting to determine the number of PCRS"); + if (rc) + return 0; + + num_pcrs = be32_to_cpu(*((__be32 *)(data + 14))); + for (i = 0; i < num_pcrs; i++) { + memcpy(data, pcrread, sizeof(pcrread)); + index = cpu_to_be32(i); + memcpy(data + 10, &index, 4); + rc = transmit_cmd(chip, data, sizeof(data), + "attempting to read a PCR"); + if (rc) + goto out; + str += sprintf(str, "PCR-%02d: ", i); + for (j = 0; j < TCM_DIGEST_SIZE; j++) + str += sprintf(str, "%02X ", *(data + 10 + j)); + str += sprintf(str, "\n"); + memset(data, 0, 1024); + } +out: + return str - buf; +} +EXPORT_SYMBOL_GPL(tcm_show_pcrs); + +#define READ_PUBEK_RESULT_SIZE 128 +static const u8 readpubek[] = { + 0, 193, /* TCM_TAG_RQU_COMMAND */ + 0, 0, 0, 42, /* length */ + 0, 0, 128, 124, /* TCM_ORD_ReadPubek */ + 0, 0, 0, 0, 0, 0, 0, 0, /* NONCE */ + 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0 +}; + +ssize_t tcm_show_pubek(struct device *dev, struct device_attribute *attr, + char *buf) +{ + u8 data[READ_PUBEK_RESULT_SIZE] = {0}; + ssize_t err = 0; + int i = 0, rc = 0; + char *str = buf; + struct tcm_chip *chip = dev_get_drvdata(dev); + + if (chip == NULL) + return -ENODEV; + + memcpy(data, readpubek, sizeof(readpubek)); + + err = transmit_cmd(chip, data, sizeof(data), + "attempting to read the PUBEK"); + if (err) + goto out; + + str += sprintf(str, "PUBEK:"); + for (i = 0 ; i < 65 ; i++) { + if ((i) % 16 == 0) + 
str += sprintf(str, "\n"); + str += sprintf(str, "%02X ", data[i+10]); + } + + str += sprintf(str, "\n"); +out: + rc = str - buf; + return rc; +} +EXPORT_SYMBOL_GPL(tcm_show_pubek); + +#define CAP_VERSION_1_1 6 +#define CAP_VERSION_1_2 0x1A +#define CAP_VERSION_IDX 13 +static const u8 cap_version[] = { + 0, 193, /* TCM_TAG_RQU_COMMAND */ + 0, 0, 0, 18, /* length */ + 0, 0, 128, 101, /* TCM_ORD_GetCapability */ + 0, 0, 0, 0, + 0, 0, 0, 0 +}; + +ssize_t tcm_show_caps(struct device *dev, struct device_attribute *attr, + char *buf) +{ + u8 data[max_t(int, max(ARRAY_SIZE(tcm_cap), ARRAY_SIZE(cap_version)), 30)]; + ssize_t rc = 0; + char *str = buf; + struct tcm_chip *chip = dev_get_drvdata(dev); + + if (chip == NULL) + return -ENODEV; + + memcpy(data, tcm_cap, sizeof(tcm_cap)); + data[TCM_CAP_IDX] = TCM_CAP_PROP; + data[TCM_CAP_SUBCAP_IDX] = TCM_CAP_PROP_MANUFACTURER; + + rc = transmit_cmd(chip, data, sizeof(data), + "attempting to determine the manufacturer"); + if (rc) + return 0; + + str += sprintf(str, "Manufacturer: 0x%x\n", + be32_to_cpu(*((__be32 *)(data + TCM_GET_CAP_RET_UINT32_1_IDX)))); + + memcpy(data, cap_version, sizeof(cap_version)); + data[CAP_VERSION_IDX] = CAP_VERSION_1_1; + rc = transmit_cmd(chip, data, sizeof(data), + "attempting to determine the 1.1 version"); + if (rc) + goto out; + + str += sprintf(str, "Firmware version: %02X.%02X.%02X.%02X\n", + (int)data[14], (int)data[15], (int)data[16], + (int)data[17]); + +out: + return str - buf; +} +EXPORT_SYMBOL_GPL(tcm_show_caps); + +ssize_t tcm_store_cancel(struct device *dev, struct device_attribute *attr, + const char *buf, size_t count) +{ + struct tcm_chip *chip = dev_get_drvdata(dev); + + if (chip == NULL) + return 0; + + chip->vendor.cancel(chip); + return count; +} +EXPORT_SYMBOL_GPL(tcm_store_cancel); + +/* + * Device file system interface to the TCM, + * called when an application opens the device node from user space. + */ +int tcm_open(struct inode *inode, struct file *file) +{ + int rc = 0, minor = 
iminor(inode); + struct tcm_chip *chip = NULL, *pos = NULL; + + spin_lock(&driver_lock); + + list_for_each_entry(pos, &tcm_chip_list, list) { + if (pos->vendor.miscdev.minor == minor) { + chip = pos; + break; + } + } + + if (chip == NULL) { + rc = -ENODEV; + goto err_out; + } + + if (chip->num_opens) { + dev_dbg(chip->dev, "Another process owns this TCM\n"); + rc = -EBUSY; + goto err_out; + } + + chip->num_opens++; + get_device(chip->dev); + + spin_unlock(&driver_lock); + + chip->data_buffer = kmalloc(TCM_BUFSIZE * sizeof(u8), GFP_KERNEL); + if (chip->data_buffer == NULL) { + chip->num_opens--; + put_device(chip->dev); + return -ENOMEM; + } + + atomic_set(&chip->data_pending, 0); + + file->private_data = chip; + return 0; + +err_out: + spin_unlock(&driver_lock); + return rc; +} +EXPORT_SYMBOL_GPL(tcm_open); + +int tcm_release(struct inode *inode, struct file *file) +{ + struct tcm_chip *chip = file->private_data; + + spin_lock(&driver_lock); + file->private_data = NULL; + chip->num_opens--; + del_singleshot_timer_sync(&chip->user_read_timer); + flush_work(&chip->work); + atomic_set(&chip->data_pending, 0); + put_device(chip->dev); + kfree(chip->data_buffer); + spin_unlock(&driver_lock); + return 0; +} +EXPORT_SYMBOL_GPL(tcm_release); + +ssize_t tcm_write(struct file *file, const char __user *buf, + size_t size, loff_t *off) +{ + struct tcm_chip *chip = file->private_data; + int in_size = size, out_size; + + /* + * cannot perform a write until the read has cleared + * either via tcm_read or a user_read_timer timeout + */ + while (atomic_read(&chip->data_pending) != 0) + msleep(TCM_TIMEOUT); + + mutex_lock(&chip->buffer_mutex); + + if (in_size > TCM_BUFSIZE) + in_size = TCM_BUFSIZE; + + if (copy_from_user(chip->data_buffer, (void __user *)buf, in_size)) { + mutex_unlock(&chip->buffer_mutex); + return -EFAULT; + } + + /* atomic tcm command send and result receive */ + out_size = tcm_transmit(chip, chip->data_buffer, TCM_BUFSIZE); + + if (out_size >= 0) { + 
atomic_set(&chip->data_pending, out_size); + mutex_unlock(&chip->buffer_mutex); + + /* Set a timeout by which the reader must come claim the result */ + mod_timer(&chip->user_read_timer, jiffies + (60 * HZ)); + } else + mutex_unlock(&chip->buffer_mutex); + + return in_size; +} +EXPORT_SYMBOL_GPL(tcm_write); + +ssize_t tcm_read(struct file *file, char __user *buf, + size_t size, loff_t *off) +{ + struct tcm_chip *chip = file->private_data; + int ret_size = 0; + + del_singleshot_timer_sync(&chip->user_read_timer); + flush_work(&chip->work); + ret_size = atomic_read(&chip->data_pending); + atomic_set(&chip->data_pending, 0); + if (ret_size > 0) { /* relay data */ + if (size < ret_size) + ret_size = size; + + mutex_lock(&chip->buffer_mutex); + if (copy_to_user(buf, chip->data_buffer, ret_size)) + ret_size = -EFAULT; + mutex_unlock(&chip->buffer_mutex); + } + + return ret_size; +} +EXPORT_SYMBOL_GPL(tcm_read); + +void tcm_remove_hardware(struct device *dev) +{ + struct tcm_chip *chip = dev_get_drvdata(dev); + + if (chip == NULL) { + dev_err(dev, "No device data found\n"); + return; + } + + spin_lock(&driver_lock); + list_del(&chip->list); + spin_unlock(&driver_lock); + + dev_set_drvdata(dev, NULL); + misc_deregister(&chip->vendor.miscdev); + kfree(chip->vendor.miscdev.name); + + sysfs_remove_group(&dev->kobj, chip->vendor.attr_group); + /* tcm_bios_log_teardown(chip->bios_dir); */ + + clear_bit(chip->dev_num, dev_mask); + kfree(chip); + put_device(dev); +} +EXPORT_SYMBOL_GPL(tcm_remove_hardware); + +static u8 savestate[] = { + 0, 193, /* TCM_TAG_RQU_COMMAND */ + 0, 0, 0, 10, /* blob length (in bytes) */ + 0, 0, 128, 152 /* TCM_ORD_SaveState */ +}; + +/* + * We are about to suspend. Save the TCM state + * so that it can be restored. 
+ */ +int tcm_pm_suspend(struct device *dev, pm_message_t pm_state) +{ + struct tcm_chip *chip = dev_get_drvdata(dev); + + if (chip == NULL) + return -ENODEV; + + tcm_transmit(chip, savestate, sizeof(savestate)); + return 0; +} +EXPORT_SYMBOL_GPL(tcm_pm_suspend); + +int tcm_pm_suspend_p(struct device *dev) +{ + struct tcm_chip *chip = dev_get_drvdata(dev); + + if (chip == NULL) + return -ENODEV; + + tcm_transmit(chip, savestate, sizeof(savestate)); + return 0; +} +EXPORT_SYMBOL_GPL(tcm_pm_suspend_p); + +void tcm_startup(struct tcm_chip *chip) +{ + u8 start_up[] = { + 0, 193, /* TCM_TAG_RQU_COMMAND */ + 0, 0, 0, 12, /* blob length (in bytes) */ + 0, 0, 128, 153, /* TCM_ORD_Startup */ + 0, 1 + }; + if (chip == NULL) + return; + tcm_transmit(chip, start_up, sizeof(start_up)); +} +EXPORT_SYMBOL_GPL(tcm_startup); + +/* + * Resume from a power save. The BIOS already restored + * the TCM state. + */ +int tcm_pm_resume(struct device *dev) +{ + u8 start_up[] = { + 0, 193, /* TCM_TAG_RQU_COMMAND */ + 0, 0, 0, 12, /* blob length (in bytes) */ + 0, 0, 128, 153, /* TCM_ORD_Startup */ + 0, 1 + }; + struct tcm_chip *chip = dev_get_drvdata(dev); + /* dev_info(chip->dev ,"--call tcm_pm_resume\n"); */ + if (chip == NULL) + return -ENODEV; + + tcm_transmit(chip, start_up, sizeof(start_up)); + return 0; +} +EXPORT_SYMBOL_GPL(tcm_pm_resume); + +/* + * Called from tcm_<specific>.c probe function only for devices + * the driver has determined it should claim. 
Prior to calling + * this function, the specific probe function has called pci_enable_device; + * upon errant exit from this function, the specific probe function should + * call pci_disable_device. + */ +struct tcm_chip *tcm_register_hardware(struct device *dev, + const struct tcm_vendor_specific *entry) +{ + int rc; +#define DEVNAME_SIZE 7 + + char *devname = NULL; + struct tcm_chip *chip = NULL; + + /* Driver specific per-device data */ + chip = kzalloc(sizeof(*chip), GFP_KERNEL); + if (chip == NULL) { + dev_err(dev, "chip kzalloc err\n"); + return NULL; + } + + mutex_init(&chip->buffer_mutex); + mutex_init(&chip->tcm_mutex); + INIT_LIST_HEAD(&chip->list); + + INIT_WORK(&chip->work, timeout_work); + timer_setup(&chip->user_read_timer, user_reader_timeout, 0); + + memcpy(&chip->vendor, entry, sizeof(struct tcm_vendor_specific)); + + chip->dev_num = find_first_zero_bit(dev_mask, TCM_NUM_DEVICES); + + if (chip->dev_num >= TCM_NUM_DEVICES) { + dev_err(dev, "No available tcm device numbers\n"); + kfree(chip); + return NULL; + } else if (chip->dev_num == 0) + chip->vendor.miscdev.minor = TCM_MINOR; + else + chip->vendor.miscdev.minor = MISC_DYNAMIC_MINOR; + + set_bit(chip->dev_num, dev_mask); + + devname = kmalloc(DEVNAME_SIZE, GFP_KERNEL); + if (devname == NULL) { + clear_bit(chip->dev_num, dev_mask); + kfree(chip); + return NULL; + } + scnprintf(devname, DEVNAME_SIZE, "%s%d", "tcm", chip->dev_num); + chip->vendor.miscdev.name = devname; + + /* chip->vendor.miscdev.dev = dev; */ + + chip->dev = get_device(dev); + + if (misc_register(&chip->vendor.miscdev)) { + dev_err(chip->dev, + "unable to misc_register %s, minor %d\n", + chip->vendor.miscdev.name, + chip->vendor.miscdev.minor); + put_device(dev); + clear_bit(chip->dev_num, dev_mask); + kfree(chip); + kfree(devname); + return NULL; + } + + spin_lock(&driver_lock); + dev_set_drvdata(dev, chip); + list_add(&chip->list, &tcm_chip_list); + spin_unlock(&driver_lock); + + rc = sysfs_create_group(&dev->kobj, chip->vendor.attr_group); + /* chip->bios_dir = tcm_bios_log_setup(devname); */ + + return chip; +} 
+EXPORT_SYMBOL_GPL(tcm_register_hardware); + +static int __init tcm_init_module(void) +{ + return 0; +} + +static void __exit tcm_exit_module(void) +{ +} + +module_init(tcm_init_module); +module_exit(tcm_exit_module); + +MODULE_AUTHOR("Nationz Technologies Inc."); +MODULE_DESCRIPTION("TCM Driver"); +MODULE_VERSION("1.1.1.0"); +MODULE_LICENSE("GPL"); diff --git a/drivers/staging/gmjstcm/tcm.h b/drivers/staging/gmjstcm/tcm.h new file mode 100644 index 000000000000..40cd0a879c3a --- /dev/null +++ b/drivers/staging/gmjstcm/tcm.h @@ -0,0 +1,122 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2009 Nationz Technologies Inc. + * + */ +#include <linux/module.h> +#include <linux/types.h> +#include <linux/pci.h> +#include <linux/delay.h> +#include <linux/fs.h> +#include <linux/miscdevice.h> +#include <linux/platform_device.h> +#include <linux/io.h> + +struct device; +struct tcm_chip; + +enum tcm_timeout { + TCM_TIMEOUT = 5, +}; + +/* TCM addresses */ +enum tcm_addr { + TCM_SUPERIO_ADDR = 0x2E, + TCM_ADDR = 0x4E, +}; + +extern ssize_t tcm_show_pubek(struct device *, struct device_attribute *attr, + char *); +extern ssize_t tcm_show_pcrs(struct device *, struct device_attribute *attr, + char *); +extern ssize_t tcm_show_caps(struct device *, struct device_attribute *attr, + char *); +extern ssize_t tcm_store_cancel(struct device *, struct device_attribute *attr, + const char *, size_t); +extern ssize_t tcm_show_enabled(struct device *, struct device_attribute *attr, + char *); +extern ssize_t tcm_show_active(struct device *, struct device_attribute *attr, + char *); +extern ssize_t tcm_show_owned(struct device *, struct device_attribute *attr, + char *); +extern ssize_t tcm_show_temp_deactivated(struct device *, + struct device_attribute *attr, char *); + +struct tcm_vendor_specific { + const u8 req_complete_mask; + const u8 req_complete_val; + const u8 req_canceled; + void __iomem *iobase; /* ioremapped address */ + void __iomem *iolbc; + unsigned long base; /* 
TCM base address */ + + int irq; + + int region_size; + int have_region; + + int (*recv)(struct tcm_chip *, u8 *, size_t); + int (*send)(struct tcm_chip *, u8 *, size_t); + void (*cancel)(struct tcm_chip *); + u8 (*status)(struct tcm_chip *); + struct miscdevice miscdev; + struct attribute_group *attr_group; + struct list_head list; + int locality; + unsigned long timeout_a, timeout_b, timeout_c, timeout_d; /* jiffies */ + unsigned long duration[3]; /* jiffies */ + + wait_queue_head_t read_queue; + wait_queue_head_t int_queue; +}; + +struct tcm_chip { + struct device *dev; /* Device stuff */ + + int dev_num; /* /dev/tcm# */ + int num_opens; /* only one allowed */ + int time_expired; + + /* Data passed to and from the tcm via the read/write calls */ + u8 *data_buffer; + atomic_t data_pending; + struct mutex buffer_mutex; + + struct timer_list user_read_timer; /* user needs to claim result */ + struct work_struct work; + struct mutex tcm_mutex; /* tcm is processing */ + + struct tcm_vendor_specific vendor; + + struct dentry **bios_dir; + + struct list_head list; +}; + +#define to_tcm_chip(n) container_of(n, struct tcm_chip, vendor) + +static inline int tcm_read_index(int base, int index) +{ + outb(index, base); + return inb(base+1) & 0xFF; +} + +static inline void tcm_write_index(int base, int index, int value) +{ + outb(index, base); + outb(value & 0xFF, base+1); +} +extern void tcm_startup(struct tcm_chip *); +extern void tcm_get_timeouts(struct tcm_chip *); +extern unsigned long tcm_calc_ordinal_duration(struct tcm_chip *, u32); +extern struct tcm_chip *tcm_register_hardware(struct device *, + const struct tcm_vendor_specific *); +extern int tcm_open(struct inode *, struct file *); +extern int tcm_release(struct inode *, struct file *); +extern ssize_t tcm_write(struct file *, const char __user *, size_t, + loff_t *); +extern ssize_t tcm_read(struct file *, char __user *, size_t, loff_t *); +extern void tcm_remove_hardware(struct device *); +extern int 
tcm_pm_suspend(struct device *, pm_message_t); +extern int tcm_pm_suspend_p(struct device *); +extern int tcm_pm_resume(struct device *); diff --git a/drivers/staging/gmjstcm/tcm_tis_spi.c b/drivers/staging/gmjstcm/tcm_tis_spi.c new file mode 100644 index 000000000000..e29c0c1d54c6 --- /dev/null +++ b/drivers/staging/gmjstcm/tcm_tis_spi.c @@ -0,0 +1,847 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2020 Kylin Tech. Co., Ltd. + */ + +#include <linux/init.h> +#include <linux/kernel.h> +#include <linux/module.h> +#include <linux/moduleparam.h> +#include <linux/interrupt.h> +#include <linux/wait.h> +#include <linux/acpi.h> +#include <linux/spi/spi.h> + +#include "tcm.h" + +static int is_ft_all(void) +{ + return 0; +} + +#define TCM_HEADER_SIZE 10 + +static bool tcm_debug; +module_param_named(debug, tcm_debug, bool, 0600); +MODULE_PARM_DESC(debug, "Turn TCM debugging mode on and off"); + +#define tcm_dbg(fmt, args...) \ +{ \ + if (tcm_debug) \ + pr_err(fmt, ## args); \ +} + +enum tis_access { + TCM_ACCESS_VALID = 0x80, + TCM_ACCESS_ACTIVE_LOCALITY = 0x20, + TCM_ACCESS_REQUEST_PENDING = 0x04, + TCM_ACCESS_REQUEST_USE = 0x02, +}; + +enum tis_status { + TCM_STS_VALID = 0x80, + TCM_STS_COMMAND_READY = 0x40, + TCM_STS_GO = 0x20, + TCM_STS_DATA_AVAIL = 0x10, + TCM_STS_DATA_EXPECT = 0x08, +}; + +enum tis_int_flags { + TCM_GLOBAL_INT_ENABLE = 0x80000000, + TCM_INTF_BURST_COUNT_STATIC = 0x100, + TCM_INTF_CMD_READY_INT = 0x080, + TCM_INTF_INT_EDGE_FALLING = 0x040, + TCM_INTF_INT_EDGE_RISING = 0x020, + TCM_INTF_INT_LEVEL_LOW = 0x010, + TCM_INTF_INT_LEVEL_HIGH = 0x008, + TCM_INTF_LOCALITY_CHANGE_INT = 0x004, + TCM_INTF_STS_VALID_INT = 0x002, + TCM_INTF_DATA_AVAIL_INT = 0x001, +}; + +enum tis_defaults { + TIS_SHORT_TIMEOUT = 750, /* ms */ + TIS_LONG_TIMEOUT = 2000, /* 2 sec */ +}; + +#define TCM_ACCESS(l) (0x0000 | ((l) << 12)) +#define TCM_INT_ENABLE(l) (0x0008 | ((l) << 12)) /* interrupt enable */ +#define TCM_INT_VECTOR(l) (0x000C | ((l) << 12)) +#define 
TCM_INT_STATUS(l) (0x0010 | ((l) << 12)) +#define TCM_INTF_CAPS(l) (0x0014 | ((l) << 12)) +#define TCM_STS(l) (0x0018 | ((l) << 12)) +#define TCM_DATA_FIFO(l) (0x0024 | ((l) << 12)) + +#define TCM_DID_VID(l) (0x0F00 | ((l) << 12)) +#define TCM_RID(l) (0x0F04 | ((l) << 12)) + +#define TIS_MEM_BASE_huawei 0x3fed40000LL + +#define MAX_SPI_FRAMESIZE 64 + +// +#define _CPU_FT2000A4 +#define REUSE_CONF_REG_BASE 0x28180208 +#define REUSE_GPIO1_A5_BASE 0x28005000 + +static void __iomem *reuse_conf_reg; +static void __iomem *gpio1_a5; + +// +static LIST_HEAD(tis_chips); +static DEFINE_SPINLOCK(tis_lock); + +struct chip_data { + u8 cs; + u8 tmode; + u8 type; + u8 poll_mode; + u16 clk_div; + u32 speed_hz; + void (*cs_control)(u32 command); +}; + +struct tcm_tis_spi_phy { + struct spi_device *spi_device; + struct completion ready; + u8 *iobuf; +}; + +int tcm_tis_spi_transfer(struct device *dev, u32 addr, u16 len, + u8 *in, const u8 *out) +{ + struct tcm_tis_spi_phy *phy = dev_get_drvdata(dev); + int ret = 0; + struct spi_message m; + struct spi_transfer spi_xfer; + u8 transfer_len; + + tcm_dbg("TCM-dbg: %s, addr: 0x%x, len: %x, %s\n", + __func__, addr, len, (in) ? "in" : "out"); + + spi_bus_lock(phy->spi_device->master); + + /* set gpio1_a5 to LOW */ + if (is_ft_all() && (phy->spi_device->chip_select == 0)) { + iowrite32(0x0, gpio1_a5); + } + + while (len) { + transfer_len = min_t(u16, len, MAX_SPI_FRAMESIZE); + + phy->iobuf[0] = (in ? 
0x80 : 0) | (transfer_len - 1); + phy->iobuf[1] = 0xd4; + phy->iobuf[2] = addr >> 8; + phy->iobuf[3] = addr; + + memset(&spi_xfer, 0, sizeof(spi_xfer)); + spi_xfer.tx_buf = phy->iobuf; + spi_xfer.rx_buf = phy->iobuf; + spi_xfer.len = 4; + spi_xfer.cs_change = 1; + + spi_message_init(&m); + spi_message_add_tail(&spi_xfer, &m); + ret = spi_sync_locked(phy->spi_device, &m); + if (ret < 0) + goto exit; + + spi_xfer.cs_change = 0; + spi_xfer.len = transfer_len; + spi_xfer.delay_usecs = 5; + + if (in) { + spi_xfer.tx_buf = NULL; + } else if (out) { + spi_xfer.rx_buf = NULL; + memcpy(phy->iobuf, out, transfer_len); + out += transfer_len; + } + + spi_message_init(&m); + spi_message_add_tail(&spi_xfer, &m); + reinit_completion(&phy->ready); + ret = spi_sync_locked(phy->spi_device, &m); + if (ret < 0) + goto exit; + + if (in) { + memcpy(in, phy->iobuf, transfer_len); + in += transfer_len; + } + + len -= transfer_len; + } + +exit: + /* set gpio1_a5 to HIGH */ + if (is_ft_all() && (phy->spi_device->chip_select == 0)) { + iowrite32(0x20, gpio1_a5); + } + + spi_bus_unlock(phy->spi_device->master); + tcm_dbg("TCM-dbg: ret: %d\n", ret); + return ret; +} + +static int tcm_tis_read8(struct device *dev, + u32 addr, u16 len, u8 *result) +{ + return tcm_tis_spi_transfer(dev, addr, len, result, NULL); +} + +static int tcm_tis_write8(struct device *dev, + u32 addr, u16 len, u8 *value) +{ + return tcm_tis_spi_transfer(dev, addr, len, NULL, value); +} + +static int tcm_tis_readb(struct device *dev, u32 addr, u8 *value) +{ + return tcm_tis_read8(dev, addr, sizeof(u8), value); +} + +static int tcm_tis_writeb(struct device *dev, u32 addr, u8 value) +{ + return tcm_tis_write8(dev, addr, sizeof(u8), &value); +} + +static int tcm_tis_readl(struct device *dev, u32 addr, u32 *result) +{ + int rc; + __le32 result_le; + + rc = tcm_tis_read8(dev, addr, sizeof(u32), (u8 *)&result_le); + tcm_dbg("TCM-dbg: result_le: 0x%x\n", result_le); + if (!rc) + *result = le32_to_cpu(result_le); + + return rc; +} + 
+static int tcm_tis_writel(struct device *dev, u32 addr, u32 value) +{ + int rc; + __le32 value_le; + + value_le = cpu_to_le32(value); + rc = tcm_tis_write8(dev, addr, sizeof(u32), (u8 *)&value_le); + + return rc; +} + +static int request_locality(struct tcm_chip *chip, int l); +static void release_locality(struct tcm_chip *chip, int l, int force); +static void cleanup_tis(void) +{ + int ret; + u32 inten; + struct tcm_vendor_specific *i, *j; + struct tcm_chip *chip; + + spin_lock(&tis_lock); + list_for_each_entry_safe(i, j, &tis_chips, list) { + chip = to_tcm_chip(i); + ret = tcm_tis_readl(chip->dev, + TCM_INT_ENABLE(chip->vendor.locality), &inten); + if (ret < 0) + break; /* do not return with tis_lock held */ + + tcm_tis_writel(chip->dev, TCM_INT_ENABLE(chip->vendor.locality), + ~TCM_GLOBAL_INT_ENABLE & inten); + release_locality(chip, chip->vendor.locality, 1); + } + spin_unlock(&tis_lock); +} + +static void tcm_tis_init(struct tcm_chip *chip) +{ + int ret; + u8 rid; + u32 vendor, intfcaps; + + ret = tcm_tis_readl(chip->dev, TCM_DID_VID(0), &vendor); + + if ((vendor & 0xffff) != 0x19f5 && (vendor & 0xffff) != 0x1B4E) + pr_info("there is no Nationz TCM on your computer\n"); + + ret = tcm_tis_readb(chip->dev, TCM_RID(0), &rid); + if (ret < 0) + return; + pr_info("kylin: 2019-09-21 1.2 TCM (device-id 0x%X, rev-id %d)\n", + vendor >> 16, rid); + + /* Figure out the capabilities */ + ret = tcm_tis_readl(chip->dev, + TCM_INTF_CAPS(chip->vendor.locality), &intfcaps); + if (ret < 0) + return; + + if (request_locality(chip, 0) != 0) + pr_err("tcm request_locality err\n"); + + atomic_set(&chip->data_pending, 0); +} + +static void tcm_handle_err(struct tcm_chip *chip) +{ + cleanup_tis(); + tcm_tis_init(chip); +} + +static bool check_locality(struct tcm_chip *chip, int l) +{ + int ret; + u8 access; + + ret = tcm_tis_readb(chip->dev, TCM_ACCESS(l), &access); + tcm_dbg("TCM-dbg: access: 0x%x\n", access); + if (ret < 0) + return false; + + if ((access & (TCM_ACCESS_ACTIVE_LOCALITY | TCM_ACCESS_VALID)) == + 
(TCM_ACCESS_ACTIVE_LOCALITY | TCM_ACCESS_VALID)) { + chip->vendor.locality = l; + return true; + } + + return false; +} + +static int request_locality(struct tcm_chip *chip, int l) +{ + unsigned long stop; + + if (check_locality(chip, l)) + return l; + + tcm_tis_writeb(chip->dev, TCM_ACCESS(l), TCM_ACCESS_REQUEST_USE); + + /* wait for burstcount */ + stop = jiffies + chip->vendor.timeout_a; + do { + if (check_locality(chip, l)) + return l; + msleep(TCM_TIMEOUT); + } while (time_before(jiffies, stop)); + + return -1; +} + +static void release_locality(struct tcm_chip *chip, int l, int force) +{ + int ret; + u8 access; + + ret = tcm_tis_readb(chip->dev, TCM_ACCESS(l), &access); + if (ret < 0) + return; + if (force || (access & (TCM_ACCESS_REQUEST_PENDING | TCM_ACCESS_VALID)) == + (TCM_ACCESS_REQUEST_PENDING | TCM_ACCESS_VALID)) + tcm_tis_writeb(chip->dev, + TCM_ACCESS(l), TCM_ACCESS_ACTIVE_LOCALITY); +} + +static u8 tcm_tis_status(struct tcm_chip *chip) +{ + int ret; + u8 status; + + ret = tcm_tis_readb(chip->dev, + TCM_STS(chip->vendor.locality), &status); + tcm_dbg("TCM-dbg: status: 0x%x\n", status); + if (ret < 0) + return 0; + + return status; +} + +static void tcm_tis_ready(struct tcm_chip *chip) +{ + /* this causes the current command to be aborted */ + tcm_tis_writeb(chip->dev, TCM_STS(chip->vendor.locality), + TCM_STS_COMMAND_READY); +} + +static int get_burstcount(struct tcm_chip *chip) +{ + int ret; + unsigned long stop; + u8 tmp, tmp1; + int burstcnt = 0; + + /* wait for burstcount */ + /* which timeout value, spec has 2 answers (c & d) */ + stop = jiffies + chip->vendor.timeout_d; + do { + ret = tcm_tis_readb(chip->dev, + TCM_STS(chip->vendor.locality) + 1, + &tmp); + tcm_dbg("TCM-dbg: burstcnt: 0x%x\n", burstcnt); + if (ret < 0) + return -EINVAL; + ret = tcm_tis_readb(chip->dev, + (TCM_STS(chip->vendor.locality) + 2), + &tmp1); + tcm_dbg("TCM-dbg: burstcnt: 0x%x\n", burstcnt); + if (ret < 0) + return -EINVAL; + + burstcnt = tmp | (tmp1 << 8); + if 
(burstcnt) + return burstcnt; + msleep(TCM_TIMEOUT); + } while (time_before(jiffies, stop)); + + return -EBUSY; +} + +static int wait_for_stat(struct tcm_chip *chip, u8 mask, + unsigned long timeout, + wait_queue_head_t *queue) +{ + unsigned long stop; + u8 status; + + /* check current status */ + status = tcm_tis_status(chip); + if ((status & mask) == mask) + return 0; + + stop = jiffies + timeout; + do { + msleep(TCM_TIMEOUT); + status = tcm_tis_status(chip); + if ((status & mask) == mask) + return 0; + } while (time_before(jiffies, stop)); + + return -ETIME; +} + +static int recv_data(struct tcm_chip *chip, u8 *buf, size_t count) +{ + int ret; + int size = 0, burstcnt; + + while (size < count && wait_for_stat(chip, + TCM_STS_DATA_AVAIL | TCM_STS_VALID, + chip->vendor.timeout_c, + &chip->vendor.read_queue) == 0) { + burstcnt = get_burstcount(chip); + + if (burstcnt < 0) { + dev_err(chip->dev, "Unable to read burstcount\n"); + return burstcnt; + } + + for (; burstcnt > 0 && size < count; burstcnt--) { + ret = tcm_tis_readb(chip->dev, + TCM_DATA_FIFO(chip->vendor.locality), + &buf[size]); + tcm_dbg("TCM-dbg: buf[%d]: 0x%x\n", size, buf[size]); + size++; + } + } + + return size; +} + +static int tcm_tis_recv(struct tcm_chip *chip, u8 *buf, size_t count) +{ + int size = 0; + int expected, status; + unsigned long stop; + + if (count < TCM_HEADER_SIZE) { + dev_err(chip->dev, "read size is too small: %d\n", (u32)(count)); + size = -EIO; + goto out; + } + + /* read first 10 bytes, including tag, paramsize, and result */ + size = recv_data(chip, buf, TCM_HEADER_SIZE); + if (size < TCM_HEADER_SIZE) { + dev_err(chip->dev, "Unable to read header\n"); + goto out; + } + + expected = be32_to_cpu(*(__be32 *)(buf + 2)); + if (expected > count) { + dev_err(chip->dev, "Expected data count\n"); + size = -EIO; + goto out; + } + + size += recv_data(chip, &buf[TCM_HEADER_SIZE], + expected - TCM_HEADER_SIZE); + if (size < expected) { + dev_err(chip->dev, "Unable to read remainder of 
result\n"); + size = -ETIME; + goto out; + } + + wait_for_stat(chip, TCM_STS_VALID, chip->vendor.timeout_c, + &chip->vendor.int_queue); + + stop = jiffies + chip->vendor.timeout_c; + do { + msleep(TCM_TIMEOUT); + status = tcm_tis_status(chip); + if ((status & TCM_STS_DATA_AVAIL) == 0) + break; + + } while (time_before(jiffies, stop)); + + status = tcm_tis_status(chip); + if (status & TCM_STS_DATA_AVAIL) { /* retry? */ + dev_err(chip->dev, "Error left over data\n"); + size = -EIO; + goto out; + } + +out: + tcm_tis_ready(chip); + release_locality(chip, chip->vendor.locality, 0); + if (size < 0) + tcm_handle_err(chip); + return size; +} + +/* + * If interrupts are used (signaled by an irq set in the vendor structure) + * tcm.c can skip polling for the data to be available as the interrupt is + * waited for here + */ +static int tcm_tis_send(struct tcm_chip *chip, u8 *buf, size_t len) +{ + int rc, status, burstcnt; + size_t count = 0; + u32 ordinal; + unsigned long stop; + int send_again = 0; + +tcm_tis_send_again: + count = 0; + if (request_locality(chip, 0) < 0) { + dev_err(chip->dev, "send, tcm is busy\n"); + return -EBUSY; + } + status = tcm_tis_status(chip); + + if ((status & TCM_STS_COMMAND_READY) == 0) { + tcm_tis_ready(chip); + if (wait_for_stat(chip, TCM_STS_COMMAND_READY, + chip->vendor.timeout_b, &chip->vendor.int_queue) < 0) { + dev_err(chip->dev, "send, tcm wait time out1\n"); + rc = -ETIME; + goto out_err; + } + } + + while (count < len - 1) { + burstcnt = get_burstcount(chip); + if (burstcnt < 0) { + dev_err(chip->dev, "Unable to read burstcount\n"); + rc = burstcnt; + goto out_err; + } + for (; burstcnt > 0 && count < len - 1; burstcnt--) { + tcm_tis_writeb(chip->dev, + TCM_DATA_FIFO(chip->vendor.locality), buf[count]); + count++; + } + + wait_for_stat(chip, TCM_STS_VALID, chip->vendor.timeout_c, + &chip->vendor.int_queue); + } + + /* write last byte */ + tcm_tis_writeb(chip->dev, + TCM_DATA_FIFO(chip->vendor.locality), buf[count]); + + 
wait_for_stat(chip, TCM_STS_VALID, + chip->vendor.timeout_c, &chip->vendor.int_queue); + stop = jiffies + chip->vendor.timeout_c; + do { + msleep(TCM_TIMEOUT); + status = tcm_tis_status(chip); + if ((status & TCM_STS_DATA_EXPECT) == 0) + break; + + } while (time_before(jiffies, stop)); + + if ((status & TCM_STS_DATA_EXPECT) != 0) { + dev_err(chip->dev, "send, tcm expect data\n"); + rc = -EIO; + goto out_err; + } + + /* go and do it */ + tcm_tis_writeb(chip->dev, TCM_STS(chip->vendor.locality), TCM_STS_GO); + + ordinal = be32_to_cpu(*((__be32 *)(buf + 6))); + if (wait_for_stat(chip, TCM_STS_DATA_AVAIL | TCM_STS_VALID, + tcm_calc_ordinal_duration(chip, ordinal), + &chip->vendor.read_queue) < 0) { + dev_err(chip->dev, "send, tcm wait time out2\n"); + rc = -ETIME; + goto out_err; + } + + return len; + +out_err: + tcm_tis_ready(chip); + release_locality(chip, chip->vendor.locality, 0); + tcm_handle_err(chip); + if (send_again++ < 3) { + goto tcm_tis_send_again; + } + + dev_err(chip->dev, "kylin send, err: %d\n", rc); + return rc; +} + +static struct file_operations tis_ops = { + .owner = THIS_MODULE, + .llseek = no_llseek, + .open = tcm_open, + .read = tcm_read, + .write = tcm_write, + .release = tcm_release, +}; + +static DEVICE_ATTR(pubek, S_IRUGO, tcm_show_pubek, NULL); +static DEVICE_ATTR(pcrs, S_IRUGO, tcm_show_pcrs, NULL); +static DEVICE_ATTR(enabled, S_IRUGO, tcm_show_enabled, NULL); +static DEVICE_ATTR(active, S_IRUGO, tcm_show_active, NULL); +static DEVICE_ATTR(owned, S_IRUGO, tcm_show_owned, NULL); +static DEVICE_ATTR(temp_deactivated, S_IRUGO, tcm_show_temp_deactivated, + NULL); +static DEVICE_ATTR(caps, S_IRUGO, tcm_show_caps, NULL); +static DEVICE_ATTR(cancel, S_IWUSR | S_IWGRP, NULL, tcm_store_cancel); + +static struct attribute *tis_attrs[] = { + &dev_attr_pubek.attr, + &dev_attr_pcrs.attr, + &dev_attr_enabled.attr, + &dev_attr_active.attr, + &dev_attr_owned.attr, + &dev_attr_temp_deactivated.attr, + &dev_attr_caps.attr, + &dev_attr_cancel.attr, NULL, +}; 
+ +static struct attribute_group tis_attr_grp = { + .attrs = tis_attrs +}; + +static struct tcm_vendor_specific tcm_tis = { + .status = tcm_tis_status, + .recv = tcm_tis_recv, + .send = tcm_tis_send, + .cancel = tcm_tis_ready, + .req_complete_mask = TCM_STS_DATA_AVAIL | TCM_STS_VALID, + .req_complete_val = TCM_STS_DATA_AVAIL | TCM_STS_VALID, + .req_canceled = TCM_STS_COMMAND_READY, + .attr_group = &tis_attr_grp, + .miscdev = { + .fops = &tis_ops, + }, +}; + +static struct tcm_chip *chip; +static int tcm_tis_spi_probe(struct spi_device *spi) +{ + int ret; + u8 revid; + u32 vendor, intfcaps; + struct tcm_tis_spi_phy *phy; + struct chip_data *spi_chip; + + pr_info("TCM(ky): %s (v=%d)\n", + __func__, 10); + + tcm_dbg("TCM-dbg: %s/%d, enter\n", __func__, __LINE__); + phy = devm_kzalloc(&spi->dev, sizeof(struct tcm_tis_spi_phy), + GFP_KERNEL); + if (!phy) + return -ENOMEM; + + phy->iobuf = devm_kmalloc(&spi->dev, MAX_SPI_FRAMESIZE, GFP_KERNEL); + if (!phy->iobuf) + return -ENOMEM; + + phy->spi_device = spi; + init_completion(&phy->ready); + + tcm_dbg("TCM-dbg: %s/%d\n", __func__, __LINE__); + /* init spi dev */ + spi->chip_select = 0; /* cs0 */ + spi->mode = SPI_MODE_0; + spi->bits_per_word = 8; + spi->max_speed_hz = spi->max_speed_hz ? : 24000000; + spi_setup(spi); + + spi_chip = spi_get_ctldata(spi); + if (!spi_chip) { + pr_err("failed to get spi controller data\n"); + return -ENODEV; + } + /* tcm does not support interrupt mode, we use poll mode instead. */ + spi_chip->poll_mode = 1; + + tcm_dbg("TCM-dbg: %s/%d\n", __func__, __LINE__); + /* register tcm hw */ + chip = tcm_register_hardware(&spi->dev, &tcm_tis); + if (!chip) { + dev_err(&spi->dev, "tcm register hardware err\n"); + return -ENODEV; + } + + dev_set_drvdata(chip->dev, phy); + + /** + * phytium2000a4 spi controller's clock level is unstable, + * so it is solved by using the low level of gpio output. 
+ **/ + if (is_ft_all() && (spi->chip_select == 0)) { + /* reuse conf reg base */ + reuse_conf_reg = ioremap(REUSE_CONF_REG_BASE, 0x10); + if (!reuse_conf_reg) { + dev_err(&spi->dev, "Failed to ioremap reuse conf reg\n"); + ret = -ENOMEM; + goto out_err; + } + + /* gpio1 a5 base addr */ + gpio1_a5 = ioremap(REUSE_GPIO1_A5_BASE, 0x10); + if (!gpio1_a5) { + dev_err(&spi->dev, "Failed to ioremap gpio1 a5\n"); + ret = -ENOMEM; + goto out_err; + } + + /* reuse cs0 to gpio1_a5 */ + iowrite32((ioread32(reuse_conf_reg) | 0xFFFF0) & 0xFFF9004F, + reuse_conf_reg); + /* set gpio1 a5 to output */ + iowrite32(0x20, gpio1_a5 + 0x4); + } + + tcm_dbg("TCM-dbg: %s/%d\n", + __func__, __LINE__); + ret = tcm_tis_readl(chip->dev, TCM_DID_VID(0), &vendor); + if (ret < 0) + goto out_err; + + tcm_dbg("TCM-dbg: %s/%d, vendor: 0x%x\n", + __func__, __LINE__, vendor); + if ((vendor & 0xffff) != 0x19f5 && (vendor & 0xffff) != 0x1B4E) { + dev_err(chip->dev, "there is no Nationz TCM on you computer\n"); + goto out_err; + } + + ret = tcm_tis_readb(chip->dev, TCM_RID(0), &revid); + tcm_dbg("TCM-dbg: %s/%d, revid: 0x%x\n", + __func__, __LINE__, revid); + if (ret < 0) + goto out_err; + dev_info(chip->dev, "kylin: 2019-09-21 1.2 TCM " + "(device-id 0x%X, rev-id %d)\n", + vendor >> 16, revid); + + /* Default timeouts */ + chip->vendor.timeout_a = msecs_to_jiffies(TIS_SHORT_TIMEOUT); + chip->vendor.timeout_b = msecs_to_jiffies(TIS_LONG_TIMEOUT); + chip->vendor.timeout_c = msecs_to_jiffies(TIS_SHORT_TIMEOUT); + chip->vendor.timeout_d = msecs_to_jiffies(TIS_SHORT_TIMEOUT); + + tcm_dbg("TCM-dbg: %s/%d\n", + __func__, __LINE__); + /* Figure out the capabilities */ + ret = tcm_tis_readl(chip->dev, + TCM_INTF_CAPS(chip->vendor.locality), &intfcaps); + if (ret < 0) + goto out_err; + + tcm_dbg("TCM-dbg: %s/%d, intfcaps: 0x%x\n", + __func__, __LINE__, intfcaps); + if (request_locality(chip, 0) != 0) { + dev_err(chip->dev, "tcm request_locality err\n"); + ret = -ENODEV; + goto out_err; + } + + 
INIT_LIST_HEAD(&chip->vendor.list); + spin_lock(&tis_lock); + list_add(&chip->vendor.list, &tis_chips); + spin_unlock(&tis_lock); + + tcm_get_timeouts(chip); + tcm_startup(chip); + + tcm_dbg("TCM-dbg: %s/%d, exit\n", __func__, __LINE__); + return 0; + +out_err: + if (is_ft_all()) { + if (reuse_conf_reg) + iounmap(reuse_conf_reg); + if (gpio1_a5) + iounmap(gpio1_a5); + } + tcm_dbg("TCM-dbg: %s/%d, error\n", __func__, __LINE__); + dev_set_drvdata(chip->dev, chip); + tcm_remove_hardware(chip->dev); + + return ret; +} + +static int tcm_tis_spi_remove(struct spi_device *dev) +{ + if (is_ft_all()) { + if (reuse_conf_reg) + iounmap(reuse_conf_reg); + if (gpio1_a5) + iounmap(gpio1_a5); + } + + dev_info(&dev->dev, "%s\n", __func__); + dev_set_drvdata(chip->dev, chip); + tcm_remove_hardware(&dev->dev); + + return 0; +} + +static const struct acpi_device_id tcm_tis_spi_acpi_match[] = { + {"TCMS0001", 0}, + {"SMO0768", 0}, + {"ZIC0601", 0}, + {} +}; +MODULE_DEVICE_TABLE(acpi, tcm_tis_spi_acpi_match); + +static const struct spi_device_id tcm_tis_spi_id_table[] = { + {"SMO0768", 0}, + {"ZIC0601", 0}, + {} +}; +MODULE_DEVICE_TABLE(spi, tcm_tis_spi_id_table); + +static struct spi_driver tcm_tis_spi_drv = { + .driver = { + .name = "tcm_tis_spi", + .acpi_match_table = ACPI_PTR(tcm_tis_spi_acpi_match), + }, + .id_table = tcm_tis_spi_id_table, + .probe = tcm_tis_spi_probe, + .remove = tcm_tis_spi_remove, +}; + +module_spi_driver(tcm_tis_spi_drv); + +MODULE_AUTHOR("xiongxin<xiongxin(a)tj.kylinos.cn>"); +MODULE_DESCRIPTION("TCM Driver Base Spi"); +MODULE_VERSION("6.1.0.2"); +MODULE_LICENSE("GPL");
From: Zhen Lei thunder.leizhen@huawei.com
driver inclusion
category: feature
bugzilla: 50797
CVE: NA
-------------------------------------------------------------------------

Set CONFIG_GMJS_TCM_CORE=m and CONFIG_GMJS_TCM_SPI=m, so that the corresponding .ko modules can be distributed in the ISO on arm64.
Signed-off-by: Zhen Lei thunder.leizhen@huawei.com Acked-by: Xie XiuQi xiexiuqi@huawei.com Signed-off-by: Zhen Lei thunder.leizhen@huawei.com Signed-off-by: Yang Yingliang yangyingliang@huawei.com Signed-off-by: Cheng Jian cj.chengjian@huawei.com --- arch/arm64/configs/openeuler_defconfig | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/arch/arm64/configs/openeuler_defconfig b/arch/arm64/configs/openeuler_defconfig index f93571a479d0..98e08f2488cd 100644 --- a/arch/arm64/configs/openeuler_defconfig +++ b/arch/arm64/configs/openeuler_defconfig @@ -5970,3 +5970,6 @@ CONFIG_IO_STRICT_DEVMEM=y CONFIG_SMMU_BYPASS_DEV=y CONFIG_ETMEM_SCAN=m CONFIG_ETMEM_SWAP=m +CONFIG_STAGING=y +CONFIG_GMJS_TCM_CORE=m +CONFIG_GMJS_TCM_SPI=m
From: Zhang Ming 154842638@qq.com
openEuler inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I3D58V
CVE: NA
----------------------------------
No unlock operation is performed on the mpam_devices_lock before the return statement, which may lead to a deadlock.
Signed-off-by: Zhang Ming 154842638@qq.com Reported-by: Cheng Jian cj.chengjian@huawei.com Suggested-by: Cheng Jian cj.chengjian@huawei.com Reviewed-by: Wang ShaoBo bobo.shaobowang@huawei.com Reviewed-by: Xie XiuQi xiexiuqi@huawei.com Signed-off-by: Cheng Jian cj.chengjian@huawei.com Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- arch/arm64/kernel/mpam/mpam_device.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/mpam/mpam_device.c b/arch/arm64/kernel/mpam/mpam_device.c index fc7aa1ae0b82..f8840274b902 100644 --- a/arch/arm64/kernel/mpam/mpam_device.c +++ b/arch/arm64/kernel/mpam/mpam_device.c @@ -560,8 +560,10 @@ static void __init mpam_enable(struct work_struct *work) mutex_lock(&mpam_devices_lock); mpam_enable_squash_features(); err = mpam_allocate_config(); - if (err) + if (err) { + mutex_unlock(&mpam_devices_lock); return; + } mutex_unlock(&mpam_devices_lock);
mpam_enable_irqs();
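The shape of the bug and of the fix can be sketched in plain, userspace C (illustrative names only; a flag stands in for mpam_devices_lock, and this is not the kernel code):

```c
#include <assert.h>
#include <stdbool.h>

static bool devices_locked;  /* stands in for mpam_devices_lock */

/* Fixed shape of mpam_enable(): every exit path, including the
 * mpam_allocate_config() error path, releases the lock. Before the
 * fix, the early return left the lock held, deadlocking the next
 * acquirer. */
static int enable_fixed(bool alloc_fails)
{
    devices_locked = true;          /* mutex_lock(&mpam_devices_lock) */
    if (alloc_fails) {
        devices_locked = false;     /* the fix: unlock before return */
        return -1;
    }
    /* ... mpam_enable_squash_features(), etc. ... */
    devices_locked = false;         /* mutex_unlock(&mpam_devices_lock) */
    return 0;
}
```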
From: Lu Jialin lujialin4@huawei.com
hulk inclusion
category: feature/cgroups
bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=7
CVE: NA
--------
The patch adds ifndef(__GENKSYMS__) into cgroup_subsys.h, and this change is nastier than it looks. It hides the fact that we change the layout of "struct cgroup" and "struct css_set"; they both have the subsys[CGROUP_SUBSYS_COUNT] member. I hope this is fine: the modular code has no reason to access the private members after ->subsys[], and helpers like cgroup_sane_behavior() shouldn't be used by external modules.
The patch also fixes the compile warning introduced by the kabi fix.
Signed-off-by: Lu Jialin lujialin4@huawei.com Reviewed-by: Chen Zhou chenzhou10@huawei.com Reviewed-by: Xie XiuQi xiexiuqi@huawei.com Signed-off-by: Yang Yingliang yangyingliang@huawei.com Signed-off-by: Cheng Jian cj.chengjian@huawei.com --- include/linux/cgroup_subsys.h | 2 ++ kernel/cgroup/cgroup.c | 6 ++++++ 2 files changed, 8 insertions(+)
diff --git a/include/linux/cgroup_subsys.h b/include/linux/cgroup_subsys.h index 716e79baa796..e1dfab82f354 100644 --- a/include/linux/cgroup_subsys.h +++ b/include/linux/cgroup_subsys.h @@ -68,9 +68,11 @@ SUBSYS(rdma) SUBSYS(debug) #endif
+#if ((!defined __GENKSYMS__) || (!defined CONFIG_X86)) #if IS_ENABLED(CONFIG_CGROUP_FILES) SUBSYS(files) #endif +#endif
/* * DO NOT ADD ANY SUBSYSTEM WITHOUT EXPLICIT ACKS FROM CGROUP MAINTAINERS. diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c index e38da89fad66..79fa4e90499a 100644 --- a/kernel/cgroup/cgroup.c +++ b/kernel/cgroup/cgroup.c @@ -134,6 +134,12 @@ static const char *cgroup_subsys_name[] = { EXPORT_SYMBOL_GPL(_x ## _cgrp_subsys_enabled_key); \ EXPORT_SYMBOL_GPL(_x ## _cgrp_subsys_on_dfl_key); #include <linux/cgroup_subsys.h> + +#if ((defined __GENKSYMS__) && (defined CONFIG_X86)) +#if IS_ENABLED(CONFIG_CGROUP_FILES) +SUBSYS(files) +#endif +#endif #undef SUBSYS
#define SUBSYS(_x) [_x ## _cgrp_id] = &_x ## _cgrp_subsys_enabled_key,
From: Dave Airlie airlied@redhat.com
stable inclusion
from linux-4.19.140
commit 10c8a526b2db1fcdf9e2d59d4885377b91939c55
CVE: CVE-2021-20292
--------------------------------
commit 5de5b6ecf97a021f29403aa272cb4e03318ef586 upstream.
This is confusing, and from my reading of all the drivers only nouveau got this right.
Just make the API act under driver control of its own allocation failing, and don't call destroy; if the page table fails to create, there is nothing to clean up here.
(I'm willing to believe I've missed something here, so please review deeply).
Reviewed-by: Christian König christian.koenig@amd.com Signed-off-by: Dave Airlie airlied@redhat.com Link: https://patchwork.freedesktop.org/patch/msgid/20200728041736.20689-1-airlied... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com Reviewed-by: Xiu Jianfeng xiujianfeng@huawei.com Signed-off-by: Yang Yingliang yangyingliang@huawei.com Signed-off-by: Cheng Jian cj.chengjian@huawei.com --- drivers/gpu/drm/nouveau/nouveau_sgdma.c | 9 +++------ drivers/gpu/drm/ttm/ttm_tt.c | 3 --- 2 files changed, 3 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_sgdma.c b/drivers/gpu/drm/nouveau/nouveau_sgdma.c index 8ebdc74cc0ad..326948b65542 100644 --- a/drivers/gpu/drm/nouveau/nouveau_sgdma.c +++ b/drivers/gpu/drm/nouveau/nouveau_sgdma.c @@ -96,12 +96,9 @@ nouveau_sgdma_create_ttm(struct ttm_buffer_object *bo, uint32_t page_flags) else nvbe->ttm.ttm.func = &nv50_sgdma_backend;
- if (ttm_dma_tt_init(&nvbe->ttm, bo, page_flags)) - /* - * A failing ttm_dma_tt_init() will call ttm_tt_destroy() - * and thus our nouveau_sgdma_destroy() hook, so we don't need - * to free nvbe here. - */ + if (ttm_dma_tt_init(&nvbe->ttm, bo, page_flags)) { + kfree(nvbe); return NULL; + } return &nvbe->ttm.ttm; } diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c index e3a0691582ff..68cfa25674e5 100644 --- a/drivers/gpu/drm/ttm/ttm_tt.c +++ b/drivers/gpu/drm/ttm/ttm_tt.c @@ -241,7 +241,6 @@ int ttm_tt_init(struct ttm_tt *ttm, struct ttm_buffer_object *bo, ttm_tt_init_fields(ttm, bo, page_flags);
if (ttm_tt_alloc_page_directory(ttm)) { - ttm_tt_destroy(ttm); pr_err("Failed allocating page table\n"); return -ENOMEM; } @@ -265,7 +264,6 @@ int ttm_dma_tt_init(struct ttm_dma_tt *ttm_dma, struct ttm_buffer_object *bo,
INIT_LIST_HEAD(&ttm_dma->pages_list); if (ttm_dma_tt_alloc_page_directory(ttm_dma)) { - ttm_tt_destroy(ttm); pr_err("Failed allocating page table\n"); return -ENOMEM; } @@ -287,7 +285,6 @@ int ttm_sg_tt_init(struct ttm_dma_tt *ttm_dma, struct ttm_buffer_object *bo, else ret = ttm_dma_tt_alloc_page_directory(ttm_dma); if (ret) { - ttm_tt_destroy(ttm); pr_err("Failed allocating page table\n"); return -ENOMEM; }
From: Dan Carpenter dan.carpenter@oracle.com
stable inclusion
from linux-4.19.162
commit 304c080fc33258e3b177b6f0736b97d54e6fea3b
CVE: CVE-2020-35519
--------------------------------
[ Upstream commit 6ee50c8e262a0f0693dad264c3c99e30e6442a56 ]
The .x25_addr[] address comes from the user and is not necessarily NUL terminated. This leads to a couple of problems. The first problem is that the strlen() in x25_bind() can read beyond the end of the buffer.
The second problem is more subtle and could result in memory corruption. The call tree is:

  x25_connect()
  --> x25_write_internal()
  --> x25_addr_aton()
The .x25_addr[] buffers are copied to the "addresses" buffer from x25_write_internal() so it will lead to stack corruption.
Verify that the strings are NUL terminated and return -EINVAL if they are not.
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Fixes: a9288525d2ae ("X25: Dont let x25_bind use addresses containing characters") Reported-by: "kiyin(尹亮)" kiyin@tencent.com Signed-off-by: Dan Carpenter dan.carpenter@oracle.com Acked-by: Martin Schiller ms@dev.tdt.de Link: https://lore.kernel.org/r/X8ZeAKm8FnFpN//B@mwanda Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com Reviewed-by: Yue Haibing yuehaibing@huawei.com Reviewed-by: Xiu Jianfeng xiujianfeng@huawei.com Signed-off-by: Yang Yingliang yangyingliang@huawei.com Signed-off-by: Cheng Jian cj.chengjian@huawei.com --- net/x25/af_x25.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/net/x25/af_x25.c b/net/x25/af_x25.c index 20a511398389..86971df0baf6 100644 --- a/net/x25/af_x25.c +++ b/net/x25/af_x25.c @@ -680,7 +680,8 @@ static int x25_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len) int len, i, rc = 0;
if (addr_len != sizeof(struct sockaddr_x25) || - addr->sx25_family != AF_X25) { + addr->sx25_family != AF_X25 || + strnlen(addr->sx25_addr.x25_addr, X25_ADDR_LEN) == X25_ADDR_LEN) { rc = -EINVAL; goto out; } @@ -770,7 +771,8 @@ static int x25_connect(struct socket *sock, struct sockaddr *uaddr,
rc = -EINVAL; if (addr_len != sizeof(struct sockaddr_x25) || - addr->sx25_family != AF_X25) + addr->sx25_family != AF_X25 || + strnlen(addr->sx25_addr.x25_addr, X25_ADDR_LEN) == X25_ADDR_LEN) goto out;
rc = -ENETUNREACH;
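The added check rejects any address field with no NUL inside it, which is exactly what strnlen() returning the full field size means. A standalone sketch of the same validation (check_x25_addr is a hypothetical helper, not a kernel function; X25_ADDR_LEN is 16 in this kernel):

```c
#include <assert.h>
#include <string.h>

#define X25_ADDR_LEN 16  /* size of the fixed x25_addr[] field */

/* Mirrors the check the patch adds to x25_bind()/x25_connect():
 * if strnlen() never finds a NUL within the field, the address is
 * unterminated and must be rejected with -EINVAL (-22). */
static int check_x25_addr(const char addr[X25_ADDR_LEN])
{
    if (strnlen(addr, X25_ADDR_LEN) == X25_ADDR_LEN)
        return -22;  /* no NUL inside the buffer: reject */
    return 0;
}
```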
From: Piotr Krysiuk piotras@gmail.com
stable inclusion
from linux-4.19.182
commit bc49612a0e2c379a0d997375901c5371ba015518
CVE: CVE-2020-27170
--------------------------------
commit f232326f6966cf2a1d1db7bc917a4ce5f9f55f76 upstream.
The purpose of this patch is to streamline error propagation and in particular to propagate retrieve_ptr_limit() errors for pointer types that are not defining a ptr_limit such that register-based alu ops against these types can be rejected.
The main rationale is that a gap has been identified by Piotr in the existing protection against speculatively out-of-bounds loads, for example, in case of ctx pointers, unprivileged programs can still perform pointer arithmetic. This can be abused to execute speculatively out-of-bounds loads without restrictions and thus extract contents of kernel memory.
Fix this by rejecting unprivileged programs that attempt any pointer arithmetic on unprotected pointer types. The two affected ones are pointer to ctx as well as pointer to map. Field access to a modified ctx' pointer is rejected at a later point in time in the verifier, and 7c6967326267 ("bpf: Permit map_ptr arithmetic with opcode add and offset 0") is only relevant for root-only use cases. Risk of unprivileged program breakage is considered very low.
Fixes: 7c6967326267 ("bpf: Permit map_ptr arithmetic with opcode add and offset 0") Fixes: b2157399cc98 ("bpf: prevent out-of-bounds speculation") Signed-off-by: Piotr Krysiuk piotras@gmail.com Co-developed-by: Daniel Borkmann daniel@iogearbox.net Signed-off-by: Daniel Borkmann daniel@iogearbox.net Acked-by: Alexei Starovoitov ast@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com Reviewed-by: Xiu Jianfeng xiujianfeng@huawei.com Reviewed-by: Jian Cheng cj.chengjian@huawei.com Signed-off-by: Yang Yingliang yangyingliang@huawei.com Signed-off-by: Cheng Jian cj.chengjian@huawei.com --- kernel/bpf/verifier.c | 16 ++++++++++------ 1 file changed, 10 insertions(+), 6 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 44a2dec812cf..a0f7b46aeed3 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -2804,6 +2804,7 @@ static int sanitize_ptr_alu(struct bpf_verifier_env *env, u32 alu_state, alu_limit; struct bpf_reg_state tmp; bool ret; + int err;
if (can_skip_alu_sanitation(env, insn)) return 0; @@ -2819,10 +2820,13 @@ static int sanitize_ptr_alu(struct bpf_verifier_env *env, alu_state |= ptr_is_dst_reg ? BPF_ALU_SANITIZE_SRC : BPF_ALU_SANITIZE_DST;
- if (retrieve_ptr_limit(ptr_reg, &alu_limit, opcode, off_is_neg)) - return 0; - if (update_alu_sanitation_state(aux, alu_state, alu_limit)) - return -EACCES; + err = retrieve_ptr_limit(ptr_reg, &alu_limit, opcode, off_is_neg); + if (err < 0) + return err; + + err = update_alu_sanitation_state(aux, alu_state, alu_limit); + if (err < 0) + return err; do_sim: /* Simulate and find potential out-of-bounds access under * speculative execution from truncation as a result of @@ -2920,7 +2924,7 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env, case BPF_ADD: ret = sanitize_ptr_alu(env, insn, ptr_reg, dst_reg, smin_val < 0); if (ret < 0) { - verbose(env, "R%d tried to add from different maps or paths\n", dst); + verbose(env, "R%d tried to add from different maps, paths, or prohibited types\n", dst); return ret; } /* We can take a fixed offset as long as it doesn't overflow @@ -2975,7 +2979,7 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env, case BPF_SUB: ret = sanitize_ptr_alu(env, insn, ptr_reg, dst_reg, smin_val < 0); if (ret < 0) { - verbose(env, "R%d tried to sub from different maps or paths\n", dst); + verbose(env, "R%d tried to sub from different maps, paths, or prohibited types\n", dst); return ret; } if (dst_reg == off_reg) {
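The core of the change is turning a swallowed error into a propagated one. A simplified control-flow sketch (hypothetical names and simplified types, not the verifier's actual code):

```c
#include <assert.h>
#include <errno.h>

/* Stand-in for retrieve_ptr_limit(): only stack (1) and map_value (2)
 * pointer types define a ptr_limit; anything else is an error. */
static int retrieve_limit(int type, unsigned *limit)
{
    if (type != 1 && type != 2)
        return -EOPNOTSUPP;  /* no ptr_limit for this pointer type */
    *limit = 8;
    return 0;
}

/* Before the fix: the error was treated as "nothing to sanitize",
 * so alu ops on unprotected pointer types were allowed through. */
static int sanitize_old(int type)
{
    unsigned limit;
    if (retrieve_limit(type, &limit))
        return 0;            /* BUG: error swallowed */
    return 0;
}

/* After the fix: the error propagates and the program is rejected. */
static int sanitize_fixed(int type)
{
    unsigned limit;
    int err = retrieve_limit(type, &limit);
    if (err < 0)
        return err;
    return 0;
}
```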
From: Piotr Krysiuk piotras@gmail.com
stable inclusion
from linux-4.19.182
commit ec5307f2ed2377fc55f0a8c990c6004c63014a54
CVE: CVE-2020-27171
--------------------------------
commit 10d2bb2e6b1d8c4576c56a748f697dbeb8388899 upstream.
retrieve_ptr_limit() computes the ptr_limit for registers with stack and map_value type. ptr_limit is the size of the memory area that is still valid / in-bounds from the point of the current position and direction of the operation (add / sub). This size will later be used for masking the operation such that attempting out-of-bounds access in the speculative domain is redirected to remain within the bounds of the current map value.
When masking to the right the size is correct, however, when masking to the left, the size is off-by-one which would lead to an incorrect mask and thus incorrect arithmetic operation in the non-speculative domain. Piotr found that if the resulting alu_limit value is zero, then the BPF_MOV32_IMM() from the fixup_bpf_calls() rewrite will end up loading 0xffffffff into AX instead of sign-extending to the full 64 bit range, and as a result, this allows abuse for executing speculatively out-of- bounds loads against 4GB window of address space and thus extracting the contents of kernel memory via side-channel.
Fixes: 979d63d50c0c ("bpf: prevent out of bounds speculation on pointer arithmetic") Signed-off-by: Piotr Krysiuk piotras@gmail.com Co-developed-by: Daniel Borkmann daniel@iogearbox.net Signed-off-by: Daniel Borkmann daniel@iogearbox.net Acked-by: Alexei Starovoitov ast@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com Reviewed-by: Xiu Jianfeng xiujianfeng@huawei.com Reviewed-by: Jian Cheng cj.chengjian@huawei.com Signed-off-by: Yang Yingliang yangyingliang@huawei.com Signed-off-by: Cheng Jian cj.chengjian@huawei.com --- kernel/bpf/verifier.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index a0f7b46aeed3..7369a704bfae 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -2740,13 +2740,13 @@ static int retrieve_ptr_limit(const struct bpf_reg_state *ptr_reg, case PTR_TO_STACK: off = ptr_reg->off + ptr_reg->var_off.value; if (mask_to_left) - *ptr_limit = MAX_BPF_STACK + off; + *ptr_limit = MAX_BPF_STACK + off + 1; else *ptr_limit = -off; return 0; case PTR_TO_MAP_VALUE: if (mask_to_left) { - *ptr_limit = ptr_reg->umax_value + ptr_reg->off; + *ptr_limit = ptr_reg->umax_value + ptr_reg->off + 1; } else { off = ptr_reg->smin_value + ptr_reg->off; *ptr_limit = ptr_reg->map_ptr->value_size - off;
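The off-by-one is easy to see with concrete numbers (simplified from retrieve_ptr_limit(); MAX_BPF_STACK is 512 in this kernel, and 'off' is the pointer's offset from the frame pointer, in [-MAX_BPF_STACK, -1]):

```c
#include <assert.h>

#define MAX_BPF_STACK 512

/* ptr_limit for a PTR_TO_STACK register when masking to the left,
 * before and after the fix. The limit is the number of bytes still
 * in bounds from the current position, including the pointed byte. */
static int ptr_limit_old(int off)   { return MAX_BPF_STACK + off; }
static int ptr_limit_fixed(int off) { return MAX_BPF_STACK + off + 1; }
```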
From: Filipe Manana fdmanana@suse.com
stable inclusion
from linux-4.19.183
commit 12dc6889bcff1bc2921a1587afca55ca4091b73e
CVE: CVE-2021-28964
--------------------------------
commit dbcc7d57bffc0c8cac9dac11bec548597d59a6a5 upstream.
While resolving backreferences, as part of a logical ino ioctl call or fiemap, we can end up hitting a BUG_ON() when replaying tree mod log operations of a root, triggering a stack trace like the following:
------------[ cut here ]------------ kernel BUG at fs/btrfs/ctree.c:1210! invalid opcode: 0000 [#1] SMP KASAN PTI CPU: 1 PID: 19054 Comm: crawl_335 Tainted: G W 5.11.0-2d11c0084b02-misc-next+ #89 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014 RIP: 0010:__tree_mod_log_rewind+0x3b1/0x3c0 Code: 05 48 8d 74 10 (...) RSP: 0018:ffffc90001eb70b8 EFLAGS: 00010297 RAX: 0000000000000000 RBX: ffff88812344e400 RCX: ffffffffb28933b6 RDX: 0000000000000007 RSI: dffffc0000000000 RDI: ffff88812344e42c RBP: ffffc90001eb7108 R08: 1ffff11020b60a20 R09: ffffed1020b60a20 R10: ffff888105b050f9 R11: ffffed1020b60a1f R12: 00000000000000ee R13: ffff8880195520c0 R14: ffff8881bc958500 R15: ffff88812344e42c FS: 00007fd1955e8700(0000) GS:ffff8881f5600000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 00007efdb7928718 CR3: 000000010103a006 CR4: 0000000000170ee0 Call Trace: btrfs_search_old_slot+0x265/0x10d0 ? lock_acquired+0xbb/0x600 ? btrfs_search_slot+0x1090/0x1090 ? free_extent_buffer.part.61+0xd7/0x140 ? free_extent_buffer+0x13/0x20 resolve_indirect_refs+0x3e9/0xfc0 ? lock_downgrade+0x3d0/0x3d0 ? __kasan_check_read+0x11/0x20 ? add_prelim_ref.part.11+0x150/0x150 ? lock_downgrade+0x3d0/0x3d0 ? __kasan_check_read+0x11/0x20 ? lock_acquired+0xbb/0x600 ? __kasan_check_write+0x14/0x20 ? do_raw_spin_unlock+0xa8/0x140 ? rb_insert_color+0x30/0x360 ? prelim_ref_insert+0x12d/0x430 find_parent_nodes+0x5c3/0x1830 ? resolve_indirect_refs+0xfc0/0xfc0 ? lock_release+0xc8/0x620 ? fs_reclaim_acquire+0x67/0xf0 ? lock_acquire+0xc7/0x510 ? lock_downgrade+0x3d0/0x3d0 ? lockdep_hardirqs_on_prepare+0x160/0x210 ? lock_release+0xc8/0x620 ? fs_reclaim_acquire+0x67/0xf0 ? lock_acquire+0xc7/0x510 ? poison_range+0x38/0x40 ? unpoison_range+0x14/0x40 ? trace_hardirqs_on+0x55/0x120 btrfs_find_all_roots_safe+0x142/0x1e0 ? find_parent_nodes+0x1830/0x1830 ? btrfs_inode_flags_to_xflags+0x50/0x50 iterate_extent_inodes+0x20e/0x580 ? 
tree_backref_for_extent+0x230/0x230 ? lock_downgrade+0x3d0/0x3d0 ? read_extent_buffer+0xdd/0x110 ? lock_downgrade+0x3d0/0x3d0 ? __kasan_check_read+0x11/0x20 ? lock_acquired+0xbb/0x600 ? __kasan_check_write+0x14/0x20 ? _raw_spin_unlock+0x22/0x30 ? __kasan_check_write+0x14/0x20 iterate_inodes_from_logical+0x129/0x170 ? iterate_inodes_from_logical+0x129/0x170 ? btrfs_inode_flags_to_xflags+0x50/0x50 ? iterate_extent_inodes+0x580/0x580 ? __vmalloc_node+0x92/0xb0 ? init_data_container+0x34/0xb0 ? init_data_container+0x34/0xb0 ? kvmalloc_node+0x60/0x80 btrfs_ioctl_logical_to_ino+0x158/0x230 btrfs_ioctl+0x205e/0x4040 ? __might_sleep+0x71/0xe0 ? btrfs_ioctl_get_supported_features+0x30/0x30 ? getrusage+0x4b6/0x9c0 ? __kasan_check_read+0x11/0x20 ? lock_release+0xc8/0x620 ? __might_fault+0x64/0xd0 ? lock_acquire+0xc7/0x510 ? lock_downgrade+0x3d0/0x3d0 ? lockdep_hardirqs_on_prepare+0x210/0x210 ? lockdep_hardirqs_on_prepare+0x210/0x210 ? __kasan_check_read+0x11/0x20 ? do_vfs_ioctl+0xfc/0x9d0 ? ioctl_file_clone+0xe0/0xe0 ? lock_downgrade+0x3d0/0x3d0 ? lockdep_hardirqs_on_prepare+0x210/0x210 ? __kasan_check_read+0x11/0x20 ? lock_release+0xc8/0x620 ? __task_pid_nr_ns+0xd3/0x250 ? lock_acquire+0xc7/0x510 ? __fget_files+0x160/0x230 ? __fget_light+0xf2/0x110 __x64_sys_ioctl+0xc3/0x100 do_syscall_64+0x37/0x80 entry_SYSCALL_64_after_hwframe+0x44/0xa9 RIP: 0033:0x7fd1976e2427 Code: 00 00 90 48 8b 05 (...) RSP: 002b:00007fd1955e5cf8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 RAX: ffffffffffffffda RBX: 00007fd1955e5f40 RCX: 00007fd1976e2427 RDX: 00007fd1955e5f48 RSI: 00000000c038943b RDI: 0000000000000004 RBP: 0000000001000000 R08: 0000000000000000 R09: 00007fd1955e6120 R10: 0000557835366b00 R11: 0000000000000246 R12: 0000000000000004 R13: 00007fd1955e5f48 R14: 00007fd1955e5f40 R15: 00007fd1955e5ef8 Modules linked in: ---[ end trace ec8931a1c36e57be ]---
(gdb) l *(__tree_mod_log_rewind+0x3b1) 0xffffffff81893521 is in __tree_mod_log_rewind (fs/btrfs/ctree.c:1210). 1205 * the modification. as we're going backwards, we do the 1206 * opposite of each operation here. 1207 */ 1208 switch (tm->op) { 1209 case MOD_LOG_KEY_REMOVE_WHILE_FREEING: 1210 BUG_ON(tm->slot < n); 1211 fallthrough; 1212 case MOD_LOG_KEY_REMOVE_WHILE_MOVING: 1213 case MOD_LOG_KEY_REMOVE: 1214 btrfs_set_node_key(eb, &tm->key, tm->slot);
Here's what happens to hit that BUG_ON():
1) We have one tree mod log user (through fiemap or the logical ino ioctl), with a sequence number of 1, so we have fs_info->tree_mod_seq == 1;
2) Another task is at ctree.c:balance_level() and we have eb X currently as the root of the tree, and we promote its single child, eb Y, as the new root.
Then, at ctree.c:balance_level(), we call:
tree_mod_log_insert_root(eb X, eb Y, 1);
3) At tree_mod_log_insert_root() we create tree mod log elements for each slot of eb X, of operation type MOD_LOG_KEY_REMOVE_WHILE_FREEING each with a ->logical pointing to ebX->start. These are placed in an array named tm_list. Lets assume there are N elements (N pointers in eb X);
4) Then, still at tree_mod_log_insert_root(), we create a tree mod log element of operation type MOD_LOG_ROOT_REPLACE, ->logical set to ebY->start, ->old_root.logical set to ebX->start, ->old_root.level set to the level of eb X and ->generation set to the generation of eb X;
5) Then tree_mod_log_insert_root() calls tree_mod_log_free_eb() with tm_list as argument. After that, tree_mod_log_free_eb() calls __tree_mod_log_insert() for each member of tm_list in reverse order, from highest slot in eb X, slot N - 1, to slot 0 of eb X;
6) __tree_mod_log_insert() sets the sequence number of each given tree mod log operation - it increments fs_info->tree_mod_seq and sets fs_info->tree_mod_seq as the sequence number of the given tree mod log operation.
This means that for the tm_list created at tree_mod_log_insert_root(), the element corresponding to slot 0 of eb X has the highest sequence number (1 + N), and the element corresponding to the last slot has the lowest sequence number (2);
7) Then, after inserting tm_list's elements into the tree mod log rbtree, the MOD_LOG_ROOT_REPLACE element is inserted, which gets the highest sequence number, which is N + 2;
8) Back to ctree.c:balance_level(), we free eb X by calling btrfs_free_tree_block() on it. Because eb X was created in the current transaction, has no other references and writeback did not happen for it, we add it back to the free space cache/tree;
9) Later some other task T allocates the metadata extent from eb X, since it is marked as free space in the space cache/tree, and uses it as a node for some other btree;
10) The tree mod log user task calls btrfs_search_old_slot(), which calls get_old_root(), and finally that calls __tree_mod_log_oldest_root() with time_seq == 1 and eb_root == eb Y;
11) First iteration of the while loop finds the tree mod log element with sequence number N + 2, for the logical address of eb Y and of type MOD_LOG_ROOT_REPLACE;
12) Because the operation type is MOD_LOG_ROOT_REPLACE, we don't break out of the loop, and set root_logical to point to tm->old_root.logical which corresponds to the logical address of eb X;
13) On the next iteration of the while loop, the call to tree_mod_log_search_oldest() returns the smallest tree mod log element for the logical address of eb X, which has a sequence number of 2, an operation type of MOD_LOG_KEY_REMOVE_WHILE_FREEING and corresponds to the old slot N - 1 of eb X (eb X had N items in it before being freed);
14) We then break out of the while loop and return the tree mod log operation of type MOD_LOG_ROOT_REPLACE (eb Y), and not the one for slot N - 1 of eb X, to get_old_root();
15) At get_old_root(), we process the MOD_LOG_ROOT_REPLACE operation and set "logical" to the logical address of eb X, which was the old root. We then call tree_mod_log_search() passing it the logical address of eb X and time_seq == 1;
16) Then before calling tree_mod_log_search(), task T adds a key to eb X, which results in adding a tree mod log operation of type MOD_LOG_KEY_ADD to the tree mod log - this is done at ctree.c:insert_ptr() - but after adding the tree mod log operation and before updating the number of items in eb X from 0 to 1...
17) The task at get_old_root() calls tree_mod_log_search() and gets the tree mod log operation of type MOD_LOG_KEY_ADD just added by task T. Then it enters the following if branch:
if (old_root && tm && tm->op != MOD_LOG_KEY_REMOVE_WHILE_FREEING) { (...) } (...)
Calls read_tree_block() for eb X, which gets a reference on eb X but does not lock it - task T has it locked. Then it clones eb X while it has nritems set to 0 in its header, before task T sets nritems to 1 in eb X's header. From here on we use the clone of eb X, which no other task has access to;
18) Then we call __tree_mod_log_rewind(), passing it the MOD_LOG_KEY_ADD mod log operation we just got from tree_mod_log_search() in the previous step and the cloned version of eb X;
19) At __tree_mod_log_rewind(), we set the local variable "n" to the number of items set in eb X's clone, which is 0. Then we enter the while loop, and in its first iteration we process the MOD_LOG_KEY_ADD operation, which just decrements "n" from 0 to (u32)-1, since "n" is declared with a type of u32. At the end of this iteration we call rb_next() to find the next tree mod log operation for eb X, that gives us the mod log operation of type MOD_LOG_KEY_REMOVE_WHILE_FREEING, for slot 0, with a sequence number of N + 1 (steps 3 to 6);
20) Then we go back to the top of the while loop and trigger the following BUG_ON():
(...) switch (tm->op) { case MOD_LOG_KEY_REMOVE_WHILE_FREEING: BUG_ON(tm->slot < n); fallthrough; (...)
Because "n" has a value of (u32)-1 (4294967295) and tm->slot is 0.
Fix this by taking a read lock on the extent buffer before cloning it at ctree.c:get_old_root(). This should be done regardless of the extent buffer having been freed and reused, as a concurrent task might be modifying it (while holding a write lock on it).
Reported-by: Zygo Blaxell ce3g8jdj@umail.furryterror.org Link: https://lore.kernel.org/linux-btrfs/20210227155037.GN28049@hungrycats.org/ Fixes: 834328a8493079 ("Btrfs: tree mod log's old roots could still be part of the tree") CC: stable@vger.kernel.org # 4.4+ Signed-off-by: Filipe Manana fdmanana@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Yang Yingliang yangyingliang@huawei.com Reviewed-by: Xiu Jianfeng xiujianfeng@huawei.com Signed-off-by: Yang Yingliang yangyingliang@huawei.com Signed-off-by: Cheng Jian cj.chengjian@huawei.com --- fs/btrfs/ctree.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c index 84ff398ae70b..718a08710687 100644 --- a/fs/btrfs/ctree.c +++ b/fs/btrfs/ctree.c @@ -1408,7 +1408,9 @@ get_old_root(struct btrfs_root *root, u64 time_seq) "failed to read tree block %llu from get_old_root", logical); } else { + btrfs_tree_read_lock(old); eb = btrfs_clone_extent_buffer(old); + btrfs_tree_read_unlock(old); free_extent_buffer(old); } } else if (old_root) {
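The fixed shape of get_old_root() - take the buffer's read lock, clone, drop the lock - can be sketched in userspace C (a flag stands in for btrfs_tree_read_lock(); names are illustrative, not btrfs code):

```c
#include <assert.h>
#include <string.h>

static int  eb_read_locked;             /* stands in for the eb's rwlock */
static char eb_data[16] = "nritems=1";  /* stands in for eb X's header */

/* Clone under the read lock, so a concurrent writer (which must hold
 * the write lock) cannot change nritems mid-copy - the race that let
 * the clone be taken with a stale nritems of 0. */
static void clone_eb(char *out, size_t len)
{
    eb_read_locked = 1;         /* btrfs_tree_read_lock(old) */
    memcpy(out, eb_data, len);  /* btrfs_clone_extent_buffer(old) */
    eb_read_locked = 0;         /* btrfs_tree_read_unlock(old) */
}
```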
From: Kan Liang kan.liang@linux.intel.com
stable inclusion
from linux-4.19.183
commit b35214c541365c7dd7c9d5f44a02b0633a1cc83f
CVE: CVE-2021-28971
--------------------------------
commit d88d05a9e0b6d9356e97129d4ff9942d765f46ea upstream.
A repeatable crash can be triggered by the perf_fuzzer on some Haswell system. https://lore.kernel.org/lkml/7170d3b-c17f-1ded-52aa-cc6d9ae999f4@maine.edu/
For some old CPUs (HSW and earlier), the PEBS status in a PEBS record may be mistakenly set to 0. To minimize the impact of the defect, the commit was introduced to try to avoid dropping the PEBS record for some cases. It adds a check in the intel_pmu_drain_pebs_nhm(), and updates the local pebs_status accordingly. However, it doesn't correct the PEBS status in the PEBS record, which may trigger the crash, especially for the large PEBS.
It's possible that all the PEBS records in a large PEBS have the PEBS status 0. If so, the first get_next_pebs_record_by_bit() in __intel_pmu_pebs_event() returns NULL, leaving 'at' == NULL. Since it's a large PEBS, the 'count' parameter must be > 1. The second get_next_pebs_record_by_bit() will then crash.
Besides the local pebs_status, correct the PEBS status in the PEBS record as well.
Fixes: 01330d7288e0 ("perf/x86: Allow zero PEBS status with only single active event")
Reported-by: Vince Weaver <vincent.weaver@maine.edu>
Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/1615555298-140216-1-git-send-email-kan.liang@linux...
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Reviewed-by: Jian Cheng <cj.chengjian@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
---
 arch/x86/events/intel/ds.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index e91814d1a27f..ad57183333f6 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -1555,7 +1555,7 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs)
 		 */
 		if (!pebs_status && cpuc->pebs_enabled &&
 			!(cpuc->pebs_enabled & (cpuc->pebs_enabled-1)))
-			pebs_status = cpuc->pebs_enabled;
+			pebs_status = p->status = cpuc->pebs_enabled;
 
 		bit = find_first_bit((unsigned long *)&pebs_status,
 					x86_pmu.max_pebs_events);
From: Tyrel Datwyler <tyreld@linux.ibm.com>
stable inclusion
from linux-4.19.183
commit f27a00f0d5b0646a52633e98f5fc3ef719004dcd
CVE: CVE-2021-28972
--------------------------------
commit cc7a0bb058b85ea03db87169c60c7cfdd5d34678 upstream.
Both add_slot_store() and remove_slot_store() try to fix up the drc_name copied from the store buffer by placing a NUL terminator at nbyte + 1 or in place of a '\n' if present. However, the static buffer that we copy the drc_name data into is not zeroed and can contain anything past the n-th byte.
This is problematic if a '\n' byte appears in that buffer after nbytes and the string copied into the store buffer was not NUL terminated to start with as the strchr() search for a '\n' byte will mark this incorrectly as the end of the drc_name string resulting in a drc_name string that contains garbage data after the n-th byte.
Additionally it will cause us to overwrite that '\n' byte on the stack with NUL, potentially corrupting data on the stack.
The following debugging shows an example of the drmgr utility writing "PHB 4543" to the add_slot sysfs attribute, but add_slot_store() logging a corrupted string value.
  drmgr: drmgr: -c phb -a -s PHB 4543 -d 1
  add_slot_store: drc_name = PHB 4543°|<82>!, rc = -19
Fix this by using strscpy() instead of memcpy() to ensure the string is NUL terminated when copied into the static drc_name buffer. Further, since the string is now NUL terminated the code only needs to change '\n' to '\0' when present.
Cc: stable@vger.kernel.org
Signed-off-by: Tyrel Datwyler <tyreld@linux.ibm.com>
[mpe: Reformat change log and add mention of possible stack corruption]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210315214821.452959-1-tyreld@linux.ibm.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Reviewed-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
---
 drivers/pci/hotplug/rpadlpar_sysfs.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)
diff --git a/drivers/pci/hotplug/rpadlpar_sysfs.c b/drivers/pci/hotplug/rpadlpar_sysfs.c
index cdbfa5df3a51..dbfa0b55d31a 100644
--- a/drivers/pci/hotplug/rpadlpar_sysfs.c
+++ b/drivers/pci/hotplug/rpadlpar_sysfs.c
@@ -34,12 +34,11 @@ static ssize_t add_slot_store(struct kobject *kobj, struct kobj_attribute *attr,
 	if (nbytes >= MAX_DRC_NAME_LEN)
 		return 0;
 
-	memcpy(drc_name, buf, nbytes);
+	strscpy(drc_name, buf, nbytes + 1);
 
 	end = strchr(drc_name, '\n');
-	if (!end)
-		end = &drc_name[nbytes];
-	*end = '\0';
+	if (end)
+		*end = '\0';
 
 	rc = dlpar_add_slot(drc_name);
 	if (rc)
@@ -65,12 +64,11 @@ static ssize_t remove_slot_store(struct kobject *kobj,
 	if (nbytes >= MAX_DRC_NAME_LEN)
 		return 0;
 
-	memcpy(drc_name, buf, nbytes);
+	strscpy(drc_name, buf, nbytes + 1);
 
 	end = strchr(drc_name, '\n');
-	if (!end)
-		end = &drc_name[nbytes];
-	*end = '\0';
+	if (end)
+		*end = '\0';
 
 	rc = dlpar_remove_slot(drc_name);
 	if (rc)
From: Dan Carpenter <dan.carpenter@oracle.com>
stable inclusion
from linux-4.19.181
commit eda4378094de16090d74eacea3d8c10f7719ed25
CVE: CVE-2021-28660
--------------------------------
commit 74b6b20df8cfe90ada777d621b54c32e69e27cd7 upstream.
This code has a check to prevent read overflow but it needs another check to prevent writing beyond the end of the ->ssid[] array.
Fixes: a2c60d42d97c ("staging: r8188eu: Add files for new driver - part 16")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Cc: stable <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/YEHymwsnHewzoam7@mwanda
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
---
 drivers/staging/rtl8188eu/os_dep/ioctl_linux.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/staging/rtl8188eu/os_dep/ioctl_linux.c b/drivers/staging/rtl8188eu/os_dep/ioctl_linux.c
index bee3c3a7a7a9..72791920f8a7 100644
--- a/drivers/staging/rtl8188eu/os_dep/ioctl_linux.c
+++ b/drivers/staging/rtl8188eu/os_dep/ioctl_linux.c
@@ -1158,9 +1158,11 @@ static int rtw_wx_set_scan(struct net_device *dev, struct iw_request_info *a,
 				break;
 			}
 			sec_len = *(pos++); len -= 1;
-			if (sec_len > 0 && sec_len <= len) {
+			if (sec_len > 0 &&
+			    sec_len <= len &&
+			    sec_len <= 32) {
 				ssid[ssid_index].SsidLength = sec_len;
-				memcpy(ssid[ssid_index].Ssid, pos, ssid[ssid_index].SsidLength);
+				memcpy(ssid[ssid_index].Ssid, pos, sec_len);
 				ssid_index++;
 			}
 			pos += sec_len;
From: Liu Shixin <liushixin2@huawei.com>
hulk inclusion
category: bugfix
bugzilla: 47240
CVE: NA
-------------------------------------------------
Patch a222f3415868 ("mm: generalize putback scan functions") combined move_active_pages_to_lru() and putback_inactive_pages() into a single move_pages_to_lru(). But we didn't backport that patch, so move_active_pages_to_lru() still exists. When we moved mem_cgroup_uncharge() in 7ae88534cdd9 ("mm: move mem_cgroup_uncharge out of __page_cache_release()"), move_active_pages_to_lru() should have been changed as well.
Fixes: 7ae88534cdd9 ("mm: move mem_cgroup_uncharge out of __page_cache_release()")
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
---
 mm/vmscan.c | 1 -
 1 file changed, 1 deletion(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index cabf2c290be5..a8e563689164 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2067,7 +2067,6 @@ static unsigned move_active_pages_to_lru(struct lruvec *lruvec,
 
 		if (unlikely(PageCompound(page))) {
 			spin_unlock_irq(&pgdat->lru_lock);
-			mem_cgroup_uncharge(page);
 			(*get_compound_page_dtor(page))(page);
 			spin_lock_irq(&pgdat->lru_lock);
 		} else