Kernel

[PATCH kernel-4.19 31/57] ALSA: hda/realtek: call alc_update_headset_mode() in hp_automute_hook
by Yang Yingliang 15 Apr '21
From: Hui Wang <hui.wang(a)canonical.com>
commit e54f30befa7990b897189b44a56c1138c6bfdbb5 upstream.
We found that alc_update_headset_mode() is not called on some machines
when unplugging the headset. As a result, the ALC_HEADSET_MODE_UNPLUGGED
mode can't be set and current_headset_type is not cleared; if users plug
in a different type of headset next time, determine_headset_type() will
not be called and the audio jack is set to the previous headset type.
This issue happens on Dell machines that connect the dmic to the PCH if
gnome-sound-settings is open when the headset is unplugged. Those
machines disable auto-mute via ucm and have no internal mic in the input
source, so update_headset_mode() is not called by cap_sync_hook or
automute_hook when unplugging; and because gnome-sound-settings is open,
the codec never enters the runtime_suspend state, so update_headset_mode()
is not called by alc_resume either. In this case hp_automute_hook is
called when unplugging, so add an update_headset_mode() call to that
function.
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Hui Wang <hui.wang(a)canonical.com>
Link: https://lore.kernel.org/r/20210320091542.6748-2-hui.wang@canonical.com
Signed-off-by: Takashi Iwai <tiwai(a)suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
sound/pci/hda/patch_realtek.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
index d94cfae2920f4..f456e5f67824c 100644
--- a/sound/pci/hda/patch_realtek.c
+++ b/sound/pci/hda/patch_realtek.c
@@ -5054,6 +5054,7 @@ static void alc_update_headset_jack_cb(struct hda_codec *codec,
struct alc_spec *spec = codec->spec;
spec->current_headset_type = ALC_HEADSET_TYPE_UNKNOWN;
snd_hda_gen_hp_automute(codec, jack);
+ alc_update_headset_mode(codec);
}
static void alc_probe_headset_mode(struct hda_codec *codec)
--
2.25.1

[PATCH kernel-4.19 30/57] ALSA: hda/realtek: fix a determine_headset_type issue for a Dell AIO
by Yang Yingliang 15 Apr '21
From: Hui Wang <hui.wang(a)canonical.com>
commit febf22565549ea7111e7d45e8f2d64373cc66b11 upstream.
We found a recording issue on a Dell AIO: users plug in a headset-mic
and select it from the UI, but can't record any sound from it. The root
cause is that determine_headset_type() returns the wrong type, e.g.
users plug in a CTIA-type headset but the function reports OMTP.
On this machine the internal mic is not connected to the codec and the
"Input Source" defaults to headset mic, so determine_headset_type() is
called immediately when a headset is plugged in. The codec on this AIO
is an alc274, whose delay in determine_headset_type() is only 80 ms.
That is too short to determine the headset type correctly; the fail
rate is nearly 99% when the headset is plugged in at normal speed.
Other codecs use delays of several hundred ms, so change the delay to
850 ms for the alc2x4 series. After this change the fail rate is zero
unless users deliberately plug the headset in slowly.
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Hui Wang <hui.wang(a)canonical.com>
Link: https://lore.kernel.org/r/20210320091542.6748-1-hui.wang@canonical.com
Signed-off-by: Takashi Iwai <tiwai(a)suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
sound/pci/hda/patch_realtek.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
index 6a419b23adbc8..d94cfae2920f4 100644
--- a/sound/pci/hda/patch_realtek.c
+++ b/sound/pci/hda/patch_realtek.c
@@ -4870,7 +4870,7 @@ static void alc_determine_headset_type(struct hda_codec *codec)
case 0x10ec0274:
case 0x10ec0294:
alc_process_coef_fw(codec, coef0274);
- msleep(80);
+ msleep(850);
val = alc_read_coef_idx(codec, 0x46);
is_ctia = (val & 0x00f0) == 0x00f0;
break;
--
2.25.1

15 Apr '21
From: Jesper Dangaard Brouer <brouer(a)redhat.com>
commit 6306c1189e77a513bf02720450bb43bd4ba5d8ae upstream.
Multiple BPF helpers that can manipulate/increase the size of the SKB
use __bpf_skb_max_len() as the max length. This function limits the
size against the current net_device MTU (skb->dev->mtu).
When a BPF prog grows the packet size, it should not be limited to the
MTU. The MTU is a transmit limitation, and software receiving this
packet should be allowed to increase the size. Furthermore, the current
MTU check in __bpf_skb_max_len uses the MTU from the ingress/current
net_device, which in case of redirects is the wrong net_device.
This patch keeps a sanity max limit of SKB_MAX_ALLOC (16KiB); the real
limit is enforced elsewhere in the system. Jesper's testing[1] showed it
was not possible to exceed 8KiB when expanding the SKB size via a BPF
helper. The limiting factor is the define KMALLOC_MAX_CACHE_SIZE, which
is 8192 for the SLUB allocator (CONFIG_SLUB) when PAGE_SIZE is 4096;
this define takes effect because the helpers are called from softirq
context (see __gfp_pfmemalloc_flags() and __do_kmalloc_node()). Jakub's
testing showed that frames above 16KiB can cause NICs to reset (but not
crash). Keep the sanity limit at this level since the memory layer can
differ based on kernel config.
[1] https://github.com/xdp-project/bpf-examples/tree/master/MTU-tests
Signed-off-by: Jesper Dangaard Brouer <brouer(a)redhat.com>
Signed-off-by: Daniel Borkmann <daniel(a)iogearbox.net>
Acked-by: John Fastabend <john.fastabend(a)gmail.com>
Link: https://lore.kernel.org/bpf/161287788936.790810.2937823995775097177.stgit@f…
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
net/core/filter.c | 12 ++++--------
1 file changed, 4 insertions(+), 8 deletions(-)
diff --git a/net/core/filter.c b/net/core/filter.c
index a1077e879aa42..0aeb130aa0708 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -2836,18 +2836,14 @@ static int bpf_skb_net_shrink(struct sk_buff *skb, u32 len_diff)
return 0;
}
-static u32 __bpf_skb_max_len(const struct sk_buff *skb)
-{
- return skb->dev ? skb->dev->mtu + skb->dev->hard_header_len :
- SKB_MAX_ALLOC;
-}
+#define BPF_SKB_MAX_LEN SKB_MAX_ALLOC
static int bpf_skb_adjust_net(struct sk_buff *skb, s32 len_diff)
{
bool trans_same = skb->transport_header == skb->network_header;
u32 len_cur, len_diff_abs = abs(len_diff);
u32 len_min = bpf_skb_net_base_len(skb);
- u32 len_max = __bpf_skb_max_len(skb);
+ u32 len_max = BPF_SKB_MAX_LEN;
__be16 proto = skb_protocol(skb, true);
bool shrink = len_diff < 0;
int ret;
@@ -2926,7 +2922,7 @@ static int bpf_skb_trim_rcsum(struct sk_buff *skb, unsigned int new_len)
static inline int __bpf_skb_change_tail(struct sk_buff *skb, u32 new_len,
u64 flags)
{
- u32 max_len = __bpf_skb_max_len(skb);
+ u32 max_len = BPF_SKB_MAX_LEN;
u32 min_len = __bpf_skb_min_len(skb);
int ret;
@@ -3002,7 +2998,7 @@ static const struct bpf_func_proto sk_skb_change_tail_proto = {
static inline int __bpf_skb_change_head(struct sk_buff *skb, u32 head_room,
u64 flags)
{
- u32 max_len = __bpf_skb_max_len(skb);
+ u32 max_len = BPF_SKB_MAX_LEN;
u32 new_len = skb->len + head_room;
int ret;
--
2.25.1
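
A quick illustration for readers of this backport: below is a minimal
sketch (my own, not part of the patch) of a TC classifier program that
exercises one of the affected helpers, bpf_skb_change_tail(). After this
change its grow path is capped by SKB_MAX_ALLOC rather than by the
ingress device's MTU. The section name and the 256-byte growth are
arbitrary choices for the example.

/* Sketch only: grow the packet tail by 256 bytes from a tc hook. */
#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>

SEC("classifier")
int grow_tail(struct __sk_buff *skb)
{
	/* Ask for 256 extra bytes of zeroed tailroom; the runtime limit
	 * on the new length is what this patch relaxes to SKB_MAX_ALLOC.
	 */
	if (bpf_skb_change_tail(skb, skb->len + 256, 0) < 0)
		return TC_ACT_SHOT;	/* grow rejected: drop */

	return TC_ACT_OK;
}

char _license[] SEC("license") = "GPL";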

15 Apr '21
From: Lucas Tanure <tanureal(a)opensource.cirrus.com>
[ Upstream commit 2bdc4f5c6838f7c3feb4fe68e4edbeea158ec0a2 ]
Remove the hard-coded 32-bit width and replace it with the correct
width calculated from params_width().
Signed-off-by: Lucas Tanure <tanureal(a)opensource.cirrus.com>
Link: https://lore.kernel.org/r/20210305173442.195740-3-tanureal@opensource.cirru…
Signed-off-by: Mark Brown <broonie(a)kernel.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
sound/soc/codecs/cs42l42.c | 47 ++++++++++++++++++--------------------
sound/soc/codecs/cs42l42.h | 1 -
2 files changed, 22 insertions(+), 26 deletions(-)
diff --git a/sound/soc/codecs/cs42l42.c b/sound/soc/codecs/cs42l42.c
index c7baa19bf3178..a5bd9cff70856 100644
--- a/sound/soc/codecs/cs42l42.c
+++ b/sound/soc/codecs/cs42l42.c
@@ -695,24 +695,6 @@ static int cs42l42_pll_config(struct snd_soc_component *component)
CS42L42_CLK_OASRC_SEL_MASK,
CS42L42_CLK_OASRC_SEL_12 <<
CS42L42_CLK_OASRC_SEL_SHIFT);
- /* channel 1 on low LRCLK, 32 bit */
- snd_soc_component_update_bits(component,
- CS42L42_ASP_RX_DAI0_CH1_AP_RES,
- CS42L42_ASP_RX_CH_AP_MASK |
- CS42L42_ASP_RX_CH_RES_MASK,
- (CS42L42_ASP_RX_CH_AP_LOW <<
- CS42L42_ASP_RX_CH_AP_SHIFT) |
- (CS42L42_ASP_RX_CH_RES_32 <<
- CS42L42_ASP_RX_CH_RES_SHIFT));
- /* Channel 2 on high LRCLK, 32 bit */
- snd_soc_component_update_bits(component,
- CS42L42_ASP_RX_DAI0_CH2_AP_RES,
- CS42L42_ASP_RX_CH_AP_MASK |
- CS42L42_ASP_RX_CH_RES_MASK,
- (CS42L42_ASP_RX_CH_AP_HI <<
- CS42L42_ASP_RX_CH_AP_SHIFT) |
- (CS42L42_ASP_RX_CH_RES_32 <<
- CS42L42_ASP_RX_CH_RES_SHIFT));
if (pll_ratio_table[i].mclk_src_sel == 0) {
/* Pass the clock straight through */
snd_soc_component_update_bits(component,
@@ -828,14 +810,29 @@ static int cs42l42_pcm_hw_params(struct snd_pcm_substream *substream,
{
struct snd_soc_component *component = dai->component;
struct cs42l42_private *cs42l42 = snd_soc_component_get_drvdata(component);
- int retval;
+ unsigned int width = (params_width(params) / 8) - 1;
+ unsigned int val = 0;
cs42l42->srate = params_rate(params);
- cs42l42->swidth = params_width(params);
- retval = cs42l42_pll_config(component);
+ switch(substream->stream) {
+ case SNDRV_PCM_STREAM_PLAYBACK:
+ val |= width << CS42L42_ASP_RX_CH_RES_SHIFT;
+ /* channel 1 on low LRCLK */
+ snd_soc_component_update_bits(component, CS42L42_ASP_RX_DAI0_CH1_AP_RES,
+ CS42L42_ASP_RX_CH_AP_MASK |
+ CS42L42_ASP_RX_CH_RES_MASK, val);
+ /* Channel 2 on high LRCLK */
+ val |= CS42L42_ASP_RX_CH_AP_HI << CS42L42_ASP_RX_CH_AP_SHIFT;
+ snd_soc_component_update_bits(component, CS42L42_ASP_RX_DAI0_CH2_AP_RES,
+ CS42L42_ASP_RX_CH_AP_MASK |
+ CS42L42_ASP_RX_CH_RES_MASK, val);
+ break;
+ default:
+ break;
+ }
- return retval;
+ return cs42l42_pll_config(component);
}
static int cs42l42_set_sysclk(struct snd_soc_dai *dai,
@@ -900,9 +897,9 @@ static int cs42l42_digital_mute(struct snd_soc_dai *dai, int mute)
return 0;
}
-#define CS42L42_FORMATS (SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S18_3LE | \
- SNDRV_PCM_FMTBIT_S20_3LE | SNDRV_PCM_FMTBIT_S24_LE | \
- SNDRV_PCM_FMTBIT_S32_LE)
+#define CS42L42_FORMATS (SNDRV_PCM_FMTBIT_S16_LE |\
+ SNDRV_PCM_FMTBIT_S24_LE |\
+ SNDRV_PCM_FMTBIT_S32_LE )
static const struct snd_soc_dai_ops cs42l42_ops = {
diff --git a/sound/soc/codecs/cs42l42.h b/sound/soc/codecs/cs42l42.h
index 9d04ed75e5c8f..23b1a63315cab 100644
--- a/sound/soc/codecs/cs42l42.h
+++ b/sound/soc/codecs/cs42l42.h
@@ -761,7 +761,6 @@ struct cs42l42_private {
struct completion pdn_done;
u32 sclk;
u32 srate;
- u32 swidth;
u8 plug_state;
u8 hs_type;
u8 ts_inv;
--
2.25.1
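
A note on the width math above: the ASP RX CH_RES field encodes the
sample width as bytes minus one, which is exactly what the
(params_width(params) / 8) - 1 expression computes. A standalone sketch
(illustrative only, not driver code) of the mapping:

/* CH_RES encoding used above: 16 bits -> 1, 24 -> 2, 32 -> 3. */
#include <stdio.h>

static unsigned int cs42l42_ch_res(unsigned int width_bits)
{
	return (width_bits / 8) - 1;
}

int main(void)
{
	unsigned int widths[] = { 16, 24, 32 };

	for (int i = 0; i < 3; i++)
		printf("S%u_LE -> CH_RES %u\n", widths[i],
		       cs42l42_ch_res(widths[i]));
	return 0;
}
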
fix CVE-2021-29154
Piotr Krysiuk (2):
bpf, x86: Validate computation of branch displacements for x86-64
bpf, x86: Validate computation of branch displacements for x86-32
arch/x86/net/bpf_jit_comp.c | 11 ++++++++++-
arch/x86/net/bpf_jit_comp32.c | 11 ++++++++++-
2 files changed, 20 insertions(+), 2 deletions(-)
--
2.25.1
From: Xiongfeng Wang <wangxiongfeng2(a)huawei.com>
hulk inclusion
category: feature
bugzilla: 47439
CVE: NA
-------------------------------------------------
Enable CONFIG_USERSWAP for hulk_defconfig and openeuler_defconfig
Signed-off-by: Xiongfeng Wang <wangxiongfeng2(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Signed-off-by: Cheng Jian <cj.chengjian(a)huawei.com>
---
arch/arm64/configs/hulk_defconfig | 1 +
arch/arm64/configs/openeuler_defconfig | 1 +
2 files changed, 2 insertions(+)
diff --git a/arch/arm64/configs/hulk_defconfig b/arch/arm64/configs/hulk_defconfig
index b385033d6192..8a85b0cf7103 100644
--- a/arch/arm64/configs/hulk_defconfig
+++ b/arch/arm64/configs/hulk_defconfig
@@ -962,6 +962,7 @@ CONFIG_TRANSPARENT_HUGE_PAGECACHE=y
CONFIG_CLEANCACHE=y
CONFIG_FRONTSWAP=y
CONFIG_SHRINK_PAGECACHE=y
+CONFIG_USERSWAP=y
CONFIG_CMA=y
# CONFIG_CMA_DEBUG is not set
# CONFIG_CMA_DEBUGFS is not set
diff --git a/arch/arm64/configs/openeuler_defconfig b/arch/arm64/configs/openeuler_defconfig
index d9a43adebc16..19b1f9bc31a3 100644
--- a/arch/arm64/configs/openeuler_defconfig
+++ b/arch/arm64/configs/openeuler_defconfig
@@ -955,6 +955,7 @@ CONFIG_TRANSPARENT_HUGE_PAGECACHE=y
CONFIG_CLEANCACHE=y
CONFIG_FRONTSWAP=y
CONFIG_SHRINK_PAGECACHE=y
+CONFIG_USERSWAP=y
CONFIG_CMA=y
# CONFIG_CMA_DEBUG is not set
# CONFIG_CMA_DEBUGFS is not set
--
2.25.1

[PATCH openEuler-1.0-LTS] sched/fair: fix kabi broken due to adding idle_h_nr_running in cfs_rq
by Cheng Jian 14 Apr '21
hulk inclusion
category: bugfix
bugzilla: 38260, https://bugzilla.openeuler.org/show_bug.cgi?id=22
CVE: NA
---------------------------
Commit 92010bb6d6b8 ("sched/fair: Start tracking SCHED_IDLE tasks count in cfs_rq")
broke KABI; fix it.
Fixes: 92010bb6d6b8 ("sched/fair: Start tracking SCHED_IDLE tasks count in cfs_rq")
Signed-off-by: Cheng Jian <cj.chengjian(a)huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
---
kernel/sched/sched.h | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 2e63aa145030..f8c29f1af2d0 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -490,7 +490,6 @@ struct cfs_rq {
unsigned long runnable_weight;
unsigned int nr_running;
unsigned int h_nr_running; /* SCHED_{NORMAL,BATCH,IDLE} */
- unsigned int idle_h_nr_running; /* SCHED_IDLE */
u64 exec_clock;
u64 min_vruntime;
@@ -574,7 +573,15 @@ struct cfs_rq {
#endif /* CONFIG_CFS_BANDWIDTH */
#endif /* CONFIG_FAIR_GROUP_SCHED */
+#ifndef __GENKSYMS__
+ union {
+ unsigned int idle_h_nr_running; /* SCHED_IDLE */
+ unsigned long idle_h_nr_running_padding; /* SCHED_IDLE padding for KABI */
+ };
+#else
KABI_RESERVE(1)
+#endif
+
KABI_RESERVE(2)
};
--
2.25.1
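
The trick used above is a common KABI-preservation pattern and is worth
spelling out. A generic sketch (struct and member names are illustrative,
not the scheduler's): the new member lives in a union overlaying a
reserved slot of the same size, and the union is hidden from genksyms so
the computed symbol CRCs, and therefore the KABI, stay unchanged.

/* Illustrative sketch of the KABI-preserving union, not kernel code. */
struct example {
	unsigned long a;

#ifndef __GENKSYMS__
	union {
		unsigned int  new_field;	/* new member */
		unsigned long new_field_pad;	/* keeps the slot's size */
	};
#else
	unsigned long reserved1;	/* the layout genksyms still sees */
#endif
};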

14 Apr '21
update openEuler 20.03 @ 20210414 step 1
Aleksandr Miloserdov (2):
scsi: target: core: Add cmd length set before cmd complete
scsi: target: core: Prevent underflow for service actions
Anna-Maria Behnsen (1):
hrtimer: Update softirq_expires_next correctly after
__hrtimer_get_next_event()
Arnd Bergmann (1):
mm/vmalloc.c: avoid bogus -Wmaybe-uninitialized warning
Cheng Jian (5):
sched/fair: Optimize select_idle_cpu
disable stealing by default
sched/fair: introduce SCHED_STEAL
config: enable CONFIG_SCHED_STEAL by default
sched/fair: fix try_steal compile error
Dan Carpenter (1):
ocfs2: fix a use after free on error
Daniel Borkmann (1):
net: Fix gro aggregation for udp encaps with zero csum
Daniel Kobras (1):
sunrpc: fix refcount leak for rpc auth modules
Daniel Wagner (2):
block: Use non _rcu version of list functions for tag_set_list
block: Suppress uevent for hidden device when removed
David Rientjes (1):
KVM: SVM: Periodically schedule when unregistering regions on destroy
Eric Biggers (1):
random: fix the RNDRESEEDCRNG ioctl
Eric Dumazet (6):
net: qrtr: fix a kernel-infoleak in qrtr_recvmsg()
tcp: fix SO_RCVLOWAT related hangs under mem pressure
ipv6: icmp6: avoid indirect call for icmpv6_send()
tcp: annotate tp->copied_seq lockless reads
tcp: annotate tp->write_seq lockless reads
tcp: add sanity tests to TCP_QUEUE_SEQ
Fangrui Song (1):
module: Ignore _GLOBAL_OFFSET_TABLE_ when warning for undefined
symbols
Florian Westphal (1):
netfilter: ctnetlink: fix dump of the expect mask attribute
Frank Sorenson (1):
NFS: Correct size calculation for create reply length
Geert Uytterhoeven (1):
PCI: Fix pci_register_io_range() memory leak
Guo Fan (2):
userswap: add a new flag 'MAP_REPLACE' for mmap()
userswap: support userswap via userfaultfd
Hillf Danton (1):
mm/gup: Let __get_user_pages_locked() return -EINTR for fatal signal
Jan Beulich (1):
xen-blkback: don't leak persistent grants from xen_blkbk_map()
Jan Kara (2):
bfq: Avoid false bfq queue merging
ext4: add reclaim checks to xattr code
Jason A. Donenfeld (6):
icmp: introduce helper for nat'd source address in network device
context
icmp: allow icmpv6_ndo_send to work with CONFIG_IPV6=n
gtp: use icmp_ndo_send helper
sunvnet: use icmp_ndo_send helper
xfrm: interface: use icmp_ndo_send helper
net: icmp: pass zeroed opts from icmp{,v6}_ndo_send before sending
Jeffle Xu (3):
dm table: fix iterate_devices based device capability checks
dm table: fix DAX iterate_devices based device capability checks
dm table: fix zoned iterate_devices based device capability checks
Jens Axboe (1):
swap: fix swapfile read/write offset
JeongHyeon Lee (1):
dm verity: add root hash pkcs#7 signature verification
Kefeng Wang (1):
mm: slub: Expanded the scope of corrupted freelist workaround
Kuppuswamy Sathyanarayanan (1):
mm/vmalloc.c: fix percpu free VM area search criteria
Leon Romanovsky (1):
ipv6: silence compilation warning for non-IPV6 builds
Li Xinhai (1):
mm/hugetlb.c: fix unnecessary address expansion of pmd sharing
Linus Torvalds (1):
Revert "mm, slub: consider rest of partial list if acquire_slab()
fails"
Marc Zyngier (1):
arm64: Add missing ISB after invalidating TLB in __primary_switch
Marco Elver (1):
net: fix up truesize of cloned skb in skb_prepare_for_shift()
Mark Tomlinson (3):
Revert "netfilter: x_tables: Switch synchronization to RCU"
netfilter: x_tables: Use correct memory barriers.
Revert "netfilter: x_tables: Update remaining dereference to RCU"
Matthew Wilcox (Oracle) (1):
include/linux/sched/mm.h: use rcu_dereference in in_vfork()
Miaohe Lin (3):
mm/memory.c: fix potential pte_unmap_unlock pte error
mm/hugetlb: fix potential double free in hugetlb_register_node() error
path
mm/rmap: fix potential pte_unmap on an not mapped pte
Michael Braun (1):
gianfar: fix jumbo packets+napi+rx overrun crash
Michal Hocko (1):
mm, mempolicy: fix up gup usage in lookup_node
Mike Kravetz (2):
hugetlb: fix copy_huge_page_from_user contig page struct assumption
hugetlb: fix update_and_free_page contig page struct assumption
Mikulas Patocka (4):
blk-settings: align max_sectors on "logical_block_size" boundary
dm: fix deadlock when swapping to encrypted device
dm bufio: subtract the number of initial sectors in
dm_bufio_get_device_size
dm ioctl: fix out of bounds array access when no devices
Ming Lei (1):
block: respect queue limit of max discard segment
Muchun Song (1):
printk: fix deadlock when kernel panic
NeilBrown (1):
x86: fix seq_file iteration for pat/memtype.c
Oleg Nesterov (1):
kernel, fs: Introduce and use set_restart_fn() and
arch_set_restart_data()
Pan Bian (1):
isofs: release buffer head before return
Paulo Alcantara (1):
cifs: return proper error code in statfs(2)
Pavel Tatashin (1):
arm64: kdump: update ppos when reading elfcorehdr
Peter Xu (4):
mm: allow VM_FAULT_RETRY for multiple times
mm/gup: allow VM_FAULT_RETRY for multiple times
mm/gup: fix fixup_user_fault() on multiple retries
mm/mempolicy: Allow lookup_node() to handle fatal signal
Peter Zijlstra (2):
jump_label/lockdep: Assert we hold the hotplug lock for _cpuslocked()
operations
locking/static_key: Fix false positive warnings on concurrent dec/inc
Rafael J. Wysocki (1):
ACPI: property: Fix fwnode string properties matching
Rustam Kovhaev (1):
KVM: fix memory leak in kvm_io_bus_unregister_dev()
Sagi Grimberg (1):
nvme-rdma: fix possible hang when failing to set io queues
Sakari Ailus (1):
media: v4l: ioctl: Fix memory leak in video_usercopy
Shaoying Xu (1):
arm64 module: set plt* section addresses to 0x0
Shuah Khan (1):
usbip: fix stub_dev usbip_sockfd_store() races leading to gpf
Steve Sistare (10):
sched: Provide sparsemask, a reduced contention bitmap
sched/topology: Provide hooks to allocate data shared per LLC
sched/topology: Provide cfs_overload_cpus bitmap
sched/fair: Dynamically update cfs_overload_cpus
sched/fair: Hoist idle_stamp up from idle_balance
sched/fair: Generalize the detach_task interface
sched/fair: Provide can_migrate_task_llc
sched/fair: Steal work from an overloaded CPU when CPU goes idle
sched/fair: disable stealing if too many NUMA nodes
sched/fair: Provide idle search schedstats
Steven Rostedt (VMware) (1):
tracepoint: Do not fail unregistering a probe due to memory failure
Thomas Gleixner (1):
locking/mutex: Fix non debug version of mutex_lock_io_nested()
Uladzislau Rezki (Sony) (3):
mm/vmalloc.c: keep track of free blocks for vmap allocation
mm/vmap: add DEBUG_AUGMENT_PROPAGATE_CHECK macro
mm/vmap: add DEBUG_AUGMENT_LOWEST_MATCH_CHECK macro
Vasily Averin (1):
netfilter: x_tables: gpf inside xt_find_revision()
Vincent Whitchurch (1):
cifs: Fix preauth hash corruption
Viresh Kumar (4):
sched/core: Create task_has_idle_policy() helper
sched/fair: Start tracking SCHED_IDLE tasks count in cfs_rq
sched/fair: Fall back to sched-idle CPU if idle CPU isn't found
sched/fair: Make sched-idle CPU selection consistent throughout
Yufen Yu (1):
block: only update parent bi_status when bio fail
Yumei Huang (1):
xfs: Fix assert failure in xfs_setattr_size()
wanglin (1):
RDMA/hns: fix timer, gid_type, scc cfg
zhangyi (F) (1):
ext4: do not try to set xattr into ea_inode if value is empty
arch/alpha/mm/fault.c | 2 +-
arch/arc/mm/fault.c | 1 -
arch/arm/mm/fault.c | 3 -
arch/arm64/configs/euleros_defconfig | 1 +
arch/arm64/configs/hulk_defconfig | 1 +
arch/arm64/configs/openeuler_defconfig | 1 +
arch/arm64/configs/storage_ci_defconfig | 1 +
arch/arm64/configs/syzkaller_defconfig | 1 +
arch/arm64/kernel/crash_dump.c | 2 +
arch/arm64/kernel/head.S | 1 +
arch/arm64/kernel/module.lds | 6 +-
arch/arm64/mm/fault.c | 5 -
arch/hexagon/mm/vm_fault.c | 1 -
arch/ia64/mm/fault.c | 1 -
arch/m68k/mm/fault.c | 3 -
arch/microblaze/mm/fault.c | 1 -
arch/mips/mm/fault.c | 1 -
arch/nds32/mm/fault.c | 1 -
arch/nios2/mm/fault.c | 3 -
arch/openrisc/mm/fault.c | 1 -
arch/parisc/mm/fault.c | 4 +-
arch/powerpc/mm/fault.c | 6 -
arch/riscv/mm/fault.c | 5 -
arch/s390/mm/fault.c | 5 +-
arch/sh/mm/fault.c | 1 -
arch/sparc/mm/fault_32.c | 1 -
arch/sparc/mm/fault_64.c | 1 -
arch/um/kernel/trap.c | 1 -
arch/unicore32/mm/fault.c | 4 +-
arch/x86/configs/hulk_defconfig | 1 +
arch/x86/configs/openeuler_defconfig | 1 +
arch/x86/configs/storage_ci_defconfig | 1 +
arch/x86/configs/syzkaller_defconfig | 1 +
arch/x86/kvm/svm.c | 1 +
arch/x86/mm/fault.c | 2 -
arch/x86/mm/pat.c | 3 +-
arch/xtensa/mm/fault.c | 1 -
block/bfq-iosched.c | 1 +
block/bio.c | 2 +-
block/blk-merge.c | 11 +-
block/blk-mq.c | 4 +-
block/blk-settings.c | 12 +
block/genhd.c | 4 +-
drivers/acpi/property.c | 44 +-
drivers/block/xen-blkback/blkback.c | 2 +-
drivers/char/random.c | 2 +-
drivers/gpu/drm/ttm/ttm_bo_vm.c | 12 +-
.../infiniband/hw/hns/hns_roce_hw_sysfs_v2.c | 2 +-
drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 16 +-
drivers/md/dm-bufio.c | 4 +
drivers/md/dm-core.h | 4 +
drivers/md/dm-crypt.c | 1 +
drivers/md/dm-ioctl.c | 2 +-
drivers/md/dm-table.c | 174 ++-
drivers/md/dm-verity-target.c | 2 +-
drivers/md/dm.c | 60 +
drivers/media/v4l2-core/v4l2-ioctl.c | 19 +-
drivers/net/ethernet/freescale/gianfar.c | 15 +
drivers/net/ethernet/sun/sunvnet_common.c | 23 +-
drivers/net/gtp.c | 5 +-
drivers/nvme/host/rdma.c | 7 +-
drivers/pci/pci.c | 4 +
drivers/target/target_core_pr.c | 15 +-
drivers/target/target_core_transport.c | 15 +-
drivers/usb/usbip/stub_dev.c | 32 +-
fs/cifs/cifsfs.c | 2 +-
fs/cifs/transport.c | 7 +-
fs/ext4/xattr.c | 6 +-
fs/isofs/dir.c | 1 +
fs/isofs/namei.c | 1 +
fs/nfs/nfs3xdr.c | 3 +-
fs/ocfs2/cluster/heartbeat.c | 8 +-
fs/proc/task_mmu.c | 3 +
fs/select.c | 10 +-
fs/userfaultfd.c | 26 +-
fs/xfs/xfs_iops.c | 2 +-
include/linux/device-mapper.h | 5 +
include/linux/icmpv6.h | 48 +-
include/linux/ipv6.h | 2 +-
include/linux/mm.h | 46 +-
include/linux/mutex.h | 2 +-
include/linux/netfilter/x_tables.h | 7 +-
include/linux/rmap.h | 3 +-
include/linux/sched/mm.h | 3 +-
include/linux/sched/topology.h | 3 +
include/linux/swap.h | 12 +-
include/linux/thread_info.h | 13 +
include/linux/userfaultfd_k.h | 4 +
include/linux/vmalloc.h | 6 +-
include/net/icmp.h | 10 +
include/net/tcp.h | 11 +-
include/target/target_core_backend.h | 1 +
include/trace/events/mmflags.h | 7 +
include/uapi/asm-generic/mman.h | 4 +
include/uapi/linux/userfaultfd.h | 3 +
init/Kconfig | 15 +
kernel/futex.c | 3 +-
kernel/jump_label.c | 26 +-
kernel/module.c | 21 +-
kernel/printk/printk_safe.c | 16 +-
kernel/sched/core.c | 39 +-
kernel/sched/debug.c | 2 +-
kernel/sched/fair.c | 418 ++++++-
kernel/sched/features.h | 8 +
kernel/sched/sched.h | 28 +-
kernel/sched/sparsemask.h | 210 ++++
kernel/sched/stats.c | 15 +
kernel/sched/stats.h | 20 +
kernel/sched/topology.c | 141 ++-
kernel/time/alarmtimer.c | 2 +-
kernel/time/hrtimer.c | 62 +-
kernel/time/posix-cpu-timers.c | 2 +-
kernel/tracepoint.c | 80 +-
lib/logic_pio.c | 3 +
mm/Kconfig | 9 +
mm/filemap.c | 2 +-
mm/gup.c | 47 +-
mm/hugetlb.c | 38 +-
mm/internal.h | 6 +-
mm/memory.c | 35 +-
mm/mempolicy.c | 4 +-
mm/mmap.c | 207 ++++
mm/page_io.c | 11 +-
mm/slub.c | 14 +-
mm/swapfile.c | 2 +-
mm/userfaultfd.c | 26 +
mm/vmalloc.c | 1099 +++++++++++++----
net/core/skbuff.c | 14 +-
net/ipv4/icmp.c | 34 +
net/ipv4/netfilter/arp_tables.c | 16 +-
net/ipv4/netfilter/ip_tables.c | 16 +-
net/ipv4/tcp.c | 59 +-
net/ipv4/tcp_diag.c | 5 +-
net/ipv4/tcp_input.c | 6 +-
net/ipv4/tcp_ipv4.c | 23 +-
net/ipv4/tcp_minisocks.c | 4 +-
net/ipv4/tcp_output.c | 6 +-
net/ipv4/udp_offload.c | 2 +-
net/ipv6/icmp.c | 19 +-
net/ipv6/ip6_icmp.c | 46 +-
net/ipv6/netfilter/ip6_tables.c | 16 +-
net/ipv6/tcp_ipv6.c | 15 +-
net/netfilter/nf_conntrack_netlink.c | 1 +
net/netfilter/x_tables.c | 55 +-
net/qrtr/qrtr.c | 5 +
net/sunrpc/svc.c | 6 +-
net/xfrm/xfrm_interface.c | 6 +-
virt/kvm/kvm_main.c | 21 +-
148 files changed, 3024 insertions(+), 791 deletions(-)
create mode 100644 kernel/sched/sparsemask.h
--
2.25.1

[PATCH kernel-4.19 1/9] ext4: Fix unreport netlink message to userspace when fs abort
by Yang Yingliang 12 Apr '21
From: Ye Bin <yebin10(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: NA
CVE: NA
-----------------------------------------------
Fixes: 5aa03d66d1db ("ext4: make ext4_abort() use __ext4_error()")
Fixes: 12aed7b79111 ("ext4: report error to userspace by netlink")
Signed-off-by: Ye Bin <yebin10(a)huawei.com>
Reviewed-by: zhangyi (F) <yi.zhang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/ext4/super.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 3b2f7f7ea8cba..655ba77db225e 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -89,7 +89,6 @@ static void ext4_unregister_li_request(struct super_block *sb);
static void ext4_clear_request_list(void);
static struct inode *ext4_get_journal_inode(struct super_block *sb,
unsigned int journal_inum);
-static void ext4_netlink_send_info(struct super_block *sb, int ext4_errno);
static struct sock *ext4nl;
/*
@@ -590,7 +589,7 @@ static void ext4_handle_error(struct super_block *sb, bool force_ro, int error,
smp_wmb();
sb->s_flags |= SB_RDONLY;
out:
- ext4_netlink_send_info(sb, 1);
+ ext4_netlink_send_info(sb, force_ro ? 2 : 1);
}
static void flush_stashed_error_work(struct work_struct *work)
--
2.25.1

[PATCH kernel-4.19 01/74] net: fec: ptp: avoid register access when ipg clock is disabled
by Yang Yingliang 12 Apr '21
From: Heiko Thiery <heiko.thiery(a)gmail.com>
[ Upstream commit 6a4d7234ae9a3bb31181f348ade9bbdb55aeb5c5 ]
When accessing the timecounter register on an i.MX8MQ the kernel hangs,
but only when the interface is down. This can be reproduced by reading
with 'phc_ctrl eth0 get'.
As described in commit 91c0d987a9788dcc5fe26baafd73bf9242b68900, the ipg
clock is disabled when the interface is down, which leads to a system
hang. So check the ptp clock status before reading the timecounter
register.
Signed-off-by: Heiko Thiery <heiko.thiery(a)gmail.com>
Acked-by: Richard Cochran <richardcochran(a)gmail.com>
Link: https://lore.kernel.org/r/20210225211514.9115-1-heiko.thiery@gmail.com
Signed-off-by: Jakub Kicinski <kuba(a)kernel.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/net/ethernet/freescale/fec_ptp.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/drivers/net/ethernet/freescale/fec_ptp.c b/drivers/net/ethernet/freescale/fec_ptp.c
index 7e892b1cbd3de..09a762eb4f09e 100644
--- a/drivers/net/ethernet/freescale/fec_ptp.c
+++ b/drivers/net/ethernet/freescale/fec_ptp.c
@@ -382,9 +382,16 @@ static int fec_ptp_gettime(struct ptp_clock_info *ptp, struct timespec64 *ts)
u64 ns;
unsigned long flags;
+ mutex_lock(&adapter->ptp_clk_mutex);
+ /* Check the ptp clock */
+ if (!adapter->ptp_clk_on) {
+ mutex_unlock(&adapter->ptp_clk_mutex);
+ return -EINVAL;
+ }
spin_lock_irqsave(&adapter->tmreg_lock, flags);
ns = timecounter_read(&adapter->tc);
spin_unlock_irqrestore(&adapter->tmreg_lock, flags);
+ mutex_unlock(&adapter->ptp_clk_mutex);
*ts = ns_to_timespec64(ns);
--
2.25.1

[PATCH kernel-4.19 01/44] numa: Move the management structures for cdm nodes to ddr
by Yang Yingliang 12 Apr '21
From: Wang Wensheng <wangwensheng4(a)huawei.com>
ascend inclusion
category: feature
bugzilla: NA
CVE: NA
-------------------------------------------------
The cdm nodes are more likely to raise an ECC error, and the kernel may
crash if these essential structures are corrupted. So move the
management structures for hbm nodes to the ddr nodes of the same
partition to reduce the probability of kernel crashes.
Signed-off-by: Wang Wensheng <wangwensheng4(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/arm64/Kconfig | 10 ++++++++
arch/arm64/mm/numa.c | 54 +++++++++++++++++++++++++++++++++++++++-
include/linux/nodemask.h | 7 ++++++
mm/sparse.c | 8 +++---
4 files changed, 75 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index b7caf370a14b7..3848c062ea2c5 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1583,6 +1583,16 @@ config ASCEND_BOOT_CRASH_KERNEL
Usage:
1. add a node name:kexecmailbox to dts config.
2. after kexec run, set sysctl -w kernel.kexec_bios_start=1.
+
+config ASCEND_CLEAN_CDM
+ bool "move the management structure for HBM to DDR"
+ def_bool n
+ depends on COHERENT_DEVICE
+ help
+ The cdm nodes are sometimes more likely to raise an ECC error, and it
+ may crash the kernel if the essential structures are corrupted. So move
+ the management structures for hbm nodes to the ddr nodes of the same
+ partition to reduce the probability of kernel crashes.
endif
endmenu
diff --git a/arch/arm64/mm/numa.c b/arch/arm64/mm/numa.c
index a9d3ad5ee0cc3..a194bad6fdfcf 100644
--- a/arch/arm64/mm/numa.c
+++ b/arch/arm64/mm/numa.c
@@ -45,6 +45,57 @@ inline int arch_check_node_cdm(int nid)
return node_isset(nid, cdmmask);
}
+#ifdef CONFIG_ASCEND_CLEAN_CDM
+/**
+ * cdm_node_to_ddr_node - Convert the cdm node to the ddr node of the
+ * same partition.
+ * @nid: input node ID
+ *
+ * Here is a typical memory topology in usage.
+ * There are some DDR and HBM nodes in each partition; the DDRs come first,
+ * then all the HBMs of the first partition, then those of the second, etc.
+ *
+ * -------------------------
+ * | P0 | P1 |
+ * ----------- | -----------
+ * |node0 DDR| | |node1 DDR|
+ * |---------- | ----------|
+ * |node2 HBM| | |node4 HBM|
+ * |---------- | ----------|
+ * |node3 HBM| | |node5 HBM|
+ * ----------- | -----------
+ *
+ * Return:
+ * This function returns a ddr node which is of the same partion with the input
+ * node if the input node is a HBM node.
+ * The input nid is returned if it is a DDR node or if the memory topology of
+ * the system doesn't apply to the above model.
+ */
+int __init cdm_node_to_ddr_node(int nid)
+{
+ nodemask_t ddr_mask;
+ int nr_ddr, cdm_per_part, fake_nid;
+ int nr_cdm = nodes_weight(cdmmask);
+
+ if (!nr_cdm || nodes_empty(numa_nodes_parsed))
+ return nid;
+
+ if (!node_isset(nid, cdmmask))
+ return nid;
+
+ nodes_xor(ddr_mask, cdmmask, numa_nodes_parsed);
+ nr_ddr = nodes_weight(ddr_mask);
+ cdm_per_part = nr_cdm / nr_ddr ? : 1;
+
+ fake_nid = (nid - nr_ddr) / cdm_per_part;
+ fake_nid = !node_isset(fake_nid, cdmmask) ? fake_nid : nid;
+
+ pr_info("nid: %d, fake_nid: %d\n", nid, fake_nid);
+
+ return fake_nid;
+}
+#endif
+
static int __init cdm_nodes_setup(char *s)
{
int nid;
@@ -264,11 +315,12 @@ static void __init setup_node_data(int nid, u64 start_pfn, u64 end_pfn)
u64 nd_pa;
void *nd;
int tnid;
+ int fake_nid = cdm_node_to_ddr_node(nid);
if (start_pfn >= end_pfn)
pr_info("Initmem setup node %d [<memory-less node>]\n", nid);
- nd_pa = memblock_alloc_try_nid(nd_size, SMP_CACHE_BYTES, nid);
+ nd_pa = memblock_alloc_try_nid(nd_size, SMP_CACHE_BYTES, fake_nid);
nd = __va(nd_pa);
/* report and initialize */
diff --git a/include/linux/nodemask.h b/include/linux/nodemask.h
index 41fb047bdba80..7c0571b95ce4d 100644
--- a/include/linux/nodemask.h
+++ b/include/linux/nodemask.h
@@ -508,6 +508,12 @@ static inline int node_random(const nodemask_t *mask)
#ifdef CONFIG_COHERENT_DEVICE
extern int arch_check_node_cdm(int nid);
+#ifdef CONFIG_ASCEND_CLEAN_CDM
+extern int cdm_node_to_ddr_node(int nid);
+#else
+static inline int cdm_node_to_ddr_node(int nid) { return nid; }
+#endif
+
static inline nodemask_t system_mem_nodemask(void)
{
nodemask_t system_mem;
@@ -551,6 +557,7 @@ static inline void node_clear_state_cdm(int node)
#else
static inline int arch_check_node_cdm(int nid) { return 0; }
+static inline int cdm_node_to_ddr_node(int nid) { return nid; }
static inline nodemask_t system_mem_nodemask(void)
{
diff --git a/mm/sparse.c b/mm/sparse.c
index 9854aff6b4193..581982a376bdd 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -458,21 +458,23 @@ static void __init sparse_init_nid(int nid, unsigned long pnum_begin,
{
unsigned long pnum, usemap_longs, *usemap;
struct page *map;
+ int fake_nid = cdm_node_to_ddr_node(nid);
usemap_longs = BITS_TO_LONGS(SECTION_BLOCKFLAGS_BITS);
- usemap = sparse_early_usemaps_alloc_pgdat_section(NODE_DATA(nid),
+ usemap = sparse_early_usemaps_alloc_pgdat_section(NODE_DATA(fake_nid),
usemap_size() *
map_count);
if (!usemap) {
pr_err("%s: node[%d] usemap allocation failed", __func__, nid);
goto failed;
}
- sparse_buffer_init(map_count * section_map_size(), nid);
+
+ sparse_buffer_init(map_count * section_map_size(), fake_nid);
for_each_present_section_nr(pnum_begin, pnum) {
if (pnum >= pnum_end)
break;
- map = sparse_mem_map_populate(pnum, nid, NULL);
+ map = sparse_mem_map_populate(pnum, fake_nid, NULL);
if (!map) {
pr_err("%s: node[%d] memory map backing failed. Some memory will not be available.",
__func__, nid);
--
2.25.1
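
To make the mapping concrete, here is a small userspace sketch (my own
illustration) of the cdm_node_to_ddr_node() arithmetic for the topology
documented above: nodes 0-1 are DDR, nodes 2-5 are HBM, two HBM nodes
per partition. It omits the cdmmask re-check that the real function
performs on the computed nid.

/* Sketch of the fake-nid arithmetic from the patch above. */
#include <stdio.h>

static int cdm_to_ddr(int nid, int nr_ddr, int nr_cdm)
{
	int cdm_per_part = (nr_cdm / nr_ddr) ? (nr_cdm / nr_ddr) : 1;

	if (nid < nr_ddr)		/* DDR nodes map to themselves */
		return nid;
	return (nid - nr_ddr) / cdm_per_part;
}

int main(void)
{
	/* 2 DDR nodes (0, 1) and 4 HBM nodes (2-5), as in the comment:
	 * prints 0->0, 1->1, 2->0, 3->0, 4->1, 5->1.
	 */
	for (int nid = 0; nid < 6; nid++)
		printf("node %d -> management on node %d\n",
		       nid, cdm_to_ddr(nid, 2, 4));
	return 0;
}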

08 Apr '21
bugfix for openEuler 20.03 @20210408
Guoqing Jiang (4):
md: add new workqueue for delete rdev
md: don't flush workqueue unconditionally in md_open
md: flush md_rdev_misc_wq for HOT_ADD_DISK case
md: fix the checking of wrong work queue
Jan Kara (1):
ext4: fix timer use-after-free on failed mount
Junxiao Bi (2):
md: fix deadlock causing by sysfs_notify
md: get sysfs entry after redundancy attr group create
Lu Jialin (1):
config: Enable files cgroup on x86
Mauricio Faria de Oliveira (1):
loop: fix I/O error on fsync() in detached loop devices
Mike Christie (7):
scsi: libiscsi: Fix iscsi_prep_scsi_cmd_pdu() error handling
scsi: libiscsi: Drop taskqueuelock
scsi: libiscsi: Fix iscsi_task use after free()
scsi: libiscsi: Fix iSCSI host workq destruction
scsi: libiscsi: Add helper to calculate max SCSI cmds per session
scsi: iscsi_tcp: Fix shost can_queue initialization
scsi: libiscsi: Reset max/exp cmdsn during recovery
Oscar Salvador (1):
mm,hwpoison: return -EBUSY when migration fails
Shijie Luo (1):
ext4: fix potential error in ext4_do_update_inode
Theodore Ts'o (1):
ext4: don't leak old mountpoint samples
Vlastimil Babka (1):
mm, sl[aou]b: guarantee natural alignment for kmalloc(power-of-two)
Wu Bo (1):
scsi: libiscsi: Fix error count for active session
Yang Yingliang (1):
mm: slub: avoid wrong report about corrupted freelist
Ye Bin (1):
ext4: Fix unreport netlink message to userspace when fs abort
Zhao Heming (1):
md/bitmap: fix memory leak of temporary bitmap
yangerkun (1):
scsi: libiscsi: convert change of struct iscsi_conn to fix KABI
arch/x86/configs/hulk_defconfig | 1 +
arch/x86/configs/openeuler_defconfig | 2 +-
drivers/block/loop.c | 3 +
drivers/md/md-bitmap.c | 5 +-
drivers/md/md.c | 89 +++++--
drivers/md/md.h | 8 +-
drivers/md/raid10.c | 2 +-
drivers/md/raid5.c | 6 +-
drivers/scsi/bnx2i/bnx2i_iscsi.c | 2 -
drivers/scsi/iscsi_tcp.c | 9 +-
drivers/scsi/libiscsi.c | 347 +++++++++++++++++----------
drivers/scsi/libiscsi_tcp.c | 86 ++++---
fs/ext4/file.c | 2 +-
fs/ext4/inode.c | 8 +-
fs/ext4/super.c | 5 +-
include/linux/slab.h | 4 +
include/scsi/libiscsi.h | 6 +-
mm/memory-failure.c | 8 +-
mm/slab_common.c | 11 +-
mm/slob.c | 41 +++-
mm/slub.c | 12 +-
21 files changed, 435 insertions(+), 222 deletions(-)
--
2.25.1
Adds Zhaoxin CPU support
LeoLiu-oc (33):
x86/cpu: Create Zhaoxin processors architecture support file
x86/cpu: Remove redundant cpu_detect_cache_sizes() call
x86/cpu/centaur: Replace two-condition switch-case with an if
statement
x86/cpu/centaur: Add Centaur family >=7 CPUs initialization support
x86/cpufeatures: Add Zhaoxin feature bits
x86/cpu: Add detect extended topology for Zhaoxin CPUs
ACPI, x86: Add Zhaoxin processors support for NONSTOP TSC
x86/power: Optimize C3 entry on Centaur CPUs
x86/acpi/cstate: Add Zhaoxin processors support for cache flush policy
in C3
x86/mce: Add Zhaoxin MCE support
x86/mce: Add Zhaoxin CMCI support
x86/mce: Add Zhaoxin LMCE support
x86/speculation/spectre_v2: Exclude Zhaoxin CPUs from SPECTRE_V2
x86/speculation/swapgs: Exclude Zhaoxin CPUs from SWAPGS vulnerability
crypto: x86/crc32c-intel - Don't match some Zhaoxin CPUs
x86/perf: Add hardware performance events support for Zhaoxin CPU.
PCI: Add Zhaoxin Vendor ID
ata: sata_zhaoxin: Add support for Zhaoxin Serial ATA
xhci: Add Zhaoxin xHCI LPM U1/U2 feature support
PCI: Add ACS quirk for Zhaoxin multi-function devices
PCI: Add ACS quirk for Zhaoxin Root/Downstream Ports
xhci: fix issue of cross page boundary in TRB prefetch
xhci: Show Zhaoxin XHCI root hub speed correctly
ALSA: hda: Add support of Zhaoxin SB HDAC
ALSA: hda: Add support of Zhaoxin NB HDAC
ALSA: hda: Add support of Zhaoxin NB HDAC codec
xhci: Adjust the UHCI Controllers bit value
xhci: fix issue with resume from system Sx state
x86/apic: Mask IOAPIC entries when disabling the local APIC
USB:Fix kernel NULL pointer when unbind UHCI form vfio-pci
iommu/vt-d:Add support for detecting ACPI device in RMRR
x86/Kconfig: Rename UMIP config parameter
x86/Kconfig: Drop vendor dependency for X86_UMIP
MAINTAINERS | 6 +
arch/x86/Kconfig | 15 +-
arch/x86/Kconfig.cpu | 13 +
arch/x86/crypto/crc32c-intel_glue.c | 7 +
arch/x86/events/Makefile | 2 +
arch/x86/events/core.c | 4 +
arch/x86/events/perf_event.h | 14 +-
arch/x86/events/zhaoxin/Makefile | 3 +
arch/x86/events/zhaoxin/core.c | 612 +++++++++
arch/x86/events/zhaoxin/uncore.c | 1101 +++++++++++++++++
arch/x86/events/zhaoxin/uncore.h | 308 +++++
arch/x86/include/asm/cpufeatures.h | 21 +
arch/x86/include/asm/disabled-features.h | 2 +-
arch/x86/include/asm/processor.h | 3 +-
arch/x86/include/asm/umip.h | 4 +-
arch/x86/kernel/Makefile | 2 +-
arch/x86/kernel/acpi/cstate.c | 27 +
arch/x86/kernel/apic/apic.c | 7 +
arch/x86/kernel/cpu/Makefile | 1 +
arch/x86/kernel/cpu/centaur.c | 47 +-
arch/x86/kernel/cpu/common.c | 9 +-
arch/x86/kernel/cpu/mce/core.c | 97 +-
arch/x86/kernel/cpu/mce/intel.c | 11 +-
arch/x86/kernel/cpu/mce/internal.h | 6 +
arch/x86/kernel/cpu/perfctr-watchdog.c | 8 +
arch/x86/kernel/cpu/zhaoxin.c | 170 +++
drivers/acpi/acpi_pad.c | 1 +
drivers/acpi/processor_idle.c | 1 +
drivers/ata/Kconfig | 8 +
drivers/ata/Makefile | 1 +
drivers/ata/sata_zhaoxin.c | 384 ++++++
drivers/iommu/dmar.c | 75 +-
drivers/iommu/intel-iommu.c | 24 +-
drivers/pci/quirks.c | 31 +
drivers/usb/core/hcd-pci.c | 10 +
drivers/usb/host/uhci-pci.c | 3 +
drivers/usb/host/xhci-mem.c | 11 +-
drivers/usb/host/xhci-pci.c | 12 +
drivers/usb/host/xhci.c | 53 +-
drivers/usb/host/xhci.h | 2 +
include/linux/dmar.h | 11 +-
include/linux/pci_ids.h | 2 +
sound/pci/hda/hda_controller.c | 17 +-
sound/pci/hda/hda_controller.h | 2 +
sound/pci/hda/hda_intel.c | 68 +-
sound/pci/hda/patch_hdmi.c | 26 +
.../arch/x86/include/asm/disabled-features.h | 2 +-
47 files changed, 3143 insertions(+), 101 deletions(-)
create mode 100644 arch/x86/events/zhaoxin/Makefile
create mode 100644 arch/x86/events/zhaoxin/core.c
create mode 100644 arch/x86/events/zhaoxin/uncore.c
create mode 100644 arch/x86/events/zhaoxin/uncore.h
create mode 100644 arch/x86/kernel/cpu/zhaoxin.c
create mode 100644 drivers/ata/sata_zhaoxin.c
--
2.25.1

[PATCH openEuler-1.0-LTS] nvme-fabrics: fix kabi broken due to adding fields in struct nvme_ctrl
by Cheng Jian 07 Apr '21
From: Chen Zhou <chenzhou10(a)huawei.com>
driver inclusion
category: bugfix
bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=18
CVE: NA
-------------------------------------------------
Commit 7adf22ccb313 ("nvme-fabrics: reject I/O to offline device")
broke KABI, so revert it.
Signed-off-by: Chen Zhou <chenzhou10(a)huawei.com>
Reviewed-by: Chao Leng <lengchao(a)huawei.com>
Signed-off-by: Cheng Jian <cj.chengjian(a)huawei.com>
---
drivers/nvme/host/core.c | 51 ++---------------------------------
drivers/nvme/host/fabrics.c | 28 +++----------------
drivers/nvme/host/fabrics.h | 5 ----
drivers/nvme/host/multipath.c | 2 --
drivers/nvme/host/nvme.h | 3 ---
5 files changed, 6 insertions(+), 83 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 779306e640e9..8404d3275ce0 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -129,37 +129,6 @@ static void nvme_queue_scan(struct nvme_ctrl *ctrl)
queue_work(nvme_wq, &ctrl->scan_work);
}
-static void nvme_failfast_work(struct work_struct *work)
-{
- struct nvme_ctrl *ctrl = container_of(to_delayed_work(work),
- struct nvme_ctrl, failfast_work);
-
- if (ctrl->state != NVME_CTRL_CONNECTING)
- return;
-
- set_bit(NVME_CTRL_FAILFAST_EXPIRED, &ctrl->flags);
- dev_info(ctrl->device, "failfast expired\n");
- nvme_kick_requeue_lists(ctrl);
-}
-
-static inline void nvme_start_failfast_work(struct nvme_ctrl *ctrl)
-{
- if (!ctrl->opts || ctrl->opts->fast_io_fail_tmo == -1)
- return;
-
- schedule_delayed_work(&ctrl->failfast_work,
- ctrl->opts->fast_io_fail_tmo * HZ);
-}
-
-static inline void nvme_stop_failfast_work(struct nvme_ctrl *ctrl)
-{
- if (!ctrl->opts)
- return;
-
- cancel_delayed_work_sync(&ctrl->failfast_work);
- clear_bit(NVME_CTRL_FAILFAST_EXPIRED, &ctrl->flags);
-}
-
int nvme_reset_ctrl(struct nvme_ctrl *ctrl)
{
if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING))
@@ -415,21 +384,8 @@ bool nvme_change_ctrl_state(struct nvme_ctrl *ctrl,
ctrl->state = new_state;
spin_unlock_irqrestore(&ctrl->lock, flags);
- if (changed) {
- switch (ctrl->state) {
- case NVME_CTRL_LIVE:
- if (old_state == NVME_CTRL_CONNECTING)
- nvme_stop_failfast_work(ctrl);
- nvme_kick_requeue_lists(ctrl);
- break;
- case NVME_CTRL_CONNECTING:
- if (old_state == NVME_CTRL_RESETTING)
- nvme_start_failfast_work(ctrl);
- break;
- default:
- break;
- }
- }
+ if (changed && ctrl->state == NVME_CTRL_LIVE)
+ nvme_kick_requeue_lists(ctrl);
return changed;
}
EXPORT_SYMBOL_GPL(nvme_change_ctrl_state);
@@ -3711,7 +3667,6 @@ void nvme_stop_ctrl(struct nvme_ctrl *ctrl)
{
nvme_mpath_stop(ctrl);
nvme_stop_keep_alive(ctrl);
- nvme_stop_failfast_work(ctrl);
flush_work(&ctrl->async_event_work);
cancel_work_sync(&ctrl->fw_act_work);
if (ctrl->ops->stop_ctrl)
@@ -3776,7 +3731,6 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
int ret;
ctrl->state = NVME_CTRL_NEW;
- clear_bit(NVME_CTRL_FAILFAST_EXPIRED, &ctrl->flags);
spin_lock_init(&ctrl->lock);
mutex_init(&ctrl->scan_lock);
INIT_LIST_HEAD(&ctrl->namespaces);
@@ -3792,7 +3746,6 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
INIT_DELAYED_WORK(&ctrl->ka_work, nvme_keep_alive_work);
memset(&ctrl->ka_cmd, 0, sizeof(ctrl->ka_cmd));
ctrl->ka_cmd.common.opcode = nvme_admin_keep_alive;
- INIT_DELAYED_WORK(&ctrl->failfast_work, nvme_failfast_work);
BUILD_BUG_ON(NVME_DSM_MAX_RANGES * sizeof(struct nvme_dsm_range) >
PAGE_SIZE);
diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index 650b3bd89968..738794af3f38 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -550,7 +550,6 @@ blk_status_t nvmf_fail_nonready_command(struct nvme_ctrl *ctrl,
{
if (ctrl->state != NVME_CTRL_DELETING &&
ctrl->state != NVME_CTRL_DEAD &&
- !test_bit(NVME_CTRL_FAILFAST_EXPIRED, &ctrl->flags) &&
!blk_noretry_request(rq) && !(rq->cmd_flags & REQ_NVME_MPATH))
return BLK_STS_RESOURCE;
@@ -608,7 +607,6 @@ static const match_table_t opt_tokens = {
{ NVMF_OPT_HOST_TRADDR, "host_traddr=%s" },
{ NVMF_OPT_HOST_ID, "hostid=%s" },
{ NVMF_OPT_DUP_CONNECT, "duplicate_connect" },
- { NVMF_OPT_FAIL_FAST_TMO, "fast_io_fail_tmo=%d" },
{ NVMF_OPT_ERR, NULL }
};
@@ -628,7 +626,6 @@ static int nvmf_parse_options(struct nvmf_ctrl_options *opts,
opts->reconnect_delay = NVMF_DEF_RECONNECT_DELAY;
opts->kato = NVME_DEFAULT_KATO;
opts->duplicate_connect = false;
- opts->fast_io_fail_tmo = NVMF_DEF_FAIL_FAST_TMO;
options = o = kstrdup(buf, GFP_KERNEL);
if (!options)
@@ -753,17 +750,6 @@ static int nvmf_parse_options(struct nvmf_ctrl_options *opts,
pr_warn("ctrl_loss_tmo < 0 will reconnect forever\n");
ctrl_loss_tmo = token;
break;
- case NVMF_OPT_FAIL_FAST_TMO:
- if (match_int(args, &token)) {
- ret = -EINVAL;
- goto out;
- }
-
- if (token >= 0)
- pr_warn("I/O will fail on after %d sec reconnect\n",
- token);
- opts->fast_io_fail_tmo = token;
- break;
case NVMF_OPT_HOSTNQN:
if (opts->host) {
pr_err("hostnqn already user-assigned: %s\n",
@@ -844,17 +830,11 @@ static int nvmf_parse_options(struct nvmf_ctrl_options *opts,
opts->nr_io_queues = 0;
opts->duplicate_connect = true;
}
-
- if (ctrl_loss_tmo < 0) {
+ if (ctrl_loss_tmo < 0)
opts->max_reconnects = -1;
- } else {
+ else
opts->max_reconnects = DIV_ROUND_UP(ctrl_loss_tmo,
opts->reconnect_delay);
- if (ctrl_loss_tmo < opts->fast_io_fail_tmo)
- pr_warn("failfast tmo (%d) > ctrl_loss_tmo (%d)\n",
- opts->fast_io_fail_tmo,
- ctrl_loss_tmo);
- }
if (!opts->host) {
kref_get(&nvmf_default_host->ref);
@@ -923,8 +903,8 @@ EXPORT_SYMBOL_GPL(nvmf_free_options);
#define NVMF_REQUIRED_OPTS (NVMF_OPT_TRANSPORT | NVMF_OPT_NQN)
#define NVMF_ALLOWED_OPTS (NVMF_OPT_QUEUE_SIZE | NVMF_OPT_NR_IO_QUEUES | \
NVMF_OPT_KATO | NVMF_OPT_HOSTNQN | \
- NVMF_OPT_HOST_ID | NVMF_OPT_DUP_CONNECT |\
- NVMF_OPT_FAIL_FAST_TMO)
+ NVMF_OPT_HOST_ID | NVMF_OPT_DUP_CONNECT)
+
static struct nvme_ctrl *
nvmf_create_ctrl(struct device *dev, const char *buf, size_t count)
{
diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
index a7a3100714b1..188ebbeec32c 100644
--- a/drivers/nvme/host/fabrics.h
+++ b/drivers/nvme/host/fabrics.h
@@ -24,8 +24,6 @@
/* default to 600 seconds of reconnect attempts before giving up */
#define NVMF_DEF_CTRL_LOSS_TMO 600
#define NVMF_DEF_RECONNECT_FOREVER -1
-/* set default fail fast timeout to 150s */
-#define NVMF_DEF_FAIL_FAST_TMO 150
/*
* Define a host as seen by the target. We allocate one at boot, but also
@@ -61,7 +59,6 @@ enum {
NVMF_OPT_CTRL_LOSS_TMO = 1 << 11,
NVMF_OPT_HOST_ID = 1 << 12,
NVMF_OPT_DUP_CONNECT = 1 << 13,
- NVMF_OPT_FAIL_FAST_TMO = 1 << 20,
};
/**
@@ -89,7 +86,6 @@ enum {
* @max_reconnects: maximum number of allowed reconnect attempts before removing
* the controller, (-1) means reconnect forever, zero means remove
* immediately;
- * @fast_io_fail_tmo: Fast I/O fail timeout in seconds
*/
struct nvmf_ctrl_options {
unsigned mask;
@@ -106,7 +102,6 @@ struct nvmf_ctrl_options {
unsigned int kato;
struct nvmf_host *host;
int max_reconnects;
- int fast_io_fail_tmo;
};
/*
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index e13ff4dfa3df..bde7d0a61269 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -198,8 +198,6 @@ static bool nvme_available_path(struct nvme_ns_head *head)
struct nvme_ns *ns;
list_for_each_entry_rcu(ns, &head->list, siblings) {
- if (test_bit(NVME_CTRL_FAILFAST_EXPIRED, &ns->ctrl->flags))
- continue;
switch (ns->ctrl->state) {
case NVME_CTRL_LIVE:
case NVME_CTRL_RESETTING:
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 4617168aa73f..366bbb8c35b5 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -204,7 +204,6 @@ struct nvme_ctrl {
struct work_struct scan_work;
struct work_struct async_event_work;
struct delayed_work ka_work;
- struct delayed_work failfast_work;
struct nvme_command ka_cmd;
struct work_struct fw_act_work;
unsigned long events;
@@ -239,8 +238,6 @@ struct nvme_ctrl {
u16 icdoff;
u16 maxcmd;
int nr_reconnects;
- unsigned long flags;
-#define NVME_CTRL_FAILFAST_EXPIRED 0
struct nvmf_ctrl_options *opts;
struct page *discard_page;
--
2.25.1
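
For readers unfamiliar with why the added fields had to go: inserting a
member into a shipped struct shifts the offsets of every later member,
so out-of-tree modules built against the old layout read the wrong
memory. A tiny sketch (illustrative structs, not the real nvme_ctrl,
though the member order mirrors the diff above):

/* Inserting 'flags' shifts the offset of 'opts'. */
#include <stddef.h>
#include <stdio.h>

struct old_ctrl { int nr_reconnects; void *opts; };
struct new_ctrl { int nr_reconnects; unsigned long flags; void *opts; };

int main(void)
{
	printf("old offsetof(opts) = %zu\n", offsetof(struct old_ctrl, opts));
	printf("new offsetof(opts) = %zu\n", offsetof(struct new_ctrl, opts));
	return 0;
}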

07 Apr '21
1. xhci fixes for USB on openEuler 20.03
2. preparation for adding Zhaoxin CPU support
One of these patches is required for the Zhaoxin CPU patches.
Mathias Nyman (4):
xhci: Force Maximum Packet size for Full-speed bulk devices to valid
range.
xhci: fix runtime pm enabling for quirky Intel hosts
xhci: Fix memory leak when caching protocol extended capability PSI
tables - take 2
xhci: apply XHCI_PME_STUCK_QUIRK to Intel Comet Lake platforms
drivers/usb/host/xhci-hub.c | 25 ++++++++-----
drivers/usb/host/xhci-mem.c | 71 ++++++++++++++++++++++++-------------
drivers/usb/host/xhci-pci.c | 10 +++---
drivers/usb/host/xhci.h | 16 +++++++--
4 files changed, 82 insertions(+), 40 deletions(-)
--
2.25.1
From: chenjiajun <chenjiajun8(a)huawei.com>
virt inclusion
category: feature
bugzilla: 46853
CVE: NA
Some improvements for vcpu_stat:
1. Add fastpath kvm exit handling to vcpu_stat, to avoid large
deviations between the statistics and the actual workload.
2. Export preemption timer kvm exits to the statistics.
--
ChangeList:
v2:
move preemption timer exits to the end of vcpu_stat
v1:
kvm: debugfs: some improvement for vcpu_stat
chenjiajun (2):
kvm: debugfs: add fastpath msr_wr exits to debugfs statistics
kvm: debugfs: add EXIT_REASON_PREEMPTION_TIMER to vcpu_stat
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/vmx/vmx.c | 3 +++
arch/x86/kvm/x86.c | 1 +
3 files changed, 5 insertions(+)
--
2.29.GIT
1
2
# 1 Background
-------
openEuler 20.03 LTS uses kernel-4.19 as the development branch for
patch adaptation.
The Zhaoxin patches were developed against kernel-4.19; while adapting
them to openEuler-2.0-LTS (20.03) I ran into some conflicts and build
errors.
The conflicts were simple and I have already resolved them; I can post
the result later for everyone to review.
The build errors, however, need a joint look.
# 2 Problem description
-------
Applying the Zhaoxin patches to openEuler hits the following build error:
# 3 Analysis
-------
The patch that causes the build error:
[PATCH kernel-4.19 v3] xhci: Show Zhaoxin XHCI root hub speed correctly
The cause of the build error:
the following upstream patches were merged into kernel-4.19 but not
into the openEuler-1.0-LTS branch
git log -p kernel-4.19 4e0f891374d8
These patches come from a patchset merged into the Linux mainline in
5.6-rc3:
xhci: Fix memory leak when caching protocol extended capability PSI
tables - take 2
https://patchwork.kernel.org/project/linux-usb/cover/20200210134553.9144-1-…
The key patch among them is
xhci: Fix memory leak when caching protocol extended capability PSI
tables - take 2
# 4 Decision needed
-------
There are currently two candidate solutions; please discuss which is
better.
1. The dependency patches are a set of bugfixes already merged into
kernel-4.19; merge them directly into openEuler 20.03.
Drawback: the impact of the dependency patches must be reviewed, and
there may be further dependencies. I have already adapted the patches;
if we choose this option I will post them for review. Also, upstream
merged the whole patchset of four patches into linux-4.19 stable over
time. Should 20.03 take all four, or only the single patch that is
actually depended on? The four patches appear to be independent
bugfixes with no interdependencies.
2. Zhaoxin reportedly also has a patch set that does not need the
dependency patches; should that set be merged for openEuler 20.03
instead?
Drawback: a larger maintenance burden, since two branches and two patch
sets would have to be maintained in parallel.
Adds Zhaoxin CPU support
LeoLiu-oc (33):
x86/cpu: Create Zhaoxin processors architecture support file
x86/cpu: Remove redundant cpu_detect_cache_sizes() call
x86/cpu/centaur: Replace two-condition switch-case with an if
statement
x86/cpu/centaur: Add Centaur family >=7 CPUs initialization support
x86/cpufeatures: Add Zhaoxin feature bits
x86/cpu: Add detect extended topology for Zhaoxin CPUs
ACPI, x86: Add Zhaoxin processors support for NONSTOP TSC
x86/power: Optimize C3 entry on Centaur CPUs
x86/acpi/cstate: Add Zhaoxin processors support for cache flush policy
in C3
x86/mce: Add Zhaoxin MCE support
x86/mce: Add Zhaoxin CMCI support
x86/mce: Add Zhaoxin LMCE support
x86/speculation/spectre_v2: Exclude Zhaoxin CPUs from SPECTRE_V2
x86/speculation/swapgs: Exclude Zhaoxin CPUs from SWAPGS vulnerability
crypto: x86/crc32c-intel - Don't match some Zhaoxin CPUs
x86/perf: Add hardware performance events support for Zhaoxin CPU.
PCI: Add Zhaoxin Vendor ID
ata: sata_zhaoxin: Add support for Zhaoxin Serial ATA
xhci: Add Zhaoxin xHCI LPM U1/U2 feature support
PCI: Add ACS quirk for Zhaoxin multi-function devices
PCI: Add ACS quirk for Zhaoxin Root/Downstream Ports
xhci: fix issue of cross page boundary in TRB prefetch
xhci: Show Zhaoxin XHCI root hub speed correctly
ALSA: hda: Add support of Zhaoxin SB HDAC
ALSA: hda: Add support of Zhaoxin NB HDAC
ALSA: hda: Add support of Zhaoxin NB HDAC codec
xhci: Adjust the UHCI Controllers bit value
xhci: fix issue with resume from system Sx state
x86/apic: Mask IOAPIC entries when disabling the local APIC
USB:Fix kernel NULL pointer when unbind UHCI form vfio-pci
iommu/vt-d:Add support for detecting ACPI device in RMRR
x86/Kconfig: Rename UMIP config parameter
x86/Kconfig: Drop vendor dependency for X86_UMIP
MAINTAINERS | 6 +
arch/x86/Kconfig | 15 +-
arch/x86/Kconfig.cpu | 13 +
arch/x86/crypto/crc32c-intel_glue.c | 7 +
arch/x86/events/Makefile | 2 +
arch/x86/events/core.c | 4 +
arch/x86/events/perf_event.h | 14 +-
arch/x86/events/zhaoxin/Makefile | 3 +
arch/x86/events/zhaoxin/core.c | 612 +++++++++
arch/x86/events/zhaoxin/uncore.c | 1101 +++++++++++++++++
arch/x86/events/zhaoxin/uncore.h | 308 +++++
arch/x86/include/asm/cpufeatures.h | 21 +
arch/x86/include/asm/disabled-features.h | 2 +-
arch/x86/include/asm/processor.h | 3 +-
arch/x86/include/asm/umip.h | 4 +-
arch/x86/kernel/Makefile | 2 +-
arch/x86/kernel/acpi/cstate.c | 27 +
arch/x86/kernel/apic/apic.c | 7 +
arch/x86/kernel/cpu/Makefile | 1 +
arch/x86/kernel/cpu/centaur.c | 47 +-
arch/x86/kernel/cpu/common.c | 9 +-
arch/x86/kernel/cpu/mce/core.c | 97 +-
arch/x86/kernel/cpu/mce/intel.c | 11 +-
arch/x86/kernel/cpu/mce/internal.h | 6 +
arch/x86/kernel/cpu/perfctr-watchdog.c | 8 +
arch/x86/kernel/cpu/zhaoxin.c | 170 +++
drivers/acpi/acpi_pad.c | 1 +
drivers/acpi/processor_idle.c | 1 +
drivers/ata/Kconfig | 8 +
drivers/ata/Makefile | 1 +
drivers/ata/sata_zhaoxin.c | 384 ++++++
drivers/iommu/dmar.c | 75 +-
drivers/iommu/intel-iommu.c | 24 +-
drivers/pci/quirks.c | 31 +
drivers/usb/core/hcd-pci.c | 10 +
drivers/usb/host/uhci-pci.c | 3 +
drivers/usb/host/xhci-mem.c | 11 +-
drivers/usb/host/xhci-pci.c | 12 +
drivers/usb/host/xhci.c | 53 +-
drivers/usb/host/xhci.h | 2 +
include/linux/dmar.h | 11 +-
include/linux/pci_ids.h | 2 +
sound/pci/hda/hda_controller.c | 17 +-
sound/pci/hda/hda_controller.h | 2 +
sound/pci/hda/hda_intel.c | 68 +-
sound/pci/hda/patch_hdmi.c | 26 +
.../arch/x86/include/asm/disabled-features.h | 2 +-
47 files changed, 3143 insertions(+), 101 deletions(-)
create mode 100644 arch/x86/events/zhaoxin/Makefile
create mode 100644 arch/x86/events/zhaoxin/core.c
create mode 100644 arch/x86/events/zhaoxin/uncore.c
create mode 100644 arch/x86/events/zhaoxin/uncore.h
create mode 100644 arch/x86/kernel/cpu/zhaoxin.c
create mode 100644 drivers/ata/sata_zhaoxin.c
--
2.25.1
openEuler x86 config update
Cheng Jian (3):
x86/config: enable files cgroup
x86/config: enable pagecache feature
x86/config: enable security_path feature
arch/x86/configs/openeuler_defconfig | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
--
2.25.1
Some Zhaoxin xHCI controllers follow the USB 3.1 spec but only support
the Gen1 speed of 5 Gbps. In the Linux kernel, however, if an xHCI
controller supports USB 3.1, the root hub speed is shown as 10 Gbps.
To fix this issue, read the USB speed IDs supported by the xHCI
controller to determine the root hub speed.
The patch is scheduled to be submitted to the kernel mainline in 2021.
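For reference, and based on the xHCI specification's default speed ID
mappings rather than anything stated in this mail: PSIV 4 is the 5 Gbps Gen1
speed, and PSIV 5 and above are used for the 10 Gbps Gen2 speeds, which is
why the quirk below treats any port speed ID >= 5 as evidence of Gen2 support.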
---
v2->v3
- Fix a code logic issue.
v1->v2:
- Use quirks instead of vendor id.
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
drivers/usb/host/xhci.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
index fad995b5635e..d1c87a2a8e06 100644
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -5079,6 +5079,7 @@ int xhci_gen_setup(struct usb_hcd *hcd, xhci_get_quirks_t get_quirks)
*/
struct device *dev = hcd->self.sysdev;
unsigned int minor_rev;
+ u8 i, j;
int retval;
/* Accept arbitrarily long scatter-gather lists */
@@ -5133,6 +5134,24 @@ int xhci_gen_setup(struct usb_hcd *hcd, xhci_get_quirks_t get_quirks)
hcd->self.root_hub->speed = USB_SPEED_SUPER_PLUS;
break;
}
+
+ /* USB 3.1 has Gen1 and Gen2; some Zhaoxin xHCI controllers
+ * follow the USB 3.1 spec but only support Gen1
+ */
+ if (xhci->quirks & XHCI_ZHAOXIN_HOST) {
+ minor_rev = 0;
+ for (j = 0; j < xhci->num_port_caps; j++) {
+ for (i = 0; i < xhci->port_caps[j].psi_count; i++) {
+ if (XHCI_EXT_PORT_PSIV(xhci->port_caps[j].psi[i]) >= 5)
+ minor_rev = 1;
+ }
+ if (minor_rev != 1) {
+ hcd->speed = HCD_USB3;
+ hcd->self.root_hub->speed = USB_SPEED_SUPER;
+ }
+ }
+ }
+
xhci_info(xhci, "Host supports USB 3.%x %sSuperSpeed\n",
minor_rev,
minor_rev ? "Enhanced " : "");
--
2.20.1
[PATCH kernel-4.19 v3] xhci: fix issue of cross page boundary in TRB prefetch
by LeoLiuoc 01 Apr '21
On some Zhaoxin platforms, the xHCI controller prefetches TRBs for
performance. This TRB prefetch mechanism may cross a page boundary and
access memory that does not belong to the xHCI controller. To fix this
issue, allocate two pages per TRB segment and use only the first page.
The patch is scheduled to be submitted to the kernel mainline in 2021.
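A sketch of the arithmetic behind the fix, assuming the usual 4.19 values
(TRBS_PER_SEGMENT = 256, 16 bytes per TRB):
/*
 * TRB_SEGMENT_SIZE = 256 * 16 = 4096 bytes = one 4K page
 * pool block size and alignment = 2 * 4096 = 8192, page aligned
 * => every ring segment starts on a page boundary, and the page after
 *    it belongs to the same allocation, so a prefetch running past the
 *    end of a segment still lands in xHCI-owned memory.
 */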
---
v2->v3:
- Fix a code logic issue.
v1->v2:
- Use quirks instead of vendor id.
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
drivers/usb/host/xhci-mem.c | 11 +++++++++--
drivers/usb/host/xhci-pci.c | 5 +++++
drivers/usb/host/xhci.h | 1 +
3 files changed, 15 insertions(+), 2 deletions(-)
diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
index 9e87c282a743..a6101f095db8 100644
--- a/drivers/usb/host/xhci-mem.c
+++ b/drivers/usb/host/xhci-mem.c
@@ -2450,8 +2450,15 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
* and our use of dma addresses in the trb_address_map radix tree needs
* TRB_SEGMENT_SIZE alignment, so we pick the greater alignment need.
*/
- xhci->segment_pool = dma_pool_create("xHCI ring segments", dev,
- TRB_SEGMENT_SIZE, TRB_SEGMENT_SIZE, xhci->page_size);
+ /* With xHCI TRB prefetch patch: to fix cross page boundary access
+ * issue in IOV environment */
+ if (xhci->quirks & XHCI_ZHAOXIN_TRB_FETCH) {
+ xhci->segment_pool = dma_pool_create("xHCI ring segments", dev,
+ TRB_SEGMENT_SIZE*2, TRB_SEGMENT_SIZE*2, xhci->page_size*2);
+ } else {
+ xhci->segment_pool = dma_pool_create("xHCI ring segments", dev,
+ TRB_SEGMENT_SIZE, TRB_SEGMENT_SIZE, xhci->page_size);
+ }
/* See Table 46 and Note on Figure 55 */
xhci->device_pool = dma_pool_create("xHCI input/output contexts", dev,
diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
index 6c6b29901c5e..798b660f2fd0 100644
--- a/drivers/usb/host/xhci-pci.c
+++ b/drivers/usb/host/xhci-pci.c
@@ -237,6 +237,11 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
pdev->device == 0x3432)
xhci->quirks |= XHCI_BROKEN_STREAMS;
+ if (pdev->vendor == PCI_VENDOR_ID_ZHAOXIN &&
+ (pdev->device == 0x9202 ||
+ pdev->device == 0x9203))
+ xhci->quirks |= XHCI_ZHAOXIN_TRB_FETCH;
+
if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
pdev->device == PCI_DEVICE_ID_ASMEDIA_1042_XHCI)
xhci->quirks |= XHCI_BROKEN_STREAMS;
diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
index 069390a1f2ac..3ae8e25a2622 100644
--- a/drivers/usb/host/xhci.h
+++ b/drivers/usb/host/xhci.h
@@ -1874,6 +1874,7 @@ struct xhci_hcd {
#define XHCI_SNPS_BROKEN_SUSPEND BIT_ULL(35)
#define XHCI_ZHAOXIN_HOST BIT_ULL(36)
#define XHCI_DISABLE_SPARSE BIT_ULL(38)
+#define XHCI_ZHAOXIN_TRB_FETCH BIT_ULL(39)
unsigned int num_active_eps;
unsigned int limit_active_eps;
--
2.20.1
This set of patches adds support for Zhaoxin Family 7 CPUs.
With these patches, the kernel can identify Zhaoxin CPU features
and Zhaoxin CPU topology information.
LeoLiu-oc (6):
x86/cpu: Create Zhaoxin processors architecture support file
x86/cpu: Remove redundant cpu_detect_cache_sizes() call
x86/cpu/centaur: Replace two-condition switch-case with an if
statement
x86/cpu/centaur: Add Centaur family >=7 CPUs initialization support
x86/cpufeatures: Add Zhaoxin feature bits
x86/cpu: Add detect extended topology for Zhaoxin CPUs
MAINTAINERS | 6 +
arch/x86/Kconfig.cpu | 13 +++
arch/x86/include/asm/cpufeatures.h | 21 ++++
arch/x86/include/asm/processor.h | 3 +-
arch/x86/kernel/cpu/Makefile | 1 +
arch/x86/kernel/cpu/centaur.c | 47 +++++---
arch/x86/kernel/cpu/zhaoxin.c | 170 +++++++++++++++++++++++++++++
7 files changed, 243 insertions(+), 18 deletions(-)
create mode 100644 arch/x86/kernel/cpu/zhaoxin.c
--
2.20.1
openEuler x86 config update
Cheng Jian (2):
x86/config: enable files cgroup
x86/config: enable some performance or security features
arch/x86/configs/openeuler_defconfig | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
--
2.25.1
[PATCH openEuler-20.09] brcmfmac: Loading the correct firmware for brcm43456
by Cheng Jian 01 Apr '21
From: Ondrej Jirman <megous(a)megous.com>
mainline inclusion
from mainline-v5.2-rc1
commit e3062e05e1cfe378bb9b3fa0bef46711372bcf13
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I3AUFW
CVE: NA
------------------------------------------------
SDIO based brcm43456 is currently misdetected as brcm43455 and the wrong
firmware name is used. Correct the detection and load the correct
firmware file. Chiprev for brcm43456 is "9".
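A note on the masks, as a worked example (the reasoning is added here; it is
not part of the original mail): BRCMF_FW_ENTRY's second argument is a bitmask
over chip revisions, so 0x00000200 is BIT(9) and matches only chiprev 9,
while the 43455 mask shrinks from 0xFFFFFFC0 to 0xFFFFFDC0, i.e.
0xFFFFFFC0 & ~BIT(9), so the two entries no longer overlap.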
Signed-off-by: Ondrej Jirman <megous(a)megous.com>
Signed-off-by: Kalle Valo <kvalo(a)codeaurora.org>
Signed-off-by: Fang Yafen <yafen(a)iscas.ac.cn>
Reviewed-by: Xiongfeng Wang <wangxiongfeng2(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Signed-off-by: Cheng Jian <cj.chengjian(a)huawei.com>
---
drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
index abaed2fa2def..18e9e52f8ee7 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
@@ -621,6 +621,7 @@ BRCMF_FW_DEF(43430A0, "brcmfmac43430a0-sdio");
/* Note the names are not postfixed with a1 for backward compatibility */
BRCMF_FW_DEF(43430A1, "brcmfmac43430-sdio");
BRCMF_FW_DEF(43455, "brcmfmac43455-sdio");
+BRCMF_FW_DEF(43456, "brcmfmac43456-sdio");
BRCMF_FW_DEF(4354, "brcmfmac4354-sdio");
BRCMF_FW_DEF(4356, "brcmfmac4356-sdio");
BRCMF_FW_DEF(4373, "brcmfmac4373-sdio");
@@ -640,7 +641,8 @@ static const struct brcmf_firmware_mapping brcmf_sdio_fwnames[] = {
BRCMF_FW_ENTRY(BRCM_CC_4339_CHIP_ID, 0xFFFFFFFF, 4339),
BRCMF_FW_ENTRY(BRCM_CC_43430_CHIP_ID, 0x00000001, 43430A0),
BRCMF_FW_ENTRY(BRCM_CC_43430_CHIP_ID, 0xFFFFFFFE, 43430A1),
- BRCMF_FW_ENTRY(BRCM_CC_4345_CHIP_ID, 0xFFFFFFC0, 43455),
+ BRCMF_FW_ENTRY(BRCM_CC_4345_CHIP_ID, 0x00000200, 43456),
+ BRCMF_FW_ENTRY(BRCM_CC_4345_CHIP_ID, 0xFFFFFDC0, 43455),
BRCMF_FW_ENTRY(BRCM_CC_4354_CHIP_ID, 0xFFFFFFFF, 4354),
BRCMF_FW_ENTRY(BRCM_CC_4356_CHIP_ID, 0xFFFFFFFF, 4356),
BRCMF_FW_ENTRY(CY_CC_4373_CHIP_ID, 0xFFFFFFFF, 4373)
--
2.25.1
bugfix for openEuler 20.03 @20210401
Barry Song (1):
net: hns: use IRQ_NOAUTOEN to avoid irq is enabled due to request_irq
Chiqijun (4):
net/hinic: permit configuration of rx-vlan-filter with ethtool
net/hinic: Add XDP support for pass and drop actions
net/hinic: Add support for hinic PMD on VF
net/hinic: update hinic version to 2.3.2.18
Colin Ian King (1):
net: hns: make arrays static, makes object smaller
Ding Hui (1):
scsi: ses: Fix crash caused by kfree an invalid pointer
Guangbin Huang (1):
net: hns3: PF add support for pushing link status to VFs
Gustavo A. R. Silva (1):
net: hns: Replace zero-length array with flexible-array member
Jason Yan (1):
net: hns: use true,false for bool variables
Krzysztof Wilczynski (1):
net: hns: Move static keyword to the front of declaration
Naixin Yu (5):
Huawei BMA: Adding Huawei BMA driver: host_edma_drv
Huawei BMA: Adding Huawei BMA driver: host_cdev_drv
Huawei BMA: Adding Huawei BMA driver: host_veth_drv
Huawei BMA: Adding Huawei BMA driver: cdev_veth_drv
Huawei BMA: Adding Huawei BMA driver: host_kbox_drv
Thomas Gleixner (1):
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 152
Tom Rix (1):
net: hns: fix variable used when DEBUG is defined
Wenwen Wang (1):
locks: fix a memory leak bug in __break_lease()
Xu Wang (1):
net: hns: use eth_broadcast_addr() to assign broadcast address
Yang Yingliang (1):
configs: add config BMA to config files
Yonglong Liu (5):
net: hns: remove redundant variable initialization
net: hns: fix non-promiscuous mode does not take effect problem
net: hns: fix ping failed when setting "autoneg off speed 100 duplex
half"
net: hns: fix wrong display of "Advertised link modes"
net: hns: update hns version to 21.2.1
YueHaibing (1):
net: hns: Remove unused macro AE_NAME_PORT_ID_IDX
Zheng Yongjun (1):
hisilicon/hns: convert comma to semicolon
arch/arm64/configs/euleros_defconfig | 1 +
arch/arm64/configs/hulk_defconfig | 1 +
arch/arm64/configs/openeuler_defconfig | 1 +
arch/arm64/configs/syzkaller_defconfig | 1 +
arch/x86/configs/openeuler_defconfig | 1 +
drivers/net/ethernet/hisilicon/hip04_eth.c | 6 +-
drivers/net/ethernet/hisilicon/hix5hd2_gmac.c | 6 +-
drivers/net/ethernet/hisilicon/hns/hnae.c | 9 +-
drivers/net/ethernet/hisilicon/hns/hnae.h | 8 +-
.../net/ethernet/hisilicon/hns/hns_ae_adapt.c | 10 +-
.../ethernet/hisilicon/hns/hns_dsaf_gmac.c | 6 +-
.../ethernet/hisilicon/hns/hns_dsaf_gmac.h | 6 +-
.../net/ethernet/hisilicon/hns/hns_dsaf_mac.c | 15 +-
.../net/ethernet/hisilicon/hns/hns_dsaf_mac.h | 6 +-
.../ethernet/hisilicon/hns/hns_dsaf_main.c | 42 +-
.../ethernet/hisilicon/hns/hns_dsaf_main.h | 7 +-
.../ethernet/hisilicon/hns/hns_dsaf_misc.c | 18 +-
.../ethernet/hisilicon/hns/hns_dsaf_misc.h | 6 +-
.../net/ethernet/hisilicon/hns/hns_dsaf_ppe.c | 6 +-
.../net/ethernet/hisilicon/hns/hns_dsaf_ppe.h | 8 +-
.../net/ethernet/hisilicon/hns/hns_dsaf_rcb.c | 10 +-
.../net/ethernet/hisilicon/hns/hns_dsaf_rcb.h | 8 +-
.../net/ethernet/hisilicon/hns/hns_dsaf_reg.h | 8 +-
.../ethernet/hisilicon/hns/hns_dsaf_xgmac.c | 8 +-
.../ethernet/hisilicon/hns/hns_dsaf_xgmac.h | 6 +-
drivers/net/ethernet/hisilicon/hns/hns_enet.c | 16 +-
drivers/net/ethernet/hisilicon/hns/hns_enet.h | 6 +-
.../net/ethernet/hisilicon/hns/hns_ethtool.c | 17 +-
.../net/ethernet/hisilicon/hns3/hclge_mbx.h | 3 +
.../hisilicon/hns3/hns3pf/hclge_main.c | 36 +-
.../hisilicon/hns3/hns3pf/hclge_main.h | 1 +
.../hisilicon/hns3/hns3pf/hclge_mbx.c | 14 +-
drivers/net/ethernet/hisilicon/hns_mdio.c | 12 +-
drivers/net/ethernet/huawei/Kconfig | 1 +
drivers/net/ethernet/huawei/Makefile | 1 +
drivers/net/ethernet/huawei/bma/Kconfig | 10 +
drivers/net/ethernet/huawei/bma/Makefile | 9 +
.../net/ethernet/huawei/bma/cdev_drv/Makefile | 2 +
.../ethernet/huawei/bma/cdev_drv/bma_cdev.c | 369 +++
.../huawei/bma/cdev_veth_drv/Makefile | 2 +
.../bma/cdev_veth_drv/virtual_cdev_eth_net.c | 1862 ++++++++++++
.../bma/cdev_veth_drv/virtual_cdev_eth_net.h | 299 ++
.../net/ethernet/huawei/bma/edma_drv/Makefile | 2 +
.../huawei/bma/edma_drv/bma_devintf.c | 597 ++++
.../huawei/bma/edma_drv/bma_devintf.h | 40 +
.../huawei/bma/edma_drv/bma_include.h | 116 +
.../ethernet/huawei/bma/edma_drv/bma_pci.c | 533 ++++
.../ethernet/huawei/bma/edma_drv/bma_pci.h | 94 +
.../ethernet/huawei/bma/edma_drv/edma_host.c | 1462 ++++++++++
.../ethernet/huawei/bma/edma_drv/edma_host.h | 351 +++
.../huawei/bma/include/bma_ker_intf.h | 94 +
.../net/ethernet/huawei/bma/kbox_drv/Makefile | 5 +
.../ethernet/huawei/bma/kbox_drv/kbox_dump.c | 121 +
.../ethernet/huawei/bma/kbox_drv/kbox_dump.h | 33 +
.../ethernet/huawei/bma/kbox_drv/kbox_hook.c | 101 +
.../ethernet/huawei/bma/kbox_drv/kbox_hook.h | 33 +
.../huawei/bma/kbox_drv/kbox_include.h | 40 +
.../ethernet/huawei/bma/kbox_drv/kbox_main.c | 168 ++
.../ethernet/huawei/bma/kbox_drv/kbox_main.h | 23 +
.../ethernet/huawei/bma/kbox_drv/kbox_mce.c | 264 ++
.../ethernet/huawei/bma/kbox_drv/kbox_mce.h | 23 +
.../ethernet/huawei/bma/kbox_drv/kbox_panic.c | 187 ++
.../ethernet/huawei/bma/kbox_drv/kbox_panic.h | 25 +
.../huawei/bma/kbox_drv/kbox_printk.c | 363 +++
.../huawei/bma/kbox_drv/kbox_printk.h | 33 +
.../huawei/bma/kbox_drv/kbox_ram_drive.c | 188 ++
.../huawei/bma/kbox_drv/kbox_ram_drive.h | 31 +
.../huawei/bma/kbox_drv/kbox_ram_image.c | 135 +
.../huawei/bma/kbox_drv/kbox_ram_image.h | 84 +
.../huawei/bma/kbox_drv/kbox_ram_op.c | 986 +++++++
.../huawei/bma/kbox_drv/kbox_ram_op.h | 77 +
.../net/ethernet/huawei/bma/veth_drv/Makefile | 2 +
.../ethernet/huawei/bma/veth_drv/veth_hb.c | 2502 +++++++++++++++++
.../ethernet/huawei/bma/veth_drv/veth_hb.h | 440 +++
.../net/ethernet/huawei/hinic/hinic_ethtool.c | 2 +
.../net/ethernet/huawei/hinic/hinic_main.c | 123 +-
.../net/ethernet/huawei/hinic/hinic_nic_cfg.c | 7 +
.../net/ethernet/huawei/hinic/hinic_nic_dev.h | 7 +-
.../ethernet/huawei/hinic/hinic_port_cmd.h | 7 +
drivers/net/ethernet/huawei/hinic/hinic_rx.c | 89 +
drivers/net/ethernet/huawei/hinic/hinic_rx.h | 3 +
drivers/scsi/ses.c | 18 +-
fs/locks.c | 3 +-
83 files changed, 12077 insertions(+), 199 deletions(-)
create mode 100644 drivers/net/ethernet/huawei/bma/Kconfig
create mode 100644 drivers/net/ethernet/huawei/bma/Makefile
create mode 100644 drivers/net/ethernet/huawei/bma/cdev_drv/Makefile
create mode 100644 drivers/net/ethernet/huawei/bma/cdev_drv/bma_cdev.c
create mode 100644 drivers/net/ethernet/huawei/bma/cdev_veth_drv/Makefile
create mode 100644 drivers/net/ethernet/huawei/bma/cdev_veth_drv/virtual_cdev_eth_net.c
create mode 100644 drivers/net/ethernet/huawei/bma/cdev_veth_drv/virtual_cdev_eth_net.h
create mode 100644 drivers/net/ethernet/huawei/bma/edma_drv/Makefile
create mode 100644 drivers/net/ethernet/huawei/bma/edma_drv/bma_devintf.c
create mode 100644 drivers/net/ethernet/huawei/bma/edma_drv/bma_devintf.h
create mode 100644 drivers/net/ethernet/huawei/bma/edma_drv/bma_include.h
create mode 100644 drivers/net/ethernet/huawei/bma/edma_drv/bma_pci.c
create mode 100644 drivers/net/ethernet/huawei/bma/edma_drv/bma_pci.h
create mode 100644 drivers/net/ethernet/huawei/bma/edma_drv/edma_host.c
create mode 100644 drivers/net/ethernet/huawei/bma/edma_drv/edma_host.h
create mode 100644 drivers/net/ethernet/huawei/bma/include/bma_ker_intf.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/Makefile
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_dump.c
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_dump.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_hook.c
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_hook.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_include.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_main.c
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_main.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_mce.c
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_mce.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_panic.c
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_panic.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_printk.c
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_printk.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_ram_drive.c
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_ram_drive.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_ram_image.c
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_ram_image.h
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_ram_op.c
create mode 100644 drivers/net/ethernet/huawei/bma/kbox_drv/kbox_ram_op.h
create mode 100644 drivers/net/ethernet/huawei/bma/veth_drv/Makefile
create mode 100644 drivers/net/ethernet/huawei/bma/veth_drv/veth_hb.c
create mode 100644 drivers/net/ethernet/huawei/bma/veth_drv/veth_hb.h
--
2.25.1
[PATCH kernel-4.19] x86/config: enable some performance or security features
by Cheng Jian 01 Apr '21
hulk inclusion
category: bugfix
bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=20
CVE: NA
-------------------------------------------------
Some features are recommended to be enabled for openEuler on x86,
as they are good for performance or security.
Enable the following features this time:
CONFIG_NUMA_AWARE_SPINLOCKS=y
CONFIG_SHRINK_PAGECACHE=y
Signed-off-by: Cheng Jian <cj.chengjian(a)huawei.com>
---
arch/x86/configs/openeuler_defconfig | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/configs/openeuler_defconfig b/arch/x86/configs/openeuler_defconfig
index dcd6f6a310fd..77ac3dd96f66 100644
--- a/arch/x86/configs/openeuler_defconfig
+++ b/arch/x86/configs/openeuler_defconfig
@@ -380,7 +380,7 @@ CONFIG_ARCH_HAS_MEM_ENCRYPT=y
CONFIG_AMD_MEM_ENCRYPT=y
# CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT is not set
CONFIG_NUMA=y
-# CONFIG_NUMA_AWARE_SPINLOCKS is not set
+CONFIG_NUMA_AWARE_SPINLOCKS=y
CONFIG_AMD_NUMA=y
CONFIG_X86_64_ACPI_NUMA=y
CONFIG_NODES_SPAN_OTHER_NODES=y
@@ -1004,7 +1004,7 @@ CONFIG_THP_SWAP=y
CONFIG_TRANSPARENT_HUGE_PAGECACHE=y
CONFIG_CLEANCACHE=y
CONFIG_FRONTSWAP=y
-# CONFIG_SHRINK_PAGECACHE is not set
+CONFIG_SHRINK_PAGECACHE=y
# CONFIG_CMA is not set
CONFIG_MEM_SOFT_DIRTY=y
CONFIG_ZSWAP=y
--
2.25.1
[PATCH kernel-4.19 v2 6/6] x86/cpu: Add detect extended topology for Zhaoxin CPUs
by LeoLiuoc 30 Mar '21
Detect the extended topology information of Zhaoxin CPUs if available.
The patch is scheduled to be submitted to the kernel mainline in 2021.
---
v1->v2:
- Fix a code logic issue.
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
arch/x86/kernel/cpu/centaur.c | 20 +++++++++++++++++++-
arch/x86/kernel/cpu/zhaoxin.c | 7 ++++++-
2 files changed, 25 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/cpu/centaur.c b/arch/x86/kernel/cpu/centaur.c
index 8735be464bc1..608b8dfa119f 100644
--- a/arch/x86/kernel/cpu/centaur.c
+++ b/arch/x86/kernel/cpu/centaur.c
@@ -115,6 +115,21 @@ static void early_init_centaur(struct cpuinfo_x86 *c)
set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
set_cpu_cap(c, X86_FEATURE_NONSTOP_TSC);
}
+
+ if (c->cpuid_level >= 0x00000001) {
+ u32 eax, ebx, ecx, edx;
+
+ cpuid(0x00000001, &eax, &ebx, &ecx, &edx);
+ /*
+ * If HTT (EDX[28]) is set EBX[16:23] contain the number of
+ * apicids which are reserved per package. Store the resulting
+ * shift value for the package management code.
+ */
+ if (edx & (1U << 28))
+ c->x86_coreid_bits = get_count_order((ebx >> 16) & 0xff);
+ }
+ if (detect_extended_topology_early(c) < 0)
+ detect_ht_early(c);
}
static void centaur_detect_vmx_virtcap(struct cpuinfo_x86 *c)
@@ -158,11 +173,14 @@ static void init_centaur(struct cpuinfo_x86 *c)
clear_cpu_cap(c, 0*32+31);
#endif
early_init_centaur(c);
+ detect_extended_topology(c);
init_intel_cacheinfo(c);
- detect_num_cpu_cores(c);
+ if (!cpu_has(c, X86_FEATURE_XTOPOLOGY)) {
+ detect_num_cpu_cores(c);
#ifdef CONFIG_X86_32
detect_ht(c);
#endif
+ }
if (c->cpuid_level > 9) {
unsigned int eax = cpuid_eax(10);
diff --git a/arch/x86/kernel/cpu/zhaoxin.c b/arch/x86/kernel/cpu/zhaoxin.c
index 452fd0a6bc61..e4ed34361a1f 100644
--- a/arch/x86/kernel/cpu/zhaoxin.c
+++ b/arch/x86/kernel/cpu/zhaoxin.c
@@ -85,6 +85,8 @@ static void early_init_zhaoxin(struct cpuinfo_x86 *c)
c->x86_coreid_bits = get_count_order((ebx >> 16) & 0xff);
}
+ if (detect_extended_topology_early(c) < 0)
+ detect_ht_early(c);
}
static void zhaoxin_detect_vmx_virtcap(struct cpuinfo_x86 *c)
@@ -115,11 +117,14 @@ static void zhaoxin_detect_vmx_virtcap(struct cpuinfo_x86 *c)
static void init_zhaoxin(struct cpuinfo_x86 *c)
{
early_init_zhaoxin(c);
+ detect_extended_topology(c);
init_intel_cacheinfo(c);
- detect_num_cpu_cores(c);
+ if (!cpu_has(c, X86_FEATURE_XTOPOLOGY)) {
+ detect_num_cpu_cores(c);
#ifdef CONFIG_X86_32
detect_ht(c);
#endif
+ }
if (c->cpuid_level > 9) {
unsigned int eax = cpuid_eax(10);
--
2.20.1
Add Zhaoxin feature bits on Zhaoxin CPUs.
The patch is scheduled to be submitted to the kernel mainline in 2021.
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
arch/x86/include/asm/cpufeatures.h | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index f7f9604b10cc..48535113efa6 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -145,8 +145,12 @@
#define X86_FEATURE_HYPERVISOR ( 4*32+31) /* Running on a hypervisor */
/* VIA/Cyrix/Centaur-defined CPU features, CPUID level 0xC0000001, word 5 */
+#define X86_FEATURE_SM2 (5*32+0) /* sm2 present */
+#define X86_FEATURE_SM2_EN (5*32+1) /* sm2 enabled */
#define X86_FEATURE_XSTORE ( 5*32+ 2) /* "rng" RNG present (xstore) */
#define X86_FEATURE_XSTORE_EN ( 5*32+ 3) /* "rng_en" RNG enabled */
+#define X86_FEATURE_CCS (5*32+4) /* "sm3 sm4" present */
+#define X86_FEATURE_CCS_EN (5*32+5) /* "sm3_en sm4_en" enabled */
#define X86_FEATURE_XCRYPT ( 5*32+ 6) /* "ace" on-CPU crypto (xcrypt) */
#define X86_FEATURE_XCRYPT_EN ( 5*32+ 7) /* "ace_en" on-CPU crypto enabled */
#define X86_FEATURE_ACE2 ( 5*32+ 8) /* Advanced Cryptography Engine v2 */
@@ -155,6 +159,23 @@
#define X86_FEATURE_PHE_EN ( 5*32+11) /* PHE enabled */
#define X86_FEATURE_PMM ( 5*32+12) /* PadLock Montgomery Multiplier */
#define X86_FEATURE_PMM_EN ( 5*32+13) /* PMM enabled */
+#define X86_FEATURE_ZX_FMA (5*32+15) /* FMA supported */
+#define X86_FEATURE_PARALLAX (5*32+16) /* Adaptive P-state control present */
+#define X86_FEATURE_PARALLAX_EN (5*32+17) /* Adaptive P-state control enabled */
+#define X86_FEATURE_OVERSTRESS (5*32+18) /* Overstress Feature for auto overclock present */
+#define X86_FEATURE_OVERSTRESS_EN (5*32+19) /* Overstress Feature for auto overclock enabled */
+#define X86_FEATURE_TM3 (5*32+20) /* Thermal Monitor 3 present */
+#define X86_FEATURE_TM3_EN (5*32+21) /* Thermal Monitor 3 enabled */
+#define X86_FEATURE_RNG2 (5*32+22) /* 2nd generation of RNG present */
+#define X86_FEATURE_RNG2_EN (5*32+23) /* 2nd generation of RNG enabled */
+#define X86_FEATURE_SEM (5*32+24) /* SME feature present */
+#define X86_FEATURE_PHE2 (5*32+25) /* SHA384 and SHA512 present */
+#define X86_FEATURE_PHE2_EN (5*32+26) /* SHA384 and SHA512 enabled */
+#define X86_FEATURE_XMODX (5*32+27) /* "rsa" XMODEXP and MONTMUL2 instructions are present */
+#define X86_FEATURE_XMODX_EN (5*32+28) /* "rsa_en" XMODEXP and MONTMUL2 instructions are enabled */
+#define X86_FEATURE_VEX (5*32+29) /* VEX instructions are present */
+#define X86_FEATURE_VEX_EN (5*32+30) /* VEX instructions are enabled */
+#define X86_FEATURE_STK (5*32+31) /* STK are present */
/* More extended AMD flags: CPUID level 0x80000001, ECX, word 6 */
#define X86_FEATURE_LAHF_LM ( 6*32+ 0) /* LAHF/SAHF in long mode */
--
2.20.1
[PATCH kernel-4.19 v2 4/6] x86/cpu/centaur: Add Centaur family >=7 CPUs initialization support
by LeoLiuoc 30 Mar '21
mainline inclusion
from mainline-5.9
commit 33b4711df4c1b3aec7c267c60fc24abccfadd40c
category: x86/cpu
--------------------------------
Add Centaur family >=7 CPUs specific initialization support.
Signed-off-by: Tony W Wang-oc <TonyWWang-oc(a)zhaoxin.com>
Signed-off-by: Borislav Petkov <bp(a)suse.de>
Link:
https://lkml.kernel.org/r/1599562666-31351-3-git-send-email-TonyWWang-oc@zh…
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
arch/x86/kernel/cpu/centaur.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/cpu/centaur.c b/arch/x86/kernel/cpu/centaur.c
index b3be281334e4..8735be464bc1 100644
--- a/arch/x86/kernel/cpu/centaur.c
+++ b/arch/x86/kernel/cpu/centaur.c
@@ -71,6 +71,9 @@ static void init_c3(struct cpuinfo_x86 *c)
c->x86_cache_alignment = c->x86_clflush_size * 2;
set_cpu_cap(c, X86_FEATURE_REP_GOOD);
}
+
+ if (c->x86 >= 7)
+ set_cpu_cap(c, X86_FEATURE_REP_GOOD);
}
enum {
@@ -101,7 +104,8 @@ static void early_init_centaur(struct cpuinfo_x86 *c)
if (c->x86 == 5)
set_cpu_cap(c, X86_FEATURE_CENTAUR_MCR);
#endif
- if (c->x86 == 6 && c->x86_model >= 0xf)
+ if ((c->x86 == 6 && c->x86_model >= 0xf) ||
+ (c->x86 >= 7))
set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
#ifdef CONFIG_X86_64
@@ -235,7 +239,7 @@ static void init_centaur(struct cpuinfo_x86 *c)
sprintf(c->x86_model_id, "WinChip %s", name);
}
#endif
- if (c->x86 == 6)
+ if (c->x86 == 6 || c->x86 >= 7)
init_c3(c);
#ifdef CONFIG_X86_64
set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC);
--
2.20.1
[PATCH kernel-4.19 v2 3/6] x86/cpu/centaur: Replace two-condition switch-case with an if statement
by LeoLiuoc 30 Mar '21
mainline inclusion
from mainline-5.9
commit 8687bdc04128b2bd16faaae11db10128ad0da7b8
category: x86/cpu
--------------------------------
Use normal if statements instead of a two-condition switch-case.
[ bp: Massage commit message. ]
Signed-off-by: Tony W Wang-oc <TonyWWang-oc(a)zhaoxin.com>
Signed-off-by: Borislav Petkov <bp(a)suse.de>
Link:
https://lkml.kernel.org/r/1599562666-31351-2-git-send-email-TonyWWang-oc@zh…
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
arch/x86/kernel/cpu/centaur.c | 23 ++++++++---------------
1 file changed, 8 insertions(+), 15 deletions(-)
diff --git a/arch/x86/kernel/cpu/centaur.c b/arch/x86/kernel/cpu/centaur.c
index b98529e50d6f..b3be281334e4 100644
--- a/arch/x86/kernel/cpu/centaur.c
+++ b/arch/x86/kernel/cpu/centaur.c
@@ -96,18 +96,14 @@ enum {
static void early_init_centaur(struct cpuinfo_x86 *c)
{
- switch (c->x86) {
#ifdef CONFIG_X86_32
- case 5:
- /* Emulate MTRRs using Centaur's MCR. */
+ /* Emulate MTRRs using Centaur's MCR. */
+ if (c->x86 == 5)
set_cpu_cap(c, X86_FEATURE_CENTAUR_MCR);
- break;
#endif
- case 6:
- if (c->x86_model >= 0xf)
- set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
- break;
- }
+ if (c->x86 == 6 && c->x86_model >= 0xf)
+ set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
+
#ifdef CONFIG_X86_64
set_cpu_cap(c, X86_FEATURE_SYSENTER32);
#endif
@@ -176,9 +172,8 @@ static void init_centaur(struct cpuinfo_x86 *c)
set_cpu_cap(c, X86_FEATURE_ARCH_PERFMON);
}
- switch (c->x86) {
#ifdef CONFIG_X86_32
- case 5:
+ if (c->x86 == 5) {
switch (c->x86_model) {
case 4:
name = "C6";
@@ -238,12 +233,10 @@ static void init_centaur(struct cpuinfo_x86 *c)
c->x86_cache_size = (cc>>24)+(dd>>24);
}
sprintf(c->x86_model_id, "WinChip %s", name);
- break;
+ }
#endif
- case 6:
+ if (c->x86 == 6)
init_c3(c);
- break;
- }
#ifdef CONFIG_X86_64
set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC);
#endif
--
2.20.1
[PATCH kernel-4.19 v2 2/6] x86/cpu: Remove redundant cpu_detect_cache_sizes() call
by LeoLiuoc 30 Mar '21
mainline inclusion
from mainline-5.6
commit 283bab9809786cf41798512f5c1e97f4b679ba96
category: x86/cpu
--------------------------------
Both functions call init_intel_cacheinfo() which computes L2 and L3 cache
sizes from CPUID(4). But then they also call cpu_detect_cache_sizes() a bit
later which computes ->x86_tlbsize and L2 size from CPUID(80000006).
However, the latter call is not needed because
- on these CPUs, CPUID(80000006).EBX for ->x86_tlbsize is reserved
- CPUID(80000006).ECX for the L2 size has the same result as CPUID(4)
Therefore, remove the latter call to simplify the code.
[ bp: Rewrite commit message. ]
Signed-off-by: Tony W Wang-oc <TonyWWang-oc(a)zhaoxin.com>
Signed-off-by: Borislav Petkov <bp(a)suse.de>
Link:
https://lkml.kernel.org/r/1579075257-6985-1-git-send-email-TonyWWang-oc@zha….
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
arch/x86/kernel/cpu/centaur.c | 2 --
arch/x86/kernel/cpu/zhaoxin.c | 2 --
2 files changed, 4 deletions(-)
diff --git a/arch/x86/kernel/cpu/centaur.c b/arch/x86/kernel/cpu/centaur.c
index 14433ff5b828..b98529e50d6f 100644
--- a/arch/x86/kernel/cpu/centaur.c
+++ b/arch/x86/kernel/cpu/centaur.c
@@ -71,8 +71,6 @@ static void init_c3(struct cpuinfo_x86 *c)
c->x86_cache_alignment = c->x86_clflush_size * 2;
set_cpu_cap(c, X86_FEATURE_REP_GOOD);
}
-
- cpu_detect_cache_sizes(c);
}
enum {
diff --git a/arch/x86/kernel/cpu/zhaoxin.c b/arch/x86/kernel/cpu/zhaoxin.c
index 8e6f2f4b4afe..452fd0a6bc61 100644
--- a/arch/x86/kernel/cpu/zhaoxin.c
+++ b/arch/x86/kernel/cpu/zhaoxin.c
@@ -58,8 +58,6 @@ static void init_zhaoxin_cap(struct cpuinfo_x86 *c)
if (c->x86 >= 0x6)
set_cpu_cap(c, X86_FEATURE_REP_GOOD);
-
- cpu_detect_cache_sizes(c);
}
static void early_init_zhaoxin(struct cpuinfo_x86 *c)
--
2.20.1
[PATCH kernel-4.19 v2 1/6] x86/cpu: Create Zhaoxin processors architecture support file
by LeoLiuoc 30 Mar '21
mainline inclusion
from mainline-5.2
commit 761fdd5e3327db6c646a09bab5ad48cd42680cd2
category: x86/cpu
--------------------------------
Add x86 architecture support for new Zhaoxin processors.
Carve out initialization code needed by Zhaoxin processors into
a separate compilation unit.
To identify Zhaoxin CPU, add a new vendor type X86_VENDOR_ZHAOXIN
for system recognition.
Signed-off-by: Tony W Wang-oc <TonyWWang-oc(a)zhaoxin.com>
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Cc: "hpa(a)zytor.com" <hpa(a)zytor.com>
Cc: "gregkh(a)linuxfoundation.org" <gregkh(a)linuxfoundation.org>
Cc: "rjw(a)rjwysocki.net" <rjw(a)rjwysocki.net>
Cc: "lenb(a)kernel.org" <lenb(a)kernel.org>
Cc: David Wang <DavidWang(a)zhaoxin.com>
Cc: "Cooper Yan(BJ-RD)" <CooperYan(a)zhaoxin.com>
Cc: "Qiyuan Wang(BJ-RD)" <QiyuanWang(a)zhaoxin.com>
Cc: "Herry Yang(BJ-RD)" <HerryYang(a)zhaoxin.com>
Link: https://lkml.kernel.org/r/01042674b2f741b2aed1f797359bdffb@zhaoxin.com
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
MAINTAINERS | 6 ++
arch/x86/Kconfig.cpu | 13 +++
arch/x86/include/asm/processor.h | 3 +-
arch/x86/kernel/cpu/Makefile | 1 +
arch/x86/kernel/cpu/zhaoxin.c | 167 +++++++++++++++++++++++++++++++
5 files changed, 189 insertions(+), 1 deletion(-)
create mode 100644 arch/x86/kernel/cpu/zhaoxin.c
diff --git a/MAINTAINERS b/MAINTAINERS
index ada8fbdd1d71..210fdd54b496 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -16265,6 +16265,12 @@ Q: https://patchwork.linuxtv.org/project/linux-media/list/
S: Maintained
F: drivers/media/dvb-frontends/zd1301_demod*
+ZHAOXIN PROCESSOR SUPPORT
+M: Tony W Wang-oc <TonyWWang-oc(a)zhaoxin.com>
+L: linux-kernel(a)vger.kernel.org
+S: Maintained
+F: arch/x86/kernel/cpu/zhaoxin.c
+
ZPOOL COMPRESSED PAGE STORAGE API
M: Dan Streetman <ddstreet(a)ieee.org>
L: linux-mm(a)kvack.org
diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
index 76e274a0fd0a..d1a51794c587 100644
--- a/arch/x86/Kconfig.cpu
+++ b/arch/x86/Kconfig.cpu
@@ -480,3 +480,16 @@ config CPU_SUP_UMC_32
CPU might render the kernel unbootable.
If unsure, say N.
+
+config CPU_SUP_ZHAOXIN
+ default y
+ bool "Support Zhaoxin processors" if PROCESSOR_SELECT
+ help
+ This enables detection, tunings and quirks for Zhaoxin processors
+
+ You need this enabled if you want your kernel to run on a
+ Zhaoxin CPU. Disabling this option on other types of CPUs
+ makes the kernel a tiny bit smaller. Disabling it on a Zhaoxin
+ CPU might render the kernel unbootable.
+
+ If unsure, say N.
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index af99d4137db9..e5b9308c312f 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -156,7 +156,8 @@ enum cpuid_regs_idx {
#define X86_VENDOR_TRANSMETA 7
#define X86_VENDOR_NSC 8
#define X86_VENDOR_HYGON 9
-#define X86_VENDOR_NUM 10
+#define X86_VENDOR_ZHAOXIN 10
+#define X86_VENDOR_NUM 11
#define X86_VENDOR_UNKNOWN 0xff
diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile
index e46d718ba4cc..69bba2b1ef08 100644
--- a/arch/x86/kernel/cpu/Makefile
+++ b/arch/x86/kernel/cpu/Makefile
@@ -35,6 +35,7 @@ obj-$(CONFIG_CPU_SUP_CYRIX_32) += cyrix.o
obj-$(CONFIG_CPU_SUP_CENTAUR) += centaur.o
obj-$(CONFIG_CPU_SUP_TRANSMETA_32) += transmeta.o
obj-$(CONFIG_CPU_SUP_UMC_32) += umc.o
+obj-$(CONFIG_CPU_SUP_ZHAOXIN) += zhaoxin.o
obj-$(CONFIG_INTEL_RDT) += intel_rdt.o intel_rdt_rdtgroup.o intel_rdt_monitor.o
obj-$(CONFIG_INTEL_RDT) += intel_rdt_ctrlmondata.o intel_rdt_pseudo_lock.o
diff --git a/arch/x86/kernel/cpu/zhaoxin.c b/arch/x86/kernel/cpu/zhaoxin.c
new file mode 100644
index 000000000000..8e6f2f4b4afe
--- /dev/null
+++ b/arch/x86/kernel/cpu/zhaoxin.c
@@ -0,0 +1,167 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/sched.h>
+#include <linux/sched/clock.h>
+
+#include <asm/cpufeature.h>
+
+#include "cpu.h"
+
+#define MSR_ZHAOXIN_FCR57 0x00001257
+
+#define ACE_PRESENT (1 << 6)
+#define ACE_ENABLED (1 << 7)
+#define ACE_FCR (1 << 7) /* MSR_ZHAOXIN_FCR */
+
+#define RNG_PRESENT (1 << 2)
+#define RNG_ENABLED (1 << 3)
+#define RNG_ENABLE (1 << 8) /* MSR_ZHAOXIN_RNG */
+
+#define X86_VMX_FEATURE_PROC_CTLS_TPR_SHADOW 0x00200000
+#define X86_VMX_FEATURE_PROC_CTLS_VNMI 0x00400000
+#define X86_VMX_FEATURE_PROC_CTLS_2ND_CTLS 0x80000000
+#define X86_VMX_FEATURE_PROC_CTLS2_VIRT_APIC 0x00000001
+#define X86_VMX_FEATURE_PROC_CTLS2_EPT 0x00000002
+#define X86_VMX_FEATURE_PROC_CTLS2_VPID 0x00000020
+
+static void init_zhaoxin_cap(struct cpuinfo_x86 *c)
+{
+ u32 lo, hi;
+
+ /* Test for Extended Feature Flags presence */
+ if (cpuid_eax(0xC0000000) >= 0xC0000001) {
+ u32 tmp = cpuid_edx(0xC0000001);
+
+ /* Enable ACE unit, if present and disabled */
+ if ((tmp & (ACE_PRESENT | ACE_ENABLED)) == ACE_PRESENT) {
+ rdmsr(MSR_ZHAOXIN_FCR57, lo, hi);
+ /* Enable ACE unit */
+ lo |= ACE_FCR;
+ wrmsr(MSR_ZHAOXIN_FCR57, lo, hi);
+ pr_info("CPU: Enabled ACE h/w crypto\n");
+ }
+
+ /* Enable RNG unit, if present and disabled */
+ if ((tmp & (RNG_PRESENT | RNG_ENABLED)) == RNG_PRESENT) {
+ rdmsr(MSR_ZHAOXIN_FCR57, lo, hi);
+ /* Enable RNG unit */
+ lo |= RNG_ENABLE;
+ wrmsr(MSR_ZHAOXIN_FCR57, lo, hi);
+ pr_info("CPU: Enabled h/w RNG\n");
+ }
+
+ /*
+ * Store Extended Feature Flags as word 5 of the CPU
+ * capability bit array
+ */
+ c->x86_capability[CPUID_C000_0001_EDX] = cpuid_edx(0xC0000001);
+ }
+
+ if (c->x86 >= 0x6)
+ set_cpu_cap(c, X86_FEATURE_REP_GOOD);
+
+ cpu_detect_cache_sizes(c);
+}
+
+static void early_init_zhaoxin(struct cpuinfo_x86 *c)
+{
+ if (c->x86 >= 0x6)
+ set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
+#ifdef CONFIG_X86_64
+ set_cpu_cap(c, X86_FEATURE_SYSENTER32);
+#endif
+ if (c->x86_power & (1 << 8)) {
+ set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
+ set_cpu_cap(c, X86_FEATURE_NONSTOP_TSC);
+ }
+
+ if (c->cpuid_level >= 0x00000001) {
+ u32 eax, ebx, ecx, edx;
+
+ cpuid(0x00000001, &eax, &ebx, &ecx, &edx);
+ /*
+ * If HTT (EDX[28]) is set EBX[16:23] contain the number of
+ * apicids which are reserved per package. Store the resulting
+ * shift value for the package management code.
+ */
+ if (edx & (1U << 28))
+ c->x86_coreid_bits = get_count_order((ebx >> 16) & 0xff);
+ }
+
+}
+
+static void zhaoxin_detect_vmx_virtcap(struct cpuinfo_x86 *c)
+{
+ u32 vmx_msr_low, vmx_msr_high, msr_ctl, msr_ctl2;
+
+ rdmsr(MSR_IA32_VMX_PROCBASED_CTLS, vmx_msr_low, vmx_msr_high);
+ msr_ctl = vmx_msr_high | vmx_msr_low;
+
+ if (msr_ctl & X86_VMX_FEATURE_PROC_CTLS_TPR_SHADOW)
+ set_cpu_cap(c, X86_FEATURE_TPR_SHADOW);
+ if (msr_ctl & X86_VMX_FEATURE_PROC_CTLS_VNMI)
+ set_cpu_cap(c, X86_FEATURE_VNMI);
+ if (msr_ctl & X86_VMX_FEATURE_PROC_CTLS_2ND_CTLS) {
+ rdmsr(MSR_IA32_VMX_PROCBASED_CTLS2,
+ vmx_msr_low, vmx_msr_high);
+ msr_ctl2 = vmx_msr_high | vmx_msr_low;
+ if ((msr_ctl2 & X86_VMX_FEATURE_PROC_CTLS2_VIRT_APIC) &&
+ (msr_ctl & X86_VMX_FEATURE_PROC_CTLS_TPR_SHADOW))
+ set_cpu_cap(c, X86_FEATURE_FLEXPRIORITY);
+ if (msr_ctl2 & X86_VMX_FEATURE_PROC_CTLS2_EPT)
+ set_cpu_cap(c, X86_FEATURE_EPT);
+ if (msr_ctl2 & X86_VMX_FEATURE_PROC_CTLS2_VPID)
+ set_cpu_cap(c, X86_FEATURE_VPID);
+ }
+}
+
+static void init_zhaoxin(struct cpuinfo_x86 *c)
+{
+ early_init_zhaoxin(c);
+ init_intel_cacheinfo(c);
+ detect_num_cpu_cores(c);
+#ifdef CONFIG_X86_32
+ detect_ht(c);
+#endif
+
+ if (c->cpuid_level > 9) {
+ unsigned int eax = cpuid_eax(10);
+
+ /*
+ * Check for version and the number of counters
+ * Version(eax[7:0]) can't be 0;
+ * Counters(eax[15:8]) should be greater than 1;
+ */
+ if ((eax & 0xff) && (((eax >> 8) & 0xff) > 1))
+ set_cpu_cap(c, X86_FEATURE_ARCH_PERFMON);
+ }
+
+ if (c->x86 >= 0x6)
+ init_zhaoxin_cap(c);
+#ifdef CONFIG_X86_64
+ set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC);
+#endif
+
+ if (cpu_has(c, X86_FEATURE_VMX))
+ zhaoxin_detect_vmx_virtcap(c);
+}
+
+#ifdef CONFIG_X86_32
+static unsigned int
+zhaoxin_size_cache(struct cpuinfo_x86 *c, unsigned int size)
+{
+ return size;
+}
+#endif
+
+static const struct cpu_dev zhaoxin_cpu_dev = {
+ .c_vendor = "zhaoxin",
+ .c_ident = { " Shanghai " },
+ .c_early_init = early_init_zhaoxin,
+ .c_init = init_zhaoxin,
+#ifdef CONFIG_X86_32
+ .legacy_cache_size = zhaoxin_size_cache,
+#endif
+ .c_x86_vendor = X86_VENDOR_ZHAOXIN,
+};
+
+cpu_dev_register(zhaoxin_cpu_dev);
--
2.20.1
Zhaoxin has new SB & NB HDAC controllers and a new NB HDAC codec.
This patch set adds support for them.
LeoLiu-oc (3):
ALSA: hda: Add support of Zhaoxin SB HDAC
ALSA: hda: Add support of Zhaoxin NB HDAC
ALSA: hda: Add support of Zhaoxin NB HDAC codec
sound/pci/hda/hda_controller.c | 17 ++++++++-
sound/pci/hda/hda_controller.h | 2 +
sound/pci/hda/hda_intel.c | 68 +++++++++++++++++++++++++++++++++-
sound/pci/hda/patch_hdmi.c | 26 +++++++++++++
4 files changed, 111 insertions(+), 2 deletions(-)
--
2.20.1
Some Zhaoxin xHCI controllers follow the USB 3.1 spec but only support
the Gen1 speed of 5 Gbps. In the Linux kernel, however, if an xHCI
controller supports USB 3.1, the root hub speed is shown as 10 Gbps.
To fix this issue, read the USB speed IDs supported by the xHCI
controller to determine the root hub speed.
The patch is scheduled to be submitted to the kernel mainline in 2021.
v1->v2:
- Use quirks instead of vendor id.
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
drivers/usb/host/xhci.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
index fad995b5635e..a26d4040a761 100644
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -5079,6 +5079,7 @@ int xhci_gen_setup(struct usb_hcd *hcd, xhci_get_quirks_t get_quirks)
*/
struct device *dev = hcd->self.sysdev;
unsigned int minor_rev;
+ u8 i, j;
int retval;
/* Accept arbitrarily long scatter-gather lists */
@@ -5133,6 +5134,24 @@ int xhci_gen_setup(struct usb_hcd *hcd, xhci_get_quirks_t get_quirks)
hcd->self.root_hub->speed = USB_SPEED_SUPER_PLUS;
break;
}
+
+ /* USB 3.1 has Gen1 and Gen2; some Zhaoxin xHCI controllers
+ * follow the USB 3.1 spec but only support Gen1
+ */
+ if (xhci->quirks == XHCI_ZHAOXIN_HOST) {
+ minor_rev = 0;
+ for (j = 0; j < xhci->num_port_caps; j++) {
+ for (i = 0; i < xhci->port_caps[j].psi_count; i++) {
+ if (XHCI_EXT_PORT_PSIV(xhci->port_caps[j].psi[i]) >= 5)
+ minor_rev = 1;
+ }
+ if (minor_rev != 1) {
+ hcd->speed = HCD_USB3;
+ hcd->self.root_hub->speed = USB_SPEED_SUPER;
+ }
+ }
+ }
+
xhci_info(xhci, "Host supports USB 3.%x %sSuperSpeed\n",
minor_rev,
minor_rev ? "Enhanced " : "");
--
2.20.1
[PATCH kernel-4.19 v2] xhci: fix issue of cross page boundary in TRB prefetch mechanism
by LeoLiu-oc 30 Mar '21
On some Zhaoxin platforms, the xHCI controller prefetches TRBs for
performance. This TRB prefetch mechanism may cross a page boundary and
access memory that does not belong to the xHCI controller. To fix this
issue, allocate two pages per TRB segment and use only the first page.
The patch is scheduled to be submitted to the kernel mainline in 2021.
v1->v2:
- Use quirks instead of vendor id.
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
drivers/usb/host/xhci-mem.c | 10 ++++++++--
drivers/usb/host/xhci-pci.c | 5 +++++
drivers/usb/host/xhci.h | 1 +
3 files changed, 14 insertions(+), 2 deletions(-)
diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
index 9e87c282a743..aff1ccb94399 100644
--- a/drivers/usb/host/xhci-mem.c
+++ b/drivers/usb/host/xhci-mem.c
@@ -2450,8 +2450,14 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
* and our use of dma addresses in the trb_address_map radix tree needs
* TRB_SEGMENT_SIZE alignment, so we pick the greater alignment need.
*/
- xhci->segment_pool = dma_pool_create("xHCI ring segments", dev,
- TRB_SEGMENT_SIZE, TRB_SEGMENT_SIZE, xhci->page_size);
+ /* With xHCI TRB prefetch patch: to fix cross page boundary access
+ * issue in IOV environment */
+ if (xhci->quirks == XHCI_ZHAOXIN_TRB_FETCH) {
+ xhci->segment_pool = dma_pool_create("xHCI ring segments", dev,
+ TRB_SEGMENT_SIZE*2, TRB_SEGMENT_SIZE*2, xhci->page_size*2);
+ } else {
+ xhci->segment_pool = dma_pool_create("xHCI ring segments", dev,
+ TRB_SEGMENT_SIZE, TRB_SEGMENT_SIZE, xhci->page_size);
+ }
/* See Table 46 and Note on Figure 55 */
xhci->device_pool = dma_pool_create("xHCI input/output contexts", dev,
diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
index 6c6b29901c5e..798b660f2fd0 100644
--- a/drivers/usb/host/xhci-pci.c
+++ b/drivers/usb/host/xhci-pci.c
@@ -237,6 +237,11 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
pdev->device == 0x3432)
xhci->quirks |= XHCI_BROKEN_STREAMS;
+ if (pdev->vendor == PCI_VENDOR_ID_ZHAOXIN &&
+ (pdev->device == 0x9202 ||
+ pdev->device == 0x9203))
+ xhci->quirks |= XHCI_ZHAOXIN_TRB_FETCH;
+
if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
pdev->device == PCI_DEVICE_ID_ASMEDIA_1042_XHCI)
xhci->quirks |= XHCI_BROKEN_STREAMS;
diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
index 069390a1f2ac..3ae8e25a2622 100644
--- a/drivers/usb/host/xhci.h
+++ b/drivers/usb/host/xhci.h
@@ -1874,6 +1874,7 @@ struct xhci_hcd {
#define XHCI_SNPS_BROKEN_SUSPEND BIT_ULL(35)
#define XHCI_ZHAOXIN_HOST BIT_ULL(36)
#define XHCI_DISABLE_SPARSE BIT_ULL(38)
+#define XHCI_ZHAOXIN_TRB_FETCH BIT_ULL(39)
unsigned int num_active_eps;
unsigned int limit_active_eps;
--
2.20.1
[PATCH kernel-4.19 v2] crypto: x86/crc32c-intel - Don't match some Zhaoxin CPUs
by LeoLiu-oc 30 Mar '21
The crc32c-intel driver matches CPUs supporting X86_FEATURE_XMM4_2.
On platforms with Zhaoxin CPUs supporting this x86 feature, when
crc32c-intel and crc32c-generic are both registered, the system will use
crc32c-intel because its .cra_priority is greater than crc32c-generic's.
When running the lmbench3 Create and Delete file test on partitions with
ext4 metadata checksums enabled, we found that the crc32c-generic driver
achieves about 20% better performance than crc32c-intel on some Zhaoxin
CPUs.
To get that performance gain, these Zhaoxin CPUs should use the
crc32c-generic driver, so remove support for them from crc32c-intel.
This patch was submitted to the mainline kernel but was not accepted; the
upstream maintainer's response was "Then create a BUG flag for it,".
We think this is not a CPU bug for Zhaoxin CPUs, so we patch the crc32c
driver for Zhaoxin CPUs rather than reporting a BUG.
https://lkml.org/lkml/2020/12/11/308
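For background, a minimal sketch of the priority mechanism the message refers
to (the priority values are the ones used by the mainline drivers; the
snippet is illustrative, not part of the patch):
/* Both drivers register an algorithm named "crc32c"; the crypto core
 * resolves the name to the implementation with the highest
 * .cra_priority (crc32c-generic uses 100, crc32c-intel uses 200).
 */
struct crypto_shash *tfm = crypto_alloc_shash("crc32c", 0, 0);
/* With crc32c-intel registered, this binds to crc32c-intel; after this
 * patch it is not registered on the affected Zhaoxin CPUs, so the
 * lookup falls back to crc32c-generic.
 */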
v1->v2:
- Fix some coding style issues
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
arch/x86/crypto/crc32c-intel_glue.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/arch/x86/crypto/crc32c-intel_glue.c b/arch/x86/crypto/crc32c-intel_glue.c
index 5773e1161072..168bce79bedd 100644
--- a/arch/x86/crypto/crc32c-intel_glue.c
+++ b/arch/x86/crypto/crc32c-intel_glue.c
@@ -242,8 +242,15 @@ MODULE_DEVICE_TABLE(x86cpu, crc32c_cpu_id);
static int __init crc32c_intel_mod_init(void)
{
+ struct cpuinfo_x86 *c = &boot_cpu_data;
+
if (!x86_match_cpu(crc32c_cpu_id))
return -ENODEV;
+
+ if ((c->x86_vendor == X86_VENDOR_ZHAOXIN || c->x86_vendor == X86_VENDOR_CENTAUR) &&
+ (c->x86 <= 7 && c->x86_model <= 59)) {
+ return -ENODEV;
+ }
#ifdef CONFIG_X86_64
if (boot_cpu_has(X86_FEATURE_PCLMULQDQ)) {
alg.update = crc32c_pcl_intel_update;
--
2.20.1
2
1

[PATCH kernel-4.19 v2] ata: sata_zhaoxin: Add support for Zhaoxin Serial ATA
by LeoLiu-oc 30 Mar '21
by LeoLiu-oc 30 Mar '21
30 Mar '21
Add Zhaoxin Serial ATA support for Zhaoxin CPUs.
The patch is scheduled to be submitted to the kernel mainline in 2021.
v1->v2:
- Fix some coding style issues
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
drivers/ata/Kconfig | 8 +
drivers/ata/Makefile | 1 +
drivers/ata/sata_zhaoxin.c | 384 +++++++++++++++++++++++++++++++++++++
3 files changed, 393 insertions(+)
create mode 100644 drivers/ata/sata_zhaoxin.c
diff --git a/drivers/ata/Kconfig b/drivers/ata/Kconfig
index 99698d7fe..78a6338d0 100644
--- a/drivers/ata/Kconfig
+++ b/drivers/ata/Kconfig
@@ -494,6 +494,14 @@ config SATA_VITESSE
If unsure, say N.
+config SATA_ZHAOXIN
+ tristate "ZhaoXin SATA support"
+ depends on PCI
+ help
+ This option enables support for ZhaoXin Serial ATA.
+
+ If unsure, say N.
+
comment "PATA SFF controllers with BMDMA"
config PATA_ALI
diff --git a/drivers/ata/Makefile b/drivers/ata/Makefile
index d21cdd83f..2d9220311 100644
--- a/drivers/ata/Makefile
+++ b/drivers/ata/Makefile
@@ -44,6 +44,7 @@ obj-$(CONFIG_SATA_SIL) += sata_sil.o
obj-$(CONFIG_SATA_SIS) += sata_sis.o
obj-$(CONFIG_SATA_SVW) += sata_svw.o
obj-$(CONFIG_SATA_ULI) += sata_uli.o
+obj-$(CONFIG_SATA_ZHAOXIN) += sata_zhaoxin.o
obj-$(CONFIG_SATA_VIA) += sata_via.o
obj-$(CONFIG_SATA_VITESSE) += sata_vsc.o
diff --git a/drivers/ata/sata_zhaoxin.c b/drivers/ata/sata_zhaoxin.c
new file mode 100644
index 000000000..f4a694355
--- /dev/null
+++ b/drivers/ata/sata_zhaoxin.c
@@ -0,0 +1,384 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * sata_zhaoxin.c - ZhaoXin Serial ATA controllers
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/blkdev.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <scsi/scsi.h>
+#include <scsi/scsi_cmnd.h>
+#include <scsi/scsi_host.h>
+#include <linux/libata.h>
+
+#define DRV_NAME "sata_zx"
+#define DRV_VERSION "2.6.1"
+
+enum board_ids_enum {
+ cnd001,
+};
+
+enum {
+ SATA_CHAN_ENAB = 0x40, /* SATA channel enable */
+ SATA_INT_GATE = 0x41, /* SATA interrupt gating */
+ SATA_NATIVE_MODE = 0x42, /* Native mode enable */
+ PATA_UDMA_TIMING = 0xB3, /* PATA timing for DMA/ cable detect */
+ PATA_PIO_TIMING = 0xAB, /* PATA timing register */
+
+ PORT0 = (1 << 1),
+ PORT1 = (1 << 0),
+ ALL_PORTS = PORT0 | PORT1,
+
+ NATIVE_MODE_ALL = (1 << 7) | (1 << 6) | (1 << 5) | (1 << 4),
+
+ SATA_EXT_PHY = (1 << 6), /* 0==use PATA, 1==ext phy */
+};
+
+static int szx_init_one(struct pci_dev *pdev, const struct pci_device_id *ent);
+static int cnd001_scr_read(struct ata_link *link, unsigned int scr, u32 *val);
+static int cnd001_scr_write(struct ata_link *link, unsigned int scr, u32 val);
+static int szx_hardreset(struct ata_link *link, unsigned int *class,
+ unsigned long deadline);
+
+static void szx_tf_load(struct ata_port *ap, const struct ata_taskfile *tf);
+
+static const struct pci_device_id szx_pci_tbl[] = {
+ { PCI_VDEVICE(ZHAOXIN, 0x9002), cnd001 },
+ { PCI_VDEVICE(ZHAOXIN, 0x9003), cnd001 },
+
+ { } /* terminate list */
+};
+
+static struct pci_driver szx_pci_driver = {
+ .name = DRV_NAME,
+ .id_table = szx_pci_tbl,
+ .probe = szx_init_one,
+#ifdef CONFIG_PM_SLEEP
+ .suspend = ata_pci_device_suspend,
+ .resume = ata_pci_device_resume,
+#endif
+ .remove = ata_pci_remove_one,
+};
+
+static struct scsi_host_template szx_sht = {
+ ATA_BMDMA_SHT(DRV_NAME),
+};
+
+static struct ata_port_operations szx_base_ops = {
+ .inherits = &ata_bmdma_port_ops,
+ .sff_tf_load = szx_tf_load,
+};
+
+static struct ata_port_operations cnd001_ops = {
+ .inherits = &szx_base_ops,
+ .hardreset = szx_hardreset,
+ .scr_read = cnd001_scr_read,
+ .scr_write = cnd001_scr_write,
+};
+
+static struct ata_port_info cnd001_port_info = {
+ .flags = ATA_FLAG_SATA | ATA_FLAG_SLAVE_POSS,
+ .pio_mask = ATA_PIO4,
+ .mwdma_mask = ATA_MWDMA2,
+ .udma_mask = ATA_UDMA6,
+ .port_ops = &cnd001_ops,
+};
+
+
+static int szx_hardreset(struct ata_link *link, unsigned int *class,
+ unsigned long deadline)
+{
+ int rc;
+
+ rc = sata_std_hardreset(link, class, deadline);
+ if (!rc || rc == -EAGAIN) {
+ struct ata_port *ap = link->ap;
+ int pmp = link->pmp;
+ int tmprc;
+
+ if (pmp) {
+ ap->ops->sff_dev_select(ap, pmp);
+ tmprc = ata_sff_wait_ready(&ap->link, deadline);
+ } else {
+ tmprc = ata_sff_wait_ready(link, deadline);
+ }
+ if (tmprc)
+ ata_link_err(link, "COMRESET failed for wait (errno=%d)\n",
+ rc);
+ else
+ ata_link_err(link, "wait for bsy success\n");
+
+ ata_link_err(link, "COMRESET success (errno=%d) ap=%d link %d\n",
+ rc, link->ap->port_no, link->pmp);
+ } else {
+ ata_link_err(link, "COMRESET failed (errno=%d) ap=%d link %d\n",
+ rc, link->ap->port_no, link->pmp);
+ }
+ return rc;
+}
+
+static int cnd001_scr_read(struct ata_link *link, unsigned int scr, u32 *val)
+{
+ static const u8 ipm_tbl[] = { 1, 2, 6, 0 };
+ struct pci_dev *pdev = to_pci_dev(link->ap->host->dev);
+ int slot = 2 * link->ap->port_no + link->pmp;
+ u32 v = 0;
+ u8 raw;
+
+ switch (scr) {
+ case SCR_STATUS:
+ pci_read_config_byte(pdev, 0xA0 + slot, &raw);
+
+ /* read the DET field, bit0 and 1 of the config byte */
+ v |= raw & 0x03;
+
+ /* read the SPD field, bit4 of the configure byte */
+ v |= raw & 0x30;
+
+ /* read the IPM field, bit2 and 3 of the config byte */
+ v |= ((ipm_tbl[(raw >> 2) & 0x3])<<8);
+ break;
+
+ case SCR_ERROR:
+ /* devices 0x9002 and 0x9003 use 0xA8 as base */
+ WARN_ON(pdev->device != 0x9002 && pdev->device != 0x9003);
+ pci_write_config_byte(pdev, 0x42, slot);
+ pci_read_config_dword(pdev, 0xA8, &v);
+ break;
+
+ case SCR_CONTROL:
+ pci_read_config_byte(pdev, 0xA4 + slot, &raw);
+
+ /* read the DET field, bit0 and bit1 */
+ v |= ((raw & 0x02) << 1) | (raw & 0x01);
+
+ /* read the IPM field, bit2 and bit3 */
+ v |= ((raw >> 2) & 0x03) << 8;
+
+ break;
+
+ default:
+ return -EINVAL;
+ }
+
+ *val = v;
+ return 0;
+}
+
+static int cnd001_scr_write(struct ata_link *link, unsigned int scr, u32 val)
+{
+ struct pci_dev *pdev = to_pci_dev(link->ap->host->dev);
+ int slot = 2 * link->ap->port_no + link->pmp;
+ u32 v = 0;
+
+ WARN_ON(pdev == NULL);
+
+ switch (scr) {
+ case SCR_ERROR:
+ /* devices 0x9002 and 0x9003 use 0xA8 as base */
+ WARN_ON(pdev->device != 0x9002 && pdev->device != 0x9003);
+ pci_write_config_byte(pdev, 0x42, slot);
+ pci_write_config_dword(pdev, 0xA8, val);
+ return 0;
+
+ case SCR_CONTROL:
+ /* set the DET field */
+ v |= ((val & 0x4) >> 1) | (val & 0x1);
+
+ /* set the IPM field */
+ v |= ((val >> 8) & 0x3) << 2;
+
+ pci_write_config_byte(pdev, 0xA4 + slot, v);
+
+ return 0;
+
+ default:
+ return -EINVAL;
+ }
+}
+
+
+/**
+ * szx_tf_load - send taskfile registers to host controller
+ * @ap: Port to which output is sent
+ * @tf: ATA taskfile register set
+ *
+ * Outputs ATA taskfile to standard ATA host controller.
+ *
+ * This fixes an internal bug of ZX chipsets, which reset the
+ * device register after the IEN bit in the ctl register is
+ * changed.
+ */
+static void szx_tf_load(struct ata_port *ap, const struct ata_taskfile *tf)
+{
+ struct ata_taskfile ttf;
+
+ if (tf->ctl != ap->last_ctl) {
+ ttf = *tf;
+ ttf.flags |= ATA_TFLAG_DEVICE;
+ tf = &ttf;
+ }
+ ata_sff_tf_load(ap, tf);
+}
+
+static const unsigned int szx_bar_sizes[] = {
+ 8, 4, 8, 4, 16, 256
+};
+
+static const unsigned int cnd001_bar_sizes0[] = {
+ 8, 4, 8, 4, 16, 0
+};
+
+static const unsigned int cnd001_bar_sizes1[] = {
+ 8, 4, 0, 0, 16, 0
+};
+
+static int cnd001_prepare_host(struct pci_dev *pdev, struct ata_host **r_host)
+{
+ const struct ata_port_info *ppi0[] = {
+ &cnd001_port_info, NULL
+ };
+ const struct ata_port_info *ppi1[] = {
+ &cnd001_port_info, &ata_dummy_port_info
+ };
+ struct ata_host *host;
+ int i, rc;
+
+ if (pdev->device == 0x9002)
+ rc = ata_pci_bmdma_prepare_host(pdev, ppi0, &host);
+ else if (pdev->device == 0x9003)
+ rc = ata_pci_bmdma_prepare_host(pdev, ppi1, &host);
+ else
+ rc = -EINVAL;
+
+ if (rc)
+ return rc;
+
+ *r_host = host;
+
+ /* cnd001 9002 hosts four sata ports as M/S of the two channels */
+ /* cnd001 9003 hosts two sata ports as M/S of the one channel */
+ for (i = 0; i < host->n_ports; i++)
+ ata_slave_link_init(host->ports[i]);
+
+ return 0;
+}
+
+static void szx_configure(struct pci_dev *pdev, int board_id)
+{
+ u8 tmp8;
+
+ pci_read_config_byte(pdev, PCI_INTERRUPT_LINE, &tmp8);
+ dev_info(&pdev->dev, "routed to hard irq line %d\n",
+ (int) (tmp8 & 0xf0) == 0xf0 ? 0 : tmp8 & 0x0f);
+
+ /* make sure SATA channels are enabled */
+ pci_read_config_byte(pdev, SATA_CHAN_ENAB, &tmp8);
+ if ((tmp8 & ALL_PORTS) != ALL_PORTS) {
+ dev_dbg(&pdev->dev, "enabling SATA channels (0x%x)\n",
+ (int)tmp8);
+ tmp8 |= ALL_PORTS;
+ pci_write_config_byte(pdev, SATA_CHAN_ENAB, tmp8);
+ }
+
+ /* make sure interrupts for each channel sent to us */
+ pci_read_config_byte(pdev, SATA_INT_GATE, &tmp8);
+ if ((tmp8 & ALL_PORTS) != ALL_PORTS) {
+ dev_dbg(&pdev->dev, "enabling SATA channel interrupts (0x%x)\n",
+ (int) tmp8);
+ tmp8 |= ALL_PORTS;
+ pci_write_config_byte(pdev, SATA_INT_GATE, tmp8);
+ }
+
+ /* make sure native mode is enabled */
+ pci_read_config_byte(pdev, SATA_NATIVE_MODE, &tmp8);
+ if ((tmp8 & NATIVE_MODE_ALL) != NATIVE_MODE_ALL) {
+ dev_dbg(&pdev->dev,
+ "enabling SATA channel native mode (0x%x)\n",
+ (int) tmp8);
+ tmp8 |= NATIVE_MODE_ALL;
+ pci_write_config_byte(pdev, SATA_NATIVE_MODE, tmp8);
+ }
+}
+
+static int szx_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+{
+ unsigned int i;
+ int rc;
+ struct ata_host *host = NULL;
+ int board_id = (int) ent->driver_data;
+ const unsigned int *bar_sizes;
+ int legacy_mode = 0;
+
+ ata_print_version_once(&pdev->dev, DRV_VERSION);
+
+ if (pdev->device == 0x9002 || pdev->device == 0x9003) {
+ if ((pdev->class >> 8) == PCI_CLASS_STORAGE_IDE) {
+ u8 tmp8, mask;
+
+ /* TODO: What if one channel is in native mode ... */
+ pci_read_config_byte(pdev, PCI_CLASS_PROG, &tmp8);
+ mask = (1 << 2) | (1 << 0);
+ if ((tmp8 & mask) != mask)
+ legacy_mode = 1;
+ }
+ if (legacy_mode)
+ return -EINVAL;
+ }
+
+ rc = pcim_enable_device(pdev);
+ if (rc)
+ return rc;
+
+ if (board_id == cnd001 && pdev->device == 0x9002)
+ bar_sizes = &cnd001_bar_sizes0[0];
+ else if (board_id == cnd001 && pdev->device == 0x9003)
+ bar_sizes = &cnd001_bar_sizes1[0];
+ else
+ bar_sizes = &szx_bar_sizes[0];
+
+ for (i = 0; i < ARRAY_SIZE(szx_bar_sizes); i++) {
+ if ((pci_resource_start(pdev, i) == 0) ||
+ (pci_resource_len(pdev, i) < bar_sizes[i])) {
+ if (bar_sizes[i] == 0)
+ continue;
+
+ dev_err(&pdev->dev,
+ "invalid PCI BAR %u (sz 0x%llx, val 0x%llx)\n",
+ i,
+ (unsigned long long)pci_resource_start(pdev, i),
+ (unsigned long long)pci_resource_len(pdev, i));
+
+ return -ENODEV;
+ }
+ }
+
+ switch (board_id) {
+ case cnd001:
+ rc = cnd001_prepare_host(pdev, &host);
+ break;
+ default:
+ rc = -EINVAL;
+ }
+ if (rc)
+ return rc;
+
+ szx_configure(pdev, board_id);
+
+ pci_set_master(pdev);
+ return ata_host_activate(host, pdev->irq, ata_bmdma_interrupt,
+ IRQF_SHARED, &szx_sht);
+}
+
+module_pci_driver(szx_pci_driver);
+
+MODULE_AUTHOR("Yanchen:YanchenSun@zhaoxin.com");
+MODULE_DESCRIPTION("SCSI low-level driver for ZX SATA controllers");
+MODULE_LICENSE("GPL");
+MODULE_DEVICE_TABLE(pci, szx_pci_tbl);
+MODULE_VERSION(DRV_VERSION);
--
2.20.1
2
1

【Meeting Notice】openEuler kernel sig meeting Time: 2021-04-02 14:00-16:00
by Meeting Book 30 Mar '21
by Meeting Book 30 Mar '21
30 Mar '21
2
1

[PATCH kernel-4.19 v2] USB:Fix kernel NULL pointer when unbind UHCI form vfio-pci
by LeoLiuoc 29 Mar '21
by LeoLiuoc 29 Mar '21
29 Mar '21
This bug is found on the Zhaoxin platform, but it's a common code bug.
Fail sequence:
step1: Unbind the UHCI controller from the native driver;
step2: Bind the UHCI controller to vfio-pci, which puts the UHCI
controller in one vfio group's device list and sets UHCI's
dev->driver_data to struct vfio-pci (for UHCI);
step3: Unbind the EHCI controller from the native driver, which tries
to tell the UHCI native driver "I'm removed" by setting
companion_hcd->self.hs_companion to NULL. However, companion_hcd
is taken from UHCI's dev->driver_data, which has already been
modified by vfio-pci. So the vfio-pci structure will be damaged!
step4: Bind the EHCI controller to the vfio-pci driver, which puts the
EHCI controller in the same vfio group as the UHCI controller;
... ...
step5: Unbind the UHCI controller from vfio-pci, which deletes UHCI
from the vfio group's device list that was damaged in step 3. So
the delete operation can randomly result in a NULL pointer
dereference with the below stack dump;
step6: Bind the UHCI controller to the native driver;
step7: Unbind the EHCI controller from vfio-pci, which tries to remove
the EHCI controller from the vfio group;
step8: Bind the EHCI controller to the native driver;
[ 929.114641] uhci_hcd 0000:00:10.0: remove, state 1
[ 929.114652] usb usb1: USB disconnect, device number 1
[ 929.114655] usb 1-1: USB disconnect, device number 2
[ 929.270313] usb 1-2: USB disconnect, device number 3
[ 929.318404] uhci_hcd 0000:00:10.0: USB bus 1 deregistered
[ 929.343029] uhci_hcd 0000:00:10.1: remove, state 4
[ 929.343045] usb usb3: USB disconnect, device number 1
[ 929.343685] uhci_hcd 0000:00:10.1: USB bus 3 deregistered
[ 929.369087] ehci-pci 0000:00:10.7: remove, state 4
[ 929.369102] usb usb4: USB disconnect, device number 1
[ 929.370325] ehci-pci 0000:00:10.7: USB bus 4 deregistered
[ 932.398494] BUG: unable to handle kernel NULL pointer dereference at 0000000000000000
[ 932.398496] PGD 42a67d067 P4D 42a67d067 PUD 42a65f067 PMD 0
[ 932.398502] Oops: 0002 [#2] SMP NOPTI
[ 932.398505] CPU: 2 PID: 7824 Comm: vfio_unbind.sh Tainted: P D 4.19.65-2020051917-rainos #1
[ 932.398506] Hardware name: Shanghai Zhaoxin Semiconductor Co., Ltd. HX002EH/HX002EH, BIOS HX002EH0_01_R480_R_200408 04/08/2020
[ 932.398513] RIP: 0010:vfio_device_put+0x31/0xa0 [vfio]
[ 932.398515] Code: 89 e5 41 54 53 4c 8b 67 18 48 89 fb 49 8d 74 24 30 e8 e3 0e f3 de 84 c0 74 67 48 8b 53 20 48 8b 43 28 48 8b 7b 18 48 89 42 08 <48> 89 10 48 b8 00 01 00 00 00 00 ad de 48 89 43 20 48 b8 00 02 00
[ 932.398516] RSP: 0018:ffffbbfd04cffc18 EFLAGS: 00010202
[ 932.398518] RAX: 0000000000000000 RBX: ffff92c7ea717880 RCX: 0000000000000000
[ 932.398519] RDX: ffff92c7ea713620 RSI: ffff92c7ea713630 RDI: ffff92c7ea713600
[ 932.398521] RBP: ffffbbfd04cffc28 R08: ffff92c7f02a8080 R09: ffff92c7efc03980
[ 932.398522] R10: ffffbbfd04cff9a8 R11: 0000000000000000 R12: ffff92c7ea713600
[ 932.398523] R13: ffff92c7ed8bb0a8 R14: ffff92c7ea717880 R15: 0000000000000000
[ 932.398525] FS: 00007f3031500740(0000) GS:ffff92c7f0280000(0000) knlGS:0000000000000000
[ 932.398526] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 932.398527] CR2: 0000000000000000 CR3: 0000000428626004 CR4: 0000000000160ee0
[ 932.398528] Call Trace:
[ 932.398534] vfio_del_group_dev+0xe8/0x2a0 [vfio]
[ 932.398539] ? __blocking_notifier_call_chain+0x52/0x60
[ 932.398542] ? do_wait_intr_irq+0x90/0x90
[ 932.398546] ? iommu_bus_notifier+0x75/0x100
[ 932.398551] vfio_pci_remove+0x20/0xa0 [vfio_pci]
[ 932.398554] pci_device_remove+0x3e/0xc0
[ 932.398557] device_release_driver_internal+0x17a/0x240
[ 932.398560] device_release_driver+0x12/0x20
[ 932.398561] unbind_store+0xee/0x180
[ 932.398564] drv_attr_store+0x27/0x40
[ 932.398567] sysfs_kf_write+0x3c/0x50
[ 932.398568] kernfs_fop_write+0x125/0x1a0
[ 932.398572] __vfs_write+0x3a/0x190
[ 932.398575] ? apparmor_file_permission+0x1a/0x20
[ 932.398577] ? security_file_permission+0x3b/0xc0
[ 932.398581] ? _cond_resched+0x1a/0x50
[ 932.398582] vfs_write+0xb8/0x1b0
[ 932.398584] ksys_write+0x5c/0xe0
[ 932.398586] __x64_sys_write+0x1a/0x20
[ 932.398589] do_syscall_64+0x5a/0x110
[ 932.398592] entry_SYSCALL_64_after_hwframe+0x44/0xa9
Using virt-manager/qemu to boot a guest OS, we can see the same fail sequence!
Fix this by determining whether the PCI driver of the USB controller
is a kernel native driver. If not, do not let it modify UHCI's
dev->driver_data.
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
drivers/usb/core/hcd-pci.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/drivers/usb/core/hcd-pci.c b/drivers/usb/core/hcd-pci.c
index 7537681355f6..c3cddaab708d 100644
--- a/drivers/usb/core/hcd-pci.c
+++ b/drivers/usb/core/hcd-pci.c
@@ -49,6 +49,7 @@ static void for_each_companion(struct pci_dev *pdev, struct usb_hcd *hcd,
struct pci_dev *companion;
struct usb_hcd *companion_hcd;
unsigned int slot = PCI_SLOT(pdev->devfn);
+ struct pci_driver *drv;
/*
* Iterate through other PCI functions in the same slot.
@@ -61,6 +62,15 @@ static void for_each_companion(struct pci_dev *pdev, struct usb_hcd *hcd,
PCI_SLOT(companion->devfn) != slot)
continue;
+ drv = companion->driver;
+ if (!drv)
+ continue;
+
+ if (strncmp(drv->name, "uhci_hcd", sizeof("uhci_hcd") - 1) &&
+ strncmp(drv->name, "ooci_hcd", sizeof("uhci_hcd") - 1) &&
+ strncmp(drv->name, "ehci_hcd", sizeof("uhci_hcd") - 1))
+ continue;
+
/*
* Companion device should be either UHCI,OHCI or EHCI host
* controller, otherwise skip.
--
2.20.1
1
0

29 Mar '21
During the polling phase after some device is plugged into a type-c
port, if polling times out three times, the link state goes to
inactive. However, this event is not handled by the driver, so the
device can't be recognized. To fix this issue, if a port link state is
detected as inactive, record this event so that a warm reset is
triggered to bring the device up to be identified by the driver.
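For reference, the inactive-state test itself is a one-liner against the standard ch11 encodings (a sketch; hub_port_warm_reset_required() in hub.c performs an equivalent check):

#include <linux/usb/ch11.h>

static bool port_link_inactive(u16 portstatus)
{
        /* USB_PORT_STAT_LINK_STATE masks bits 5..8 of wPortStatus */
        return (portstatus & USB_PORT_STAT_LINK_STATE) ==
               USB_SS_PORT_LS_SS_INACTIVE;
}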
The patch is scheduled to be submitted to the kernel mainline in 2021.
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
drivers/usb/core/hub.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
index fa28f23a4a33..302caa1ea345 100644
--- a/drivers/usb/core/hub.c
+++ b/drivers/usb/core/hub.c
@@ -1135,6 +1135,11 @@ static void hub_activate(struct usb_hub *hub, enum hub_activation_type type)
USB_SS_PORT_LS_POLLING))
need_debounce_delay = true;
+ /* Make sure a warm-reset request is handled by port_event */
+ if (type == HUB_RESUME &&
+ hub_port_warm_reset_required(hub, port1, portstatus))
+ set_bit(port1, hub->event_bits);
+
/* Clear status-change flags; we'll debounce later */
if (portchange & USB_PORT_STAT_C_CONNECTION) {
need_debounce_delay = true;
--
2.20.1
3
5

29 Mar '21
openEuler inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I3D58V
CVE: NA
----------------------------------
No unlock operation is performed on the mpam_devices_lock before the return statement, which may lead to a deadlock.
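An equivalent single-unlock shape of the fixed flow (a sketch for clarity, not the applied diff):

        mutex_lock(&mpam_devices_lock);
        mpam_enable_squash_features();
        err = mpam_allocate_config();
        mutex_unlock(&mpam_devices_lock); /* now unlocked on both paths */
        if (err)
                return;
        mpam_enable_irqs();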
Signed-off-by: Zhang Ming <154842638(a)qq.com>
Reported-by: Jian Cheng <cj.chengjian(a)huawei.com>
Suggested-by: Jian Cheng <cj.chengjian(a)huawei.com>
---
arch/arm64/kernel/mpam/mpam_device.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/mpam/mpam_device.c b/arch/arm64/kernel/mpam/mpam_device.c
index fc7aa1ae0b82..c08ff933db5a 100644
--- a/arch/arm64/kernel/mpam/mpam_device.c
+++ b/arch/arm64/kernel/mpam/mpam_device.c
@@ -560,8 +560,10 @@ static void __init mpam_enable(struct work_struct *work)
mutex_lock(&mpam_devices_lock);
mpam_enable_squash_features();
err = mpam_allocate_config();
- if (err)
+ if (err) {
+ mutex_unlock(&mpam_devices_lock);
return;
+ }
mutex_unlock(&mpam_devices_lock);
mpam_enable_irqs();
--
2.25.1
4
3
订阅 (subscribe)
2
1
help
1
0

[PATCH kernel-4.19] USB:Fix kernel NULL pointer when unbind UHCI form vfio-pci
by LeoLiu-oc 29 Mar '21
by LeoLiu-oc 29 Mar '21
29 Mar '21
This bug is found on the Zhaoxin platform, but it's a common code bug.
Fail sequence:
step1: Unbind the UHCI controller from the native driver;
step2: Bind the UHCI controller to vfio-pci, which puts the UHCI
controller in one vfio group's device list and sets UHCI's
dev->driver_data to struct vfio-pci (for UHCI);
step3: Unbind the EHCI controller from the native driver, which tries
to tell the UHCI native driver "I'm removed" by setting
companion_hcd->self.hs_companion to NULL. However, companion_hcd
is taken from UHCI's dev->driver_data, which has already been
modified by vfio-pci. So the vfio-pci structure will be damaged!
step4: Bind the EHCI controller to the vfio-pci driver, which puts the
EHCI controller in the same vfio group as the UHCI controller;
... ...
step5: Unbind the UHCI controller from vfio-pci, which deletes UHCI
from the vfio group's device list that was damaged in step 3. So
the delete operation can randomly result in a NULL pointer
dereference with the below stack dump;
step6: Bind the UHCI controller to the native driver;
step7: Unbind the EHCI controller from vfio-pci, which tries to remove
the EHCI controller from the vfio group;
step8: Bind the EHCI controller to the native driver;
[ 929.114641] uhci_hcd 0000:00:10.0: remove, state 1
[ 929.114652] usb usb1: USB disconnect, device number 1
[ 929.114655] usb 1-1: USB disconnect, device number 2
[ 929.270313] usb 1-2: USB disconnect, device number 3
[ 929.318404] uhci_hcd 0000:00:10.0: USB bus 1 deregistered
[ 929.343029] uhci_hcd 0000:00:10.1: remove, state 4
[ 929.343045] usb usb3: USB disconnect, device number 1
[ 929.343685] uhci_hcd 0000:00:10.1: USB bus 3 deregistered
[ 929.369087] ehci-pci 0000:00:10.7: remove, state 4
[ 929.369102] usb usb4: USB disconnect, device number 1
[ 929.370325] ehci-pci 0000:00:10.7: USB bus 4 deregistered
[ 932.398494] BUG: unable to handle kernel NULL pointer dereference at 0000000000000000
[ 932.398496] PGD 42a67d067 P4D 42a67d067 PUD 42a65f067 PMD 0
[ 932.398502] Oops: 0002 [#2] SMP NOPTI
[ 932.398505] CPU: 2 PID: 7824 Comm: vfio_unbind.sh Tainted: P D 4.19.65-2020051917-rainos #1
[ 932.398506] Hardware name: Shanghai Zhaoxin Semiconductor Co., Ltd. HX002EH/HX002EH, BIOS HX002EH0_01_R480_R_200408 04/08/2020
[ 932.398513] RIP: 0010:vfio_device_put+0x31/0xa0 [vfio]
[ 932.398515] Code: 89 e5 41 54 53 4c 8b 67 18 48 89 fb 49 8d 74 24 30 e8 e3 0e f3 de 84 c0 74 67 48 8b 53 20 48 8b 43 28 48 8b 7b 18 48 89 42 08 <48> 89 10 48 b8 00 01 00 00 00 00 ad de 48 89 43 20 48 b8 00 02 00
[ 932.398516] RSP: 0018:ffffbbfd04cffc18 EFLAGS: 00010202
[ 932.398518] RAX: 0000000000000000 RBX: ffff92c7ea717880 RCX: 0000000000000000
[ 932.398519] RDX: ffff92c7ea713620 RSI: ffff92c7ea713630 RDI: ffff92c7ea713600
[ 932.398521] RBP: ffffbbfd04cffc28 R08: ffff92c7f02a8080 R09: ffff92c7efc03980
[ 932.398522] R10: ffffbbfd04cff9a8 R11: 0000000000000000 R12: ffff92c7ea713600
[ 932.398523] R13: ffff92c7ed8bb0a8 R14: ffff92c7ea717880 R15: 0000000000000000
[ 932.398525] FS: 00007f3031500740(0000) GS:ffff92c7f0280000(0000) knlGS:0000000000000000
[ 932.398526] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 932.398527] CR2: 0000000000000000 CR3: 0000000428626004 CR4: 0000000000160ee0
[ 932.398528] Call Trace:
[ 932.398534] vfio_del_group_dev+0xe8/0x2a0 [vfio]
[ 932.398539] ? __blocking_notifier_call_chain+0x52/0x60
[ 932.398542] ? do_wait_intr_irq+0x90/0x90
[ 932.398546] ? iommu_bus_notifier+0x75/0x100
[ 932.398551] vfio_pci_remove+0x20/0xa0 [vfio_pci]
[ 932.398554] pci_device_remove+0x3e/0xc0
[ 932.398557] device_release_driver_internal+0x17a/0x240
[ 932.398560] device_release_driver+0x12/0x20
[ 932.398561] unbind_store+0xee/0x180
[ 932.398564] drv_attr_store+0x27/0x40
[ 932.398567] sysfs_kf_write+0x3c/0x50
[ 932.398568] kernfs_fop_write+0x125/0x1a0
[ 932.398572] __vfs_write+0x3a/0x190
[ 932.398575] ? apparmor_file_permission+0x1a/0x20
[ 932.398577] ? security_file_permission+0x3b/0xc0
[ 932.398581] ? _cond_resched+0x1a/0x50
[ 932.398582] vfs_write+0xb8/0x1b0
[ 932.398584] ksys_write+0x5c/0xe0
[ 932.398586] __x64_sys_write+0x1a/0x20
[ 932.398589] do_syscall_64+0x5a/0x110
[ 932.398592] entry_SYSCALL_64_after_hwframe+0x44/0xa9
Using virt-manager/qemu to boot a guest OS, we can see the same fail sequence!
Fix this by checking whether the UHCI driver is loaded before
modifying UHCI's dev->driver_data, which happens in the EHCI native
driver's probe/remove.
This patch was submitted to the mainline kernel but was not accepted
by the upstream maintainer, whose reason is "Given that it's currently
needed in only one place, it seems reasonable to leave this as a
"gentlemen's agreement" in userspace for the time being instead of
adding it to the kernel."
We think the kernel driver should fix this bug regardless of userspace
behavior.
https://lkml.org/lkml/2020/7/22/493
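The flag idea in a self-contained sketch (PCI_DEV_DRV_FLAG is a bit private to this patch, not a generic kernel definition):

#include <linux/pci.h>

#define PCI_DEV_DRV_FLAG 2 /* set while a native HCD driver is bound */

static bool native_hcd_bound(struct pci_dev *companion)
{
        return companion->priv_flags & PCI_DEV_DRV_FLAG;
}

Probe sets the bit, remove clears it, and for_each_companion() skips any function for which native_hcd_bound() is false, so drvdata owned by vfio-pci is never dereferenced.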
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
drivers/usb/core/hcd-pci.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/drivers/usb/core/hcd-pci.c b/drivers/usb/core/hcd-pci.c
index 7537681355f6..57ac942acf12 100644
--- a/drivers/usb/core/hcd-pci.c
+++ b/drivers/usb/core/hcd-pci.c
@@ -34,6 +34,8 @@ static DECLARE_RWSEM(companions_rwsem);
#define CL_OHCI PCI_CLASS_SERIAL_USB_OHCI
#define CL_EHCI PCI_CLASS_SERIAL_USB_EHCI
+#define PCI_DEV_DRV_FLAG 2
+
static inline int is_ohci_or_uhci(struct pci_dev *pdev)
{
return pdev->class == CL_OHCI || pdev->class == CL_UHCI;
@@ -69,6 +71,9 @@ static void for_each_companion(struct pci_dev *pdev, struct usb_hcd *hcd,
companion->class != CL_EHCI)
continue;
+ if (!(companion->priv_flags & PCI_DEV_DRV_FLAG))
+ continue;
+
companion_hcd = pci_get_drvdata(companion);
if (!companion_hcd || !companion_hcd->self.root_hub)
continue;
@@ -253,6 +258,7 @@ int usb_hcd_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
}
pci_set_master(dev);
+ dev->priv_flags |= PCI_DEV_DRV_FLAG;
/* Note: dev_set_drvdata must be called while holding the rwsem */
if (dev->class == CL_EHCI) {
@@ -325,6 +331,7 @@ void usb_hcd_pci_remove(struct pci_dev *dev)
local_irq_disable();
usb_hcd_irq(0, hcd);
local_irq_enable();
+ dev->priv_flags &= ~PCI_DEV_DRV_FLAG;
/* Note: dev_set_drvdata must be called while holding the rwsem */
if (dev->class == CL_EHCI) {
--
2.20.1
3
4

29 Mar '21
bugfix for openEuler 20.03 @20210329
Dan Carpenter (2):
net/x25: prevent a couple of overflows
staging: rtl8188eu: prevent ->ssid overflow in rtw_wx_set_scan()
Dave Airlie (1):
drm/ttm/nouveau: don't call tt destroy callback on alloc failure.
Filipe Manana (1):
btrfs: fix race when cloning extent buffer during rewind of an old
root
Kan Liang (1):
perf/x86/intel: Fix a crash caused by zero PEBS status
Li ZhiGang (1):
staging: TCM: add GMJS(Nationz Tech) TCM driver.
Liu Shixin (1):
mm/vmscan: fix uncleaned mem_cgroup_uncharge
Lu Jialin (1):
cgroup: Fix kabi broken by files_cgroup introduced
Piotr Krysiuk (2):
bpf: Prohibit alu ops for pointer types not defining ptr_limit
bpf: Fix off-by-one for area size in creating mask to left
Tyrel Datwyler (1):
PCI: rpadlpar: Fix potential drc_name corruption in store functions
Yang Yingliang (1):
config: enable config TXGBE by default
Zhang Ming (1):
arm64/mpam: fix a possible deadlock in mpam_enable
Zhen Lei (1):
config: arm64: build TCM driver to modules by default
zhenpengzheng (2):
net: txgbe: Add support for Netswift 10G NIC
x86/config: Set CONFIG_TXGBE=m by default
arch/arm64/configs/hulk_defconfig | 2 +
arch/arm64/configs/openeuler_defconfig | 3 +
arch/arm64/kernel/mpam/mpam_device.c | 4 +-
arch/x86/configs/openeuler_defconfig | 2 +
arch/x86/events/intel/ds.c | 2 +-
drivers/gpu/drm/nouveau/nouveau_sgdma.c | 9 +-
drivers/gpu/drm/ttm/ttm_tt.c | 3 -
drivers/net/ethernet/Kconfig | 1 +
drivers/net/ethernet/Makefile | 1 +
drivers/net/ethernet/netswift/Kconfig | 20 +
drivers/net/ethernet/netswift/Makefile | 6 +
drivers/net/ethernet/netswift/txgbe/Kconfig | 13 +
drivers/net/ethernet/netswift/txgbe/Makefile | 11 +
drivers/net/ethernet/netswift/txgbe/txgbe.h | 1260 +++
.../net/ethernet/netswift/txgbe/txgbe_bp.c | 875 ++
.../net/ethernet/netswift/txgbe/txgbe_bp.h | 41 +
.../net/ethernet/netswift/txgbe/txgbe_dcb.h | 30 +
.../ethernet/netswift/txgbe/txgbe_ethtool.c | 3381 +++++++
.../net/ethernet/netswift/txgbe/txgbe_hw.c | 7072 +++++++++++++++
.../net/ethernet/netswift/txgbe/txgbe_hw.h | 264 +
.../net/ethernet/netswift/txgbe/txgbe_lib.c | 959 ++
.../net/ethernet/netswift/txgbe/txgbe_main.c | 8045 +++++++++++++++++
.../net/ethernet/netswift/txgbe/txgbe_mbx.c | 399 +
.../net/ethernet/netswift/txgbe/txgbe_mbx.h | 171 +
.../net/ethernet/netswift/txgbe/txgbe_mtd.c | 1366 +++
.../net/ethernet/netswift/txgbe/txgbe_mtd.h | 1540 ++++
.../net/ethernet/netswift/txgbe/txgbe_param.c | 1191 +++
.../net/ethernet/netswift/txgbe/txgbe_phy.c | 1014 +++
.../net/ethernet/netswift/txgbe/txgbe_phy.h | 190 +
.../net/ethernet/netswift/txgbe/txgbe_ptp.c | 884 ++
.../net/ethernet/netswift/txgbe/txgbe_type.h | 3213 +++++++
drivers/pci/hotplug/rpadlpar_sysfs.c | 14 +-
drivers/staging/Kconfig | 2 +
drivers/staging/Makefile | 1 +
drivers/staging/gmjstcm/Kconfig | 21 +
drivers/staging/gmjstcm/Makefile | 3 +
drivers/staging/gmjstcm/tcm.c | 949 ++
drivers/staging/gmjstcm/tcm.h | 122 +
drivers/staging/gmjstcm/tcm_tis_spi.c | 847 ++
.../staging/rtl8188eu/os_dep/ioctl_linux.c | 6 +-
fs/btrfs/ctree.c | 2 +
include/linux/cgroup_subsys.h | 2 +
kernel/bpf/verifier.c | 20 +-
kernel/cgroup/cgroup.c | 6 +
mm/vmscan.c | 1 -
net/x25/af_x25.c | 6 +-
46 files changed, 33942 insertions(+), 32 deletions(-)
create mode 100644 drivers/net/ethernet/netswift/Kconfig
create mode 100644 drivers/net/ethernet/netswift/Makefile
create mode 100644 drivers/net/ethernet/netswift/txgbe/Kconfig
create mode 100644 drivers/net/ethernet/netswift/txgbe/Makefile
create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe.h
create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_bp.c
create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_bp.h
create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_dcb.h
create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_ethtool.c
create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_hw.c
create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_hw.h
create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_lib.c
create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_main.c
create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_mbx.c
create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_mbx.h
create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_mtd.c
create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_mtd.h
create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_param.c
create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_phy.c
create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_phy.h
create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_ptp.c
create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_type.h
create mode 100644 drivers/staging/gmjstcm/Kconfig
create mode 100644 drivers/staging/gmjstcm/Makefile
create mode 100644 drivers/staging/gmjstcm/tcm.c
create mode 100644 drivers/staging/gmjstcm/tcm.h
create mode 100644 drivers/staging/gmjstcm/tcm_tis_spi.c
--
2.25.1
1
16
1
0

[PATCH kernel-4.19] iommu/vt-d:Add support for detecting ACPI device in RMRR
by LeoLiu-oc 27 Mar '21
by LeoLiu-oc 27 Mar '21
27 Mar '21
Some ACPI devices need to issue DMA requests to access the reserved
memory area. BIOS uses the device scope type ACPI_NAMESPACE_DEVICE in
RMRR to report these ACPI devices.
This patch adds support for detecting ACPI devices in RMRR and, in
order to distinguish them from PCI devices, modifies some interface
functions.
This patch was submitted to mainline kernel but not accepted by upstream
maintainer whose reason is "As I explained in the previous reply, RMRRs
were added as work around for certain legacy device and we have been
working hard to fix those legacy devices so that RMRR are no longer
needed. Any new use case of RMRR is not encouraged".
The VT-d 1.3/2.5/3.0 specifications cover this case, so we think this
Intel driver should support it too.
https://lkml.org/lkml/2020/10/10/56
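The matching itself is a short walk over the variable-length device-scope entries, sketched here (paraphrasing dmar_acpi_insert_dev_scope() in the diff below):

        struct acpi_dmar_device_scope *scope;

        for (; start < end; start += scope->length) {
                scope = start;
                if (scope->entry_type != ACPI_DMAR_SCOPE_TYPE_NAMESPACE)
                        continue;
                if (scope->enumeration_id != device_number)
                        continue;
                /* matched: record bus/devfn from the trailing ACPI path */
        }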
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
drivers/iommu/dmar.c | 75 +++++++++++++++++++++----------------
drivers/iommu/intel-iommu.c | 24 +++++++++++-
include/linux/dmar.h | 11 +++++-
3 files changed, 75 insertions(+), 35 deletions(-)
diff --git a/drivers/iommu/dmar.c b/drivers/iommu/dmar.c
index 3f0c2c1ef0cb..9a07bcad38e5 100644
--- a/drivers/iommu/dmar.c
+++ b/drivers/iommu/dmar.c
@@ -226,7 +226,7 @@ static bool dmar_match_pci_path(struct dmar_pci_notify_info *info, int bus,
}
/* Return: > 0 if match found, 0 if no match found, < 0 if error happens */
-int dmar_insert_dev_scope(struct dmar_pci_notify_info *info,
+int dmar_pci_insert_dev_scope(struct dmar_pci_notify_info *info,
void *start, void*end, u16 segment,
struct dmar_dev_scope *devices,
int devices_cnt)
@@ -315,7 +315,7 @@ static int dmar_pci_bus_add_dev(struct dmar_pci_notify_info *info)
drhd = container_of(dmaru->hdr,
struct acpi_dmar_hardware_unit, header);
- ret = dmar_insert_dev_scope(info, (void *)(drhd + 1),
+ ret = dmar_pci_insert_dev_scope(info, (void *)(drhd + 1),
((void *)drhd) + drhd->header.length,
dmaru->segment,
dmaru->devices, dmaru->devices_cnt);
@@ -707,47 +707,58 @@ dmar_find_matched_drhd_unit(struct pci_dev *dev)
return dmaru;
}
-static void __init dmar_acpi_insert_dev_scope(u8 device_number,
- struct acpi_device *adev)
+/* Return: > 0 if match found, 0 if no match found */
+bool dmar_acpi_insert_dev_scope(u8 device_number,
+ struct acpi_device *adev,
+ void *start, void *end,
+ struct dmar_dev_scope *devices,
+ int devices_cnt)
{
- struct dmar_drhd_unit *dmaru;
- struct acpi_dmar_hardware_unit *drhd;
struct acpi_dmar_device_scope *scope;
struct device *tmp;
int i;
struct acpi_dmar_pci_path *path;
+ for (; start < end; start += scope->length) {
+ scope = start;
+ if (scope->entry_type != ACPI_DMAR_SCOPE_TYPE_NAMESPACE)
+ continue;
+ if (scope->enumeration_id != device_number)
+ continue;
+ path = (void *)(scope + 1);
+ for_each_dev_scope(devices, devices_cnt, i, tmp)
+ if (tmp == NULL) {
+ devices[i].bus = scope->bus;
+ devices[i].devfn = PCI_DEVFN(path->device, path->function);
+ rcu_assign_pointer(devices[i].dev,
+ get_device(&adev->dev));
+ return true;
+ }
+ WARN_ON(i >= devices_cnt);
+ }
+ return false;
+}
+
+static int dmar_acpi_bus_add_dev(u8 device_number, struct acpi_device *adev)
+{
+ struct dmar_drhd_unit *dmaru;
+ struct acpi_dmar_hardware_unit *drhd;
+ int ret;
+
for_each_drhd_unit(dmaru) {
drhd = container_of(dmaru->hdr,
struct acpi_dmar_hardware_unit,
header);
- for (scope = (void *)(drhd + 1);
- (unsigned long)scope < ((unsigned long)drhd) + drhd->header.length;
- scope = ((void *)scope) + scope->length) {
- if (scope->entry_type != ACPI_DMAR_SCOPE_TYPE_NAMESPACE)
- continue;
- if (scope->enumeration_id != device_number)
- continue;
-
- path = (void *)(scope + 1);
- pr_info("ACPI device \"%s\" under DMAR at %llx as %02x:%02x.%d\n",
- dev_name(&adev->dev), dmaru->reg_base_addr,
- scope->bus, path->device, path->function);
- for_each_dev_scope(dmaru->devices, dmaru->devices_cnt, i, tmp)
- if (tmp == NULL) {
- dmaru->devices[i].bus = scope->bus;
- dmaru->devices[i].devfn = PCI_DEVFN(path->device,
- path->function);
- rcu_assign_pointer(dmaru->devices[i].dev,
- get_device(&adev->dev));
- return;
- }
- BUG_ON(i >= dmaru->devices_cnt);
- }
+ ret = dmar_acpi_insert_dev_scope(device_number, adev, (void *)(drhd+1),
+ ((void *)drhd)+drhd->header.length,
+ dmaru->devices, dmaru->devices_cnt);
+ if (ret)
+ break;
}
- pr_warn("No IOMMU scope found for ANDD enumeration ID %d (%s)\n",
- device_number, dev_name(&adev->dev));
+ if (ret > 0)
+ ret = dmar_rmrr_add_acpi_dev(device_number, adev);
+ return ret;
}
static int __init dmar_acpi_dev_scope_init(void)
@@ -776,7 +787,7 @@ static int __init dmar_acpi_dev_scope_init(void)
andd->device_name);
continue;
}
- dmar_acpi_insert_dev_scope(andd->device_number, adev);
+ dmar_acpi_bus_add_dev(andd->device_number, adev);
}
}
return 0;
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index f51ae0086786..18e0be8e05a5 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -4512,6 +4512,26 @@ int dmar_find_matched_atsr_unit(struct pci_dev *dev)
return ret;
}
+int dmar_rmrr_add_acpi_dev(u8 device_number, struct acpi_device *adev)
+{
+ int ret;
+ struct dmar_rmrr_unit *rmrru;
+ struct acpi_dmar_reserved_memory *rmrr;
+
+ list_for_each_entry(rmrru, &dmar_rmrr_units, list) {
+ rmrr = container_of(rmrru->hdr,
+ struct acpi_dmar_reserved_memory,
+ header);
+ ret = dmar_acpi_insert_dev_scope(device_number, adev, (void *)(rmrr + 1),
+ ((void *)rmrr) + rmrr->header.length,
+ rmrru->devices, rmrru->devices_cnt);
+ if (ret)
+ break;
+ }
+ pr_info("Add acpi_dev:%s to rmrru->devices\n", dev_name(&adev->dev));
+ return 0;
+}
+
int dmar_iommu_notify_scope_dev(struct dmar_pci_notify_info *info)
{
int ret = 0;
@@ -4527,7 +4547,7 @@ int dmar_iommu_notify_scope_dev(struct dmar_pci_notify_info *info)
rmrr = container_of(rmrru->hdr,
struct acpi_dmar_reserved_memory, header);
if (info->event == BUS_NOTIFY_ADD_DEVICE) {
- ret = dmar_insert_dev_scope(info, (void *)(rmrr + 1),
+ ret = dmar_pci_insert_dev_scope(info, (void *)(rmrr + 1),
((void *)rmrr) + rmrr->header.length,
rmrr->segment, rmrru->devices,
rmrru->devices_cnt);
@@ -4545,7 +4565,7 @@ int dmar_iommu_notify_scope_dev(struct dmar_pci_notify_info *info)
atsr = container_of(atsru->hdr, struct acpi_dmar_atsr, header);
if (info->event == BUS_NOTIFY_ADD_DEVICE) {
- ret = dmar_insert_dev_scope(info, (void *)(atsr + 1),
+ ret = dmar_pci_insert_dev_scope(info, (void *)(atsr + 1),
(void *)atsr + atsr->header.length,
atsr->segment, atsru->devices,
atsru->devices_cnt);
diff --git a/include/linux/dmar.h b/include/linux/dmar.h
index 843a41ba7e28..68de8732d8d4 100644
--- a/include/linux/dmar.h
+++ b/include/linux/dmar.h
@@ -117,10 +117,13 @@ extern int dmar_parse_dev_scope(void *start, void *end, int *cnt,
struct dmar_dev_scope **devices, u16 segment);
extern void *dmar_alloc_dev_scope(void *start, void *end, int *cnt);
extern void dmar_free_dev_scope(struct dmar_dev_scope **devices, int *cnt);
-extern int dmar_insert_dev_scope(struct dmar_pci_notify_info *info,
+extern int dmar_pci_insert_dev_scope(struct dmar_pci_notify_info *info,
void *start, void*end, u16 segment,
struct dmar_dev_scope *devices,
int devices_cnt);
+extern bool dmar_acpi_insert_dev_scope(u8 device_number,
+ struct acpi_device *adev, void *start, void *end,
+ struct dmar_dev_scope *devices, int devices_cnt);
extern int dmar_remove_dev_scope(struct dmar_pci_notify_info *info,
u16 segment, struct dmar_dev_scope *devices,
int count);
@@ -143,6 +146,7 @@ extern int dmar_parse_one_atsr(struct acpi_dmar_header *header, void *arg);
extern int dmar_check_one_atsr(struct acpi_dmar_header *hdr, void *arg);
extern int dmar_release_one_atsr(struct acpi_dmar_header *hdr, void *arg);
extern int dmar_iommu_hotplug(struct dmar_drhd_unit *dmaru, bool insert);
+extern int dmar_rmrr_add_acpi_dev(u8 device_number, struct acpi_device *adev);
extern int dmar_iommu_notify_scope_dev(struct dmar_pci_notify_info *info);
#else /* !CONFIG_INTEL_IOMMU: */
static inline int intel_iommu_init(void) { return -ENODEV; }
@@ -152,6 +156,11 @@ static inline int intel_iommu_init(void) { return -ENODEV; }
#define dmar_check_one_atsr dmar_res_noop
#define dmar_release_one_atsr dmar_res_noop
+static inline int dmar_rmrr_add_acpi_dev(u8 device_number, struct acpi_device *adev)
+{
+ return 0;
+}
+
static inline int dmar_iommu_notify_scope_dev(struct dmar_pci_notify_info *info)
{
return 0;
--
2.20.1
3
3

27 Mar '21
Add the new PCI IDs 0x1d17 0x9141/0x9142/0x9144 for Zhaoxin NB HDAC
support, and add some special initialization for Zhaoxin NB HDAC.
The patch is scheduled to be submitted to the kernel mainline in 2021.
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
sound/pci/hda/hda_controller.c | 17 +++++++++++-
sound/pci/hda/hda_controller.h | 2 ++
sound/pci/hda/hda_intel.c | 51 +++++++++++++++++++++++++++++++++-
3 files changed, 68 insertions(+), 2 deletions(-)
diff --git a/sound/pci/hda/hda_controller.c b/sound/pci/hda/hda_controller.c
index 0c5d41e5d146..0341637aa5d9 100644
--- a/sound/pci/hda/hda_controller.c
+++ b/sound/pci/hda/hda_controller.c
@@ -1116,6 +1116,16 @@ void azx_stop_chip(struct azx *chip)
}
EXPORT_SYMBOL_GPL(azx_stop_chip);
+static void azx_rirb_zxdelay(struct azx *chip, int enable)
+{
+ if (chip->remap_diu_addr) {
+ if (!enable)
+ writel(0x0, (char *)chip->remap_diu_addr + 0x490a8);
+ else
+ writel(0x1000000, (char *)chip->remap_diu_addr + 0x490a8);
+ }
+}
+
/*
* interrupt handler
*/
@@ -1175,9 +1185,14 @@ irqreturn_t azx_interrupt(int irq, void *dev_id)
azx_writeb(chip, RIRBSTS, RIRB_INT_MASK);
active = true;
if (status & RIRB_INT_RESPONSE) {
- if (chip->driver_caps & AZX_DCAPS_CTX_WORKAROUND)
+ if ((chip->driver_caps & AZX_DCAPS_CTX_WORKAROUND) ||
+ (chip->driver_caps & AZX_DCAPS_RIRB_PRE_DELAY)) {
+ azx_rirb_zxdelay(chip, 1);
udelay(80);
+ }
snd_hdac_bus_update_rirb(bus);
+ if (chip->driver_caps & AZX_DCAPS_RIRB_PRE_DELAY)
+ azx_rirb_zxdelay(chip, 0);
}
}
} while (active && ++repeat < 10);
diff --git a/sound/pci/hda/hda_controller.h b/sound/pci/hda/hda_controller.h
index 63cc10604afc..16bffded0aa3 100644
--- a/sound/pci/hda/hda_controller.h
+++ b/sound/pci/hda/hda_controller.h
@@ -58,6 +58,7 @@
#define AZX_DCAPS_CORBRP_SELF_CLEAR (1 << 28) /* CORBRP clears itself after reset */
#define AZX_DCAPS_NO_MSI64 (1 << 29) /* Stick to 32-bit MSIs */
#define AZX_DCAPS_SEPARATE_STREAM_TAG (1 << 30) /* capture and playback use separate stream tag */
+#define AZX_DCAPS_RIRB_PRE_DELAY (1 << 31)
enum {
AZX_SNOOP_TYPE_NONE,
@@ -167,6 +168,7 @@ struct azx {
/* GTS present */
unsigned int gts_present:1;
+ void __iomem *remap_diu_addr;
#ifdef CONFIG_SND_HDA_DSP_LOADER
struct azx_dev saved_azx_dev;
diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
index 67791114471c..a72852b37118 100644
--- a/sound/pci/hda/hda_intel.c
+++ b/sound/pci/hda/hda_intel.c
@@ -251,7 +251,8 @@ MODULE_SUPPORTED_DEVICE("{{Intel, ICH6},"
"{VIA, VT8237A},"
"{SiS, SIS966},"
"{ULI, M5461},"
- "{ZX, ZhaoxinHDA}}");
+ "{ZX, ZhaoxinHDA},"
+ "{ZX, ZhaoxinHDMI}}");
MODULE_DESCRIPTION("Intel HDA driver");
#if defined(CONFIG_PM) && defined(CONFIG_VGA_SWITCHEROO)
@@ -283,6 +284,7 @@ enum {
AZX_DRIVER_CTHDA,
AZX_DRIVER_CMEDIA,
AZX_DRIVER_ZHAOXIN,
+ AZX_DRIVER_ZXHDMI,
AZX_DRIVER_GENERIC,
AZX_NUM_DRIVERS, /* keep this as last entry */
};
@@ -404,6 +406,7 @@ static char *driver_short_names[] = {
[AZX_DRIVER_CTHDA] = "HDA Creative",
[AZX_DRIVER_CMEDIA] = "HDA C-Media",
[AZX_DRIVER_ZHAOXIN] = "HDA Zhaoxin",
+ [AZX_DRIVER_ZXHDMI] = "HDA Zhaoxin GFX",
[AZX_DRIVER_GENERIC] = "HD-Audio Generic",
};
@@ -480,6 +483,29 @@ static void update_pci_byte(struct pci_dev *pci, unsigned int reg,
pci_write_config_byte(pci, reg, data);
}
+static int azx_init_pci_zx(struct azx *chip)
+{
+ struct snd_card *card = chip->card;
+ unsigned int diu_reg;
+ struct pci_dev *diu_pci = NULL;
+
+ diu_pci = pci_get_device(0x1d17, 0x3a03, NULL);
+ if (!diu_pci) {
+ dev_err(card->dev, "hda no chx001 device. \n");
+ return -ENXIO;
+ }
+ pci_read_config_dword(diu_pci, PCI_BASE_ADDRESS_0, &diu_reg);
+ chip->remap_diu_addr = ioremap_nocache(diu_reg, 0x50000);
+ dev_info(card->dev, "hda %x %p \n", diu_reg, chip->remap_diu_addr);
+ return 0;
+}
+
+static void azx_free_pci_zx(struct azx *chip)
+{
+ if (chip->remap_diu_addr)
+ iounmap(chip->remap_diu_addr);
+}
+
static void azx_init_pci(struct azx *chip)
{
int snoop_type = azx_get_snoop_type(chip);
@@ -1450,6 +1476,10 @@ static int azx_free(struct azx *chip)
hda->init_failed = 1; /* to be sure */
complete_all(&hda->probe_wait);
+ if (chip->driver_type == AZX_DRIVER_ZXHDMI) {
+ azx_free_pci_zx(chip);
+ }
+
if (use_vga_switcheroo(hda)) {
if (chip->disabled && hda->probe_continued)
snd_hda_unlock_devices(&chip->bus);
@@ -1803,6 +1833,8 @@ static int default_bdl_pos_adj(struct azx *chip)
case AZX_DRIVER_ICH:
case AZX_DRIVER_PCH:
return 1;
+ case AZX_DRIVER_ZXHDMI:
+ return 128;
default:
return 32;
}
@@ -1921,6 +1953,12 @@ static int azx_first_init(struct azx *chip)
}
#endif
+ chip->remap_diu_addr = NULL;
+
+ if (chip->driver_type == AZX_DRIVER_ZXHDMI) {
+ azx_init_pci_zx(chip);
+ }
+
err = pci_request_regions(pci, "ICH HD audio");
if (err < 0)
return err;
@@ -2030,6 +2068,7 @@ static int azx_first_init(struct azx *chip)
chip->playback_streams = ATIHDMI_NUM_PLAYBACK;
chip->capture_streams = ATIHDMI_NUM_CAPTURE;
break;
+ case AZX_DRIVER_ZXHDMI:
case AZX_DRIVER_GENERIC:
default:
chip->playback_streams = ICH6_NUM_PLAYBACK;
@@ -2773,6 +2812,11 @@ static const struct pci_device_id azx_ids[] = {
{ PCI_DEVICE(0x1106, 0x9170), .driver_data = AZX_DRIVER_GENERIC },
/* VIA GFX VT6122/VX11 */
{ PCI_DEVICE(0x1106, 0x9140), .driver_data = AZX_DRIVER_GENERIC },
+ { PCI_DEVICE(0x1106, 0x9141), .driver_data = AZX_DRIVER_GENERIC },
+ { PCI_DEVICE(0x1106, 0x9142),
+ .driver_data = AZX_DRIVER_ZXHDMI | AZX_DCAPS_POSFIX_LPIB | AZX_DCAPS_NO_MSI | AZX_DCAPS_RIRB_PRE_DELAY },
+ { PCI_DEVICE(0x1106, 0x9144),
+ .driver_data = AZX_DRIVER_ZXHDMI | AZX_DCAPS_POSFIX_LPIB | AZX_DCAPS_NO_MSI | AZX_DCAPS_RIRB_PRE_DELAY },
/* SIS966 */
{ PCI_DEVICE(0x1039, 0x7502), .driver_data = AZX_DRIVER_SIS },
/* ULI M5461 */
@@ -2828,6 +2872,11 @@ static const struct pci_device_id azx_ids[] = {
.driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_HDMI },
/* Zhaoxin */
{ PCI_DEVICE(0x1d17, 0x3288), .driver_data = AZX_DRIVER_ZHAOXIN },
+ { PCI_DEVICE(0x1d17, 0x9141), .driver_data = AZX_DRIVER_GENERIC },
+ { PCI_DEVICE(0x1d17, 0x9142),
+ .driver_data = AZX_DRIVER_ZXHDMI | AZX_DCAPS_POSFIX_LPIB |
AZX_DCAPS_NO_MSI | AZX_DCAPS_RIRB_PRE_DELAY },
+ { PCI_DEVICE(0x1d17, 0x9144),
+ .driver_data = AZX_DRIVER_ZXHDMI | AZX_DCAPS_POSFIX_LPIB |
AZX_DCAPS_NO_MSI | AZX_DCAPS_RIRB_PRE_DELAY },
{ 0, }
};
MODULE_DEVICE_TABLE(pci, azx_ids);
--
2.20.1
3
3
Newer Zhaoxin CPUs support MCE, CMCI and LMCE compatible with Intel's
"Machine-Check Architecture".
To enable Linux kernel support for Zhaoxin's MCA, add specific patches
for Zhaoxin's MCE, CMCI and LMCE. The patches for Zhaoxin's CMCI and
LMCE use 3 functions in mce/intel.c, so make these functions
non-static.
Some Zhaoxin CPUs have MCA bank 8, which has only one error source,
called SVAD (System View Address Decoder), controlled by
IA32_MC8.CTL.0. If enabled, prefetch on these CPUs causes an SVAD
machine check exception when a virtual machine starts up and panics
the system. Add a quirk for MCA bank 8 on these Zhaoxin CPUs.
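For reference, masking that single error source amounts to clearing bit 0 of the bank 8 control MSR (a sketch of the idea only; the quirk in this series goes through the normal mce_banks[] bookkeeping):

#include <asm/msr.h>

static void zhaoxin_mask_svad(void)
{
        u64 ctl;

        rdmsrl(MSR_IA32_MCx_CTL(8), ctl); /* IA32_MC8_CTL */
        wrmsrl(MSR_IA32_MCx_CTL(8), ctl & ~1ULL); /* clear bit 0: SVAD off */
}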
LeoLiu-oc (3):
x86/mce: Add Zhaoxin MCE support
x86/mce: Add Zhaoxin CMCI support
x86/mce: Add Zhaoxin LMCE support
arch/x86/kernel/cpu/mce/core.c | 97 +++++++++++++++++++++++-------
arch/x86/kernel/cpu/mce/intel.c | 11 ++--
arch/x86/kernel/cpu/mce/internal.h | 6 ++
3 files changed, 87 insertions(+), 27 deletions(-)
--
2.20.1
2
1
mainline inclusion
from mainline-5.5
commit 6e898d2bf67a82df0aa0c955adc9278faba9a635
category: x86/mce
Add support for more Zhaoxin CPUs.
--------------------------------
All newer Zhaoxin CPUs are compatible with Intel's Machine-Check
Architecture, so add support for them.
[ bp: Reflow comment in vendor_disable_error_reporting() and massage
commit message. ]
Signed-off-by: Tony W Wang-oc <TonyWWang-oc(a)zhaoxin.com>
Signed-off-by: Borislav Petkov <bp(a)suse.de>
Cc: CooperYan(a)zhaoxin.com
Cc: DavidWang(a)zhaoxin.com
Cc: HerryYang(a)zhaoxin.com
Cc: "H. Peter Anvin" <hpa(a)zytor.com>
Cc: Ingo Molnar <mingo(a)redhat.com>
Cc: linux-edac <linux-edac(a)vger.kernel.org>
Cc: QiyuanWang(a)zhaoxin.com
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Tony Luck <tony.luck(a)intel.com>
Cc: x86-ml <x86(a)kernel.org>
Link:
https://lkml.kernel.org/r/1568787573-1297-2-git-send-email-TonyWWang-oc@zha…
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
arch/x86/kernel/cpu/mce/core.c | 42 ++++++++++++++++++++++++++--------
1 file changed, 32 insertions(+), 10 deletions(-)
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 5221c49d335e..dce0fbd4cb0f 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -473,8 +473,10 @@ int mce_usable_address(struct mce *m)
if (!(m->status & MCI_STATUS_ADDRV))
return 0;
- /* Checks after this one are Intel-specific: */
- if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
+ /* Checks after this one are Intel/Zhaoxin-specific: */
+ if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL &&
+ boot_cpu_data.x86_vendor != X86_VENDOR_ZHAOXIN &&
+ boot_cpu_data.x86_vendor != X86_VENDOR_CENTAUR)
return 1;
if (!(m->status & MCI_STATUS_MISCV))
@@ -492,10 +494,14 @@ EXPORT_SYMBOL_GPL(mce_usable_address);
bool mce_is_memory_error(struct mce *m)
{
- if (m->cpuvendor == X86_VENDOR_AMD ||
- m->cpuvendor == X86_VENDOR_HYGON) {
+ switch (m->cpuvendor) {
+ case X86_VENDOR_AMD:
+ case X86_VENDOR_HYGON:
return amd_mce_is_memory_error(m);
- } else if (m->cpuvendor == X86_VENDOR_INTEL) {
+
+ case X86_VENDOR_INTEL:
+ case X86_VENDOR_ZHAOXIN:
+ case X86_VENDOR_CENTAUR:
/*
* Intel SDM Volume 3B - 15.9.2 Compound Error Codes
*
@@ -512,9 +518,10 @@ bool mce_is_memory_error(struct mce *m)
return (m->status & 0xef80) == BIT(7) ||
(m->status & 0xef00) == BIT(8) ||
(m->status & 0xeffc) == 0xc;
- }
- return false;
+ default:
+ return false;
+ }
}
EXPORT_SYMBOL_GPL(mce_is_memory_error);
@@ -1658,6 +1665,19 @@ static int __mcheck_cpu_apply_quirks(struct cpuinfo_x86 *c)
if (c->x86 == 6 && c->x86_model == 45)
quirk_no_way_out = quirk_sandybridge_ifu;
}
+
+ if (c->x86_vendor == X86_VENDOR_ZHAOXIN ||
+ c->x86_vendor == X86_VENDOR_CENTAUR) {
+ /*
+ * All newer Zhaoxin CPUs support MCE broadcasting. Enable
+ * synchronization with a one second timeout.
+ */
+ if (c->x86 > 6 || (c->x86_model == 0x19 || c->x86_model == 0x1f)) {
+ if (cfg->monarch_timeout < 0)
+ cfg->monarch_timeout = USEC_PER_SEC;
+ }
+ }
+
if (cfg->monarch_timeout < 0)
cfg->monarch_timeout = 0;
if (cfg->bootlog != 0)
@@ -1963,15 +1983,17 @@ static void mce_disable_error_reporting(void)
static void vendor_disable_error_reporting(void)
{
/*
- * Don't clear on Intel, AMD or Hygon CPUs. Some of these MSRs are
- * socket-wide.
+ * Don't clear on Intel, AMD, Hygon or Zhaoxin CPUs. Some of these
+ * MSRs are socket-wide.
* Disabling them for just a single offlined CPU is bad, since it will
* inhibit reporting for all shared resources on the socket like the
* last level cache (LLC), the integrated memory controller (iMC), etc.
*/
if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL ||
boot_cpu_data.x86_vendor == X86_VENDOR_HYGON ||
- boot_cpu_data.x86_vendor == X86_VENDOR_AMD)
+ boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
+ boot_cpu_data.x86_vendor == X86_VENDOR_ZHAOXIN ||
+ boot_cpu_data.x86_vendor == X86_VENDOR_CENTAUR)
return;
mce_disable_error_reporting();
--
2.20.1
3
3

[PATCH kernel-4.19 01/11] arm64/mpam: fix a possible deadlock in mpam_enable
by Yang Yingliang 27 Mar '21
by Yang Yingliang 27 Mar '21
27 Mar '21
From: Zhang Ming <154842638(a)qq.com>
openEuler inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I3D58V
CVE: NA
----------------------------------
No unlock operation is performed on the mpam_devices_lock
before the return statement, which may lead to a deadlock.
Signed-off-by: Zhang Ming <154842638(a)qq.com>
Reported-by: Cheng Jian <cj.chengjian(a)huawei.com>
Suggested-by: Cheng Jian <cj.chengjian(a)huawei.com>
Reviewed-by: Wang ShaoBo <bobo.shaobowang(a)huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Signed-off-by: Cheng Jian <cj.chengjian(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/arm64/kernel/mpam/mpam_device.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/mpam/mpam_device.c b/arch/arm64/kernel/mpam/mpam_device.c
index fc7aa1ae0b825..f8840274b902f 100644
--- a/arch/arm64/kernel/mpam/mpam_device.c
+++ b/arch/arm64/kernel/mpam/mpam_device.c
@@ -560,8 +560,10 @@ static void __init mpam_enable(struct work_struct *work)
mutex_lock(&mpam_devices_lock);
mpam_enable_squash_features();
err = mpam_allocate_config();
- if (err)
+ if (err) {
+ mutex_unlock(&mpam_devices_lock);
return;
+ }
mutex_unlock(&mpam_devices_lock);
mpam_enable_irqs();
--
2.25.1
1
10

27 Mar '21
From: zhenpengzheng <zhenpengzheng(a)net-swift.com>
driver inclusion
category: feature
bugzilla: 50777
CVE: NA
-------------------------------------------------------------------------
This driver is based on drivers/net/ethernet/intel/ixgbe/.
Signed-off-by: zhenpengzheng <zhenpengzheng(a)net-swift.com>
Acked-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Signed-off-by: Zhen Lei <thunder.leizhen(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/net/ethernet/Kconfig | 1 +
drivers/net/ethernet/Makefile | 1 +
drivers/net/ethernet/netswift/Kconfig | 20 +
drivers/net/ethernet/netswift/Makefile | 6 +
drivers/net/ethernet/netswift/txgbe/Kconfig | 13 +
drivers/net/ethernet/netswift/txgbe/Makefile | 11 +
drivers/net/ethernet/netswift/txgbe/txgbe.h | 1260 +++
.../net/ethernet/netswift/txgbe/txgbe_bp.c | 875 ++
.../net/ethernet/netswift/txgbe/txgbe_bp.h | 41 +
.../net/ethernet/netswift/txgbe/txgbe_dcb.h | 30 +
.../ethernet/netswift/txgbe/txgbe_ethtool.c | 3381 +++++++
.../net/ethernet/netswift/txgbe/txgbe_hw.c | 7072 +++++++++++++++
.../net/ethernet/netswift/txgbe/txgbe_hw.h | 264 +
.../net/ethernet/netswift/txgbe/txgbe_lib.c | 959 ++
.../net/ethernet/netswift/txgbe/txgbe_main.c | 8045 +++++++++++++++++
.../net/ethernet/netswift/txgbe/txgbe_mbx.c | 399 +
.../net/ethernet/netswift/txgbe/txgbe_mbx.h | 171 +
.../net/ethernet/netswift/txgbe/txgbe_mtd.c | 1366 +++
.../net/ethernet/netswift/txgbe/txgbe_mtd.h | 1540 ++++
.../net/ethernet/netswift/txgbe/txgbe_param.c | 1191 +++
.../net/ethernet/netswift/txgbe/txgbe_phy.c | 1014 +++
.../net/ethernet/netswift/txgbe/txgbe_phy.h | 190 +
.../net/ethernet/netswift/txgbe/txgbe_ptp.c | 884 ++
.../net/ethernet/netswift/txgbe/txgbe_type.h | 3213 +++++++
24 files changed, 31947 insertions(+)
create mode 100644 drivers/net/ethernet/netswift/Kconfig
create mode 100644 drivers/net/ethernet/netswift/Makefile
create mode 100644 drivers/net/ethernet/netswift/txgbe/Kconfig
create mode 100644 drivers/net/ethernet/netswift/txgbe/Makefile
create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe.h
create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_bp.c
create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_bp.h
create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_dcb.h
create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_ethtool.c
create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_hw.c
create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_hw.h
create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_lib.c
create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_main.c
create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_mbx.c
create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_mbx.h
create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_mtd.c
create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_mtd.h
create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_param.c
create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_phy.c
create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_phy.h
create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_ptp.c
create mode 100644 drivers/net/ethernet/netswift/txgbe/txgbe_type.h
diff --git a/drivers/net/ethernet/Kconfig b/drivers/net/ethernet/Kconfig
index 6fde68aa13a40..208c2cee14d6c 100644
--- a/drivers/net/ethernet/Kconfig
+++ b/drivers/net/ethernet/Kconfig
@@ -82,6 +82,7 @@ source "drivers/net/ethernet/i825xx/Kconfig"
source "drivers/net/ethernet/ibm/Kconfig"
source "drivers/net/ethernet/intel/Kconfig"
source "drivers/net/ethernet/xscale/Kconfig"
+source "drivers/net/ethernet/netswift/Kconfig"
config JME
tristate "JMicron(R) PCI-Express Gigabit Ethernet support"
diff --git a/drivers/net/ethernet/Makefile b/drivers/net/ethernet/Makefile
index b45d5f626b592..bd2235ac6a97a 100644
--- a/drivers/net/ethernet/Makefile
+++ b/drivers/net/ethernet/Makefile
@@ -95,3 +95,4 @@ obj-$(CONFIG_NET_VENDOR_WIZNET) += wiznet/
obj-$(CONFIG_NET_VENDOR_XILINX) += xilinx/
obj-$(CONFIG_NET_VENDOR_XIRCOM) += xircom/
obj-$(CONFIG_NET_VENDOR_SYNOPSYS) += synopsys/
+obj-$(CONFIG_NET_VENDOR_NETSWIFT) += netswift/
diff --git a/drivers/net/ethernet/netswift/Kconfig b/drivers/net/ethernet/netswift/Kconfig
new file mode 100644
index 0000000000000..c4b510b659ae9
--- /dev/null
+++ b/drivers/net/ethernet/netswift/Kconfig
@@ -0,0 +1,20 @@
+#
+# Netswift network device configuration
+#
+
+config NET_VENDOR_NETSWIFT
+	bool "Netswift devices"
+ default y
+ ---help---
+ If you have a network (Ethernet) card belonging to this class, say Y.
+
+ Note that the answer to this question doesn't directly affect the
+ kernel: saying N will just cause the configurator to skip all
+ the questions about Netswift NICs. If you say Y, you will be asked for
+ your specific card in the following questions.
+
+if NET_VENDOR_NETSWIFT
+
+source "drivers/net/ethernet/netswift/txgbe/Kconfig"
+
+endif # NET_VENDOR_NETSWIFT
diff --git a/drivers/net/ethernet/netswift/Makefile b/drivers/net/ethernet/netswift/Makefile
new file mode 100644
index 0000000000000..0845d08600bee
--- /dev/null
+++ b/drivers/net/ethernet/netswift/Makefile
@@ -0,0 +1,6 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Makefile for the Netswift network device drivers.
+#
+
+obj-$(CONFIG_TXGBE) += txgbe/
diff --git a/drivers/net/ethernet/netswift/txgbe/Kconfig b/drivers/net/ethernet/netswift/txgbe/Kconfig
new file mode 100644
index 0000000000000..5aba1985d83f8
--- /dev/null
+++ b/drivers/net/ethernet/netswift/txgbe/Kconfig
@@ -0,0 +1,13 @@
+#
+# Netswift driver configuration
+#
+
+config TXGBE
+ tristate "Netswift 10G Network Interface Card"
+ default n
+ depends on PCI_MSI && NUMA && PCI_IOV && DCB
+ ---help---
+ This driver supports Netswift 10G Ethernet cards.
+	  To compile this driver as a module, choose M here; to build it
+	  into the kernel, choose Y. If unsure, choose N (the default).
diff --git a/drivers/net/ethernet/netswift/txgbe/Makefile b/drivers/net/ethernet/netswift/txgbe/Makefile
new file mode 100644
index 0000000000000..f8531f3356a85
--- /dev/null
+++ b/drivers/net/ethernet/netswift/txgbe/Makefile
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd.
+#
+# Makefile for the Netswift 10GbE PCI Express ethernet driver
+#
+
+obj-$(CONFIG_TXGBE) += txgbe.o
+
+txgbe-objs := txgbe_main.o txgbe_ethtool.o \
+ txgbe_hw.o txgbe_phy.o txgbe_bp.o \
+ txgbe_mbx.o txgbe_mtd.o txgbe_param.o txgbe_lib.o txgbe_ptp.o
diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe.h b/drivers/net/ethernet/netswift/txgbe/txgbe.h
new file mode 100644
index 0000000000000..40bb86dbf3aef
--- /dev/null
+++ b/drivers/net/ethernet/netswift/txgbe/txgbe.h
@@ -0,0 +1,1260 @@
+/*
+ * WangXun 10 Gigabit PCI Express Linux driver
+ * Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ */
+
+
+#ifndef _TXGBE_H_
+#define _TXGBE_H_
+
+#include <net/ip.h>
+#include <linux/pci.h>
+#include <linux/vmalloc.h>
+#include <linux/ethtool.h>
+#include <linux/if_vlan.h>
+#include <net/busy_poll.h>
+#include <linux/sctp.h>
+
+#include <linux/timecounter.h>
+#include <linux/clocksource.h>
+#include <linux/net_tstamp.h>
+#include <linux/ptp_clock_kernel.h>
+#include <linux/aer.h>
+
+#include "txgbe_type.h"
+
+#ifndef KR_POLLING
+#define KR_POLLING 0
+#endif
+
+#ifndef KR_MODE
+#define KR_MODE 0
+#endif
+
+#ifndef AUTO
+#define AUTO 1
+#endif
+
+#ifndef DEFAULT_FCPAUSE
+#define DEFAULT_FCPAUSE 0xFFFF /* kylinft/kylinlx: 0x3FFF, default to 0xFFFF */
+#endif
+
+#ifndef MAX_REQUEST_SIZE
+#define MAX_REQUEST_SIZE 256 /* kylinft : 512 default to 256*/
+#endif
+
+#ifndef DEFAULT_TXD
+#define DEFAULT_TXD 512 /*deepinsw : 1024 default to 512*/
+#endif
+
+#ifndef DEFAULT_TX_WORK
+#define DEFAULT_TX_WORK 256 /*deepinsw : 512 default to 256*/
+#endif
+
+#ifndef CL72_KRTR_PRBS_MODE_EN
+#define CL72_KRTR_PRBS_MODE_EN 0x2fff
+#endif
+
+#ifndef SFI_SET
+#define SFI_SET 0
+#define SFI_MAIN 24
+#define SFI_PRE 4
+#define SFI_POST 16
+#endif
+
+#ifndef KR_SET
+#define KR_SET 0
+#define KR_MAIN 27
+#define KR_PRE 8
+#define KR_POST 44
+#endif
+
+#ifndef KX4_SET
+#define KX4_SET 0
+#define KX4_MAIN 40
+#define KX4_PRE 0
+#define KX4_POST 0
+#endif
+
+#ifndef KX_SET
+#define KX_SET 0
+#define KX_MAIN 24
+#define KX_PRE 4
+#define KX_POST 16
+#endif
+
+
+#ifndef KX4_TXRX_PIN
+#define KX4_TXRX_PIN 0 /*rx : 0xf tx : 0xf0 */
+#endif
+#ifndef KR_TXRX_PIN
+#define KR_TXRX_PIN 0 /*rx : 0xf tx : 0xf0 */
+#endif
+#ifndef SFI_TXRX_PIN
+#define SFI_TXRX_PIN 0 /*rx : 0xf tx : 0xf0 */
+#endif
+
+#ifndef KX_SGMII
+#define KX_SGMII 0 /* 1 0x18090 :0xcf00 */
+#endif
+
+#ifndef KR_NORESET
+#define KR_NORESET 0
+#endif
+
+#ifndef KR_CL72_TRAINING
+#define KR_CL72_TRAINING 1
+#endif
+
+#ifndef KR_REINITED
+#define KR_REINITED 1
+#endif
+
+#ifndef KR_AN73_PRESET
+#define KR_AN73_PRESET 1
+#endif
+
+#ifndef BOND_CHECK_LINK_MODE
+#define BOND_CHECK_LINK_MODE 0
+#endif
+
+/* Ether Types */
+#define TXGBE_ETH_P_LLDP 0x88CC
+#define TXGBE_ETH_P_CNM 0x22E7
+
+/* TX/RX descriptor defines */
+#if defined(DEFAULT_TXD) || defined(DEFAULT_TX_WORK)
+#define TXGBE_DEFAULT_TXD DEFAULT_TXD
+#define TXGBE_DEFAULT_TX_WORK DEFAULT_TX_WORK
+#else
+#define TXGBE_DEFAULT_TXD 512
+#define TXGBE_DEFAULT_TX_WORK 256
+#endif
+#define TXGBE_MAX_TXD 8192
+#define TXGBE_MIN_TXD 128
+
+#if (PAGE_SIZE < 8192)
+#define TXGBE_DEFAULT_RXD 512
+#define TXGBE_DEFAULT_RX_WORK 256
+#else
+#define TXGBE_DEFAULT_RXD 256
+#define TXGBE_DEFAULT_RX_WORK 128
+#endif
+
+#define TXGBE_MAX_RXD 8192
+#define TXGBE_MIN_RXD 128
+
+/* flow control */
+#define TXGBE_MIN_FCRTL 0x40
+#define TXGBE_MAX_FCRTL 0x7FF80
+#define TXGBE_MIN_FCRTH 0x600
+#define TXGBE_MAX_FCRTH 0x7FFF0
+#if defined(DEFAULT_FCPAUSE)
+#define TXGBE_DEFAULT_FCPAUSE DEFAULT_FCPAUSE /*0x3800*/
+#else
+#define TXGBE_DEFAULT_FCPAUSE 0xFFFF
+#endif
+#define TXGBE_MIN_FCPAUSE 0
+#define TXGBE_MAX_FCPAUSE 0xFFFF
+
+/* Supported Rx Buffer Sizes */
+#define TXGBE_RXBUFFER_256 256 /* Used for skb receive header */
+#define TXGBE_RXBUFFER_2K 2048
+#define TXGBE_RXBUFFER_3K 3072
+#define TXGBE_RXBUFFER_4K 4096
+#define TXGBE_MAX_RXBUFFER 16384 /* largest size for single descriptor */
+
+#define TXGBE_BP_M_NULL 0
+#define TXGBE_BP_M_SFI 1
+#define TXGBE_BP_M_KR 2
+#define TXGBE_BP_M_KX4 3
+#define TXGBE_BP_M_KX 4
+#define TXGBE_BP_M_NAUTO 0
+#define TXGBE_BP_M_AUTO 1
+
+/*
+ * NOTE: netdev_alloc_skb reserves up to 64 bytes, NET_IP_ALIGN means we
+ * reserve 64 more, and skb_shared_info adds an additional 320 bytes more,
+ * this adds up to 448 bytes of extra data.
+ *
+ * Since netdev_alloc_skb now allocates a page fragment we can use a value
+ * of 256 and the resultant skb will have a truesize of 960 or less.
+ */
+#define TXGBE_RX_HDR_SIZE TXGBE_RXBUFFER_256
+
+#define MAXIMUM_ETHERNET_VLAN_SIZE (VLAN_ETH_FRAME_LEN + ETH_FCS_LEN)
+
+/* How many Rx Buffers do we bundle into one write to the hardware ? */
+#define TXGBE_RX_BUFFER_WRITE 16 /* Must be power of 2 */
+#define TXGBE_RX_DMA_ATTR \
+ (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING)
+
+/* assume the kernel supports 8021p to avoid stripping vlan tags */
+#ifndef HAVE_8021P_SUPPORT
+#define HAVE_8021P_SUPPORT
+#endif
+
+enum txgbe_tx_flags {
+ /* cmd_type flags */
+ TXGBE_TX_FLAGS_HW_VLAN = 0x01,
+ TXGBE_TX_FLAGS_TSO = 0x02,
+ TXGBE_TX_FLAGS_TSTAMP = 0x04,
+
+ /* olinfo flags */
+ TXGBE_TX_FLAGS_CC = 0x08,
+ TXGBE_TX_FLAGS_IPV4 = 0x10,
+ TXGBE_TX_FLAGS_CSUM = 0x20,
+ TXGBE_TX_FLAGS_OUTER_IPV4 = 0x100,
+ TXGBE_TX_FLAGS_LINKSEC = 0x200,
+ TXGBE_TX_FLAGS_IPSEC = 0x400,
+
+ /* software defined flags */
+ TXGBE_TX_FLAGS_SW_VLAN = 0x40,
+ TXGBE_TX_FLAGS_FCOE = 0x80,
+};
+
+/* VLAN info */
+#define TXGBE_TX_FLAGS_VLAN_MASK 0xffff0000
+#define TXGBE_TX_FLAGS_VLAN_PRIO_MASK 0xe0000000
+#define TXGBE_TX_FLAGS_VLAN_PRIO_SHIFT 29
+#define TXGBE_TX_FLAGS_VLAN_SHIFT 16
+
+#define TXGBE_MAX_RX_DESC_POLL 10
+
+#define TXGBE_MAX_VF_MC_ENTRIES 30
+#define TXGBE_MAX_VF_FUNCTIONS 64
+#define MAX_EMULATION_MAC_ADDRS 16
+#define TXGBE_MAX_PF_MACVLANS 15
+#define TXGBE_VF_DEVICE_ID 0x1000
+
+/* must account for pools assigned to VFs. */
+#define VMDQ_P(p) (p)
+
+
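+/*
+ * Accumulate a wrapping hardware counter into a 64-bit software counter:
+ * if the freshly read value is below the last snapshot the register has
+ * wrapped, so bump the software counter past the register width before
+ * merging in the new low bits.
+ */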
+#define UPDATE_VF_COUNTER_32bit(reg, last_counter, counter) \
+ { \
+ u32 current_counter = rd32(hw, reg); \
+ if (current_counter < last_counter) \
+ counter += 0x100000000LL; \
+ last_counter = current_counter; \
+ counter &= 0xFFFFFFFF00000000LL; \
+ counter |= current_counter; \
+ }
+
+#define UPDATE_VF_COUNTER_36bit(reg_lsb, reg_msb, last_counter, counter) \
+ { \
+ u64 current_counter_lsb = rd32(hw, reg_lsb); \
+ u64 current_counter_msb = rd32(hw, reg_msb); \
+ u64 current_counter = (current_counter_msb << 32) | \
+ current_counter_lsb; \
+ if (current_counter < last_counter) \
+ counter += 0x1000000000LL; \
+ last_counter = current_counter; \
+ counter &= 0xFFFFFFF000000000LL; \
+ counter |= current_counter; \
+ }
+
+struct vf_stats {
+ u64 gprc;
+ u64 gorc;
+ u64 gptc;
+ u64 gotc;
+ u64 mprc;
+};
+
+struct vf_data_storage {
+ struct pci_dev *vfdev;
+ u8 __iomem *b4_addr;
+ u32 b4_buf[16];
+ unsigned char vf_mac_addresses[ETH_ALEN];
+ u16 vf_mc_hashes[TXGBE_MAX_VF_MC_ENTRIES];
+ u16 num_vf_mc_hashes;
+ u16 default_vf_vlan_id;
+ u16 vlans_enabled;
+ bool clear_to_send;
+ struct vf_stats vfstats;
+ struct vf_stats last_vfstats;
+ struct vf_stats saved_rst_vfstats;
+ bool pf_set_mac;
+ u16 pf_vlan; /* When set, guest VLAN config not allowed. */
+ u16 pf_qos;
+ u16 min_tx_rate;
+ u16 max_tx_rate;
+ u16 vlan_count;
+ u8 spoofchk_enabled;
+ u8 trusted;
+ int xcast_mode;
+ unsigned int vf_api;
+};
+
+struct vf_macvlans {
+ struct list_head l;
+ int vf;
+ bool free;
+ bool is_macvlan;
+ u8 vf_macvlan[ETH_ALEN];
+};
+
+#define TXGBE_MAX_TXD_PWR 14
+#define TXGBE_MAX_DATA_PER_TXD (1 << TXGBE_MAX_TXD_PWR)
+
+/* Tx Descriptors needed, worst case */
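+/* (one descriptor per 16KB chunk of each fragment, plus a few spare
+ * descriptors for the skb head data and a context descriptor) */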
+#define TXD_USE_COUNT(S) DIV_ROUND_UP((S), TXGBE_MAX_DATA_PER_TXD)
+#ifndef MAX_SKB_FRAGS
+#define DESC_NEEDED 4
+#elif (MAX_SKB_FRAGS < 16)
+#define DESC_NEEDED ((MAX_SKB_FRAGS * TXD_USE_COUNT(PAGE_SIZE)) + 4)
+#else
+#define DESC_NEEDED (MAX_SKB_FRAGS + 4)
+#endif
+
+/* wrapper around a pointer to a socket buffer,
+ * so a DMA handle can be stored along with the buffer */
+struct txgbe_tx_buffer {
+ union txgbe_tx_desc *next_to_watch;
+ unsigned long time_stamp;
+ struct sk_buff *skb;
+ unsigned int bytecount;
+ unsigned short gso_segs;
+ __be16 protocol;
+ DEFINE_DMA_UNMAP_ADDR(dma);
+ DEFINE_DMA_UNMAP_LEN(len);
+ u32 tx_flags;
+};
+
+struct txgbe_rx_buffer {
+ struct sk_buff *skb;
+ dma_addr_t dma;
+ dma_addr_t page_dma;
+ struct page *page;
+ unsigned int page_offset;
+};
+
+struct txgbe_queue_stats {
+ u64 packets;
+ u64 bytes;
+#ifdef BP_EXTENDED_STATS
+ u64 yields;
+ u64 misses;
+ u64 cleaned;
+#endif /* BP_EXTENDED_STATS */
+};
+
+struct txgbe_tx_queue_stats {
+ u64 restart_queue;
+ u64 tx_busy;
+ u64 tx_done_old;
+};
+
+struct txgbe_rx_queue_stats {
+ u64 rsc_count;
+ u64 rsc_flush;
+ u64 non_eop_descs;
+ u64 alloc_rx_page_failed;
+ u64 alloc_rx_buff_failed;
+ u64 csum_good_cnt;
+ u64 csum_err;
+};
+
+#define TXGBE_TS_HDR_LEN 8
+enum txgbe_ring_state_t {
+ __TXGBE_RX_3K_BUFFER,
+ __TXGBE_RX_BUILD_SKB_ENABLED,
+ __TXGBE_TX_FDIR_INIT_DONE,
+ __TXGBE_TX_XPS_INIT_DONE,
+ __TXGBE_TX_DETECT_HANG,
+ __TXGBE_HANG_CHECK_ARMED,
+ __TXGBE_RX_HS_ENABLED,
+ __TXGBE_RX_RSC_ENABLED,
+};
+
+struct txgbe_fwd_adapter {
+ unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)];
+ struct net_device *vdev;
+ struct txgbe_adapter *adapter;
+ unsigned int tx_base_queue;
+ unsigned int rx_base_queue;
+ int index; /* pool index on PF */
+};
+
+#define ring_uses_build_skb(ring) \
+ test_bit(__TXGBE_RX_BUILD_SKB_ENABLED, &(ring)->state)
+
+#define ring_is_hs_enabled(ring) \
+ test_bit(__TXGBE_RX_HS_ENABLED, &(ring)->state)
+#define set_ring_hs_enabled(ring) \
+ set_bit(__TXGBE_RX_HS_ENABLED, &(ring)->state)
+#define clear_ring_hs_enabled(ring) \
+ clear_bit(__TXGBE_RX_HS_ENABLED, &(ring)->state)
+#define check_for_tx_hang(ring) \
+ test_bit(__TXGBE_TX_DETECT_HANG, &(ring)->state)
+#define set_check_for_tx_hang(ring) \
+ set_bit(__TXGBE_TX_DETECT_HANG, &(ring)->state)
+#define clear_check_for_tx_hang(ring) \
+ clear_bit(__TXGBE_TX_DETECT_HANG, &(ring)->state)
+#define ring_is_rsc_enabled(ring) \
+ test_bit(__TXGBE_RX_RSC_ENABLED, &(ring)->state)
+#define set_ring_rsc_enabled(ring) \
+ set_bit(__TXGBE_RX_RSC_ENABLED, &(ring)->state)
+#define clear_ring_rsc_enabled(ring) \
+ clear_bit(__TXGBE_RX_RSC_ENABLED, &(ring)->state)
+
+struct txgbe_ring {
+ struct txgbe_ring *next; /* pointer to next ring in q_vector */
+ struct txgbe_q_vector *q_vector; /* backpointer to host q_vector */
+ struct net_device *netdev; /* netdev ring belongs to */
+ struct device *dev; /* device for DMA mapping */
+ struct txgbe_fwd_adapter *accel;
+ void *desc; /* descriptor ring memory */
+ union {
+ struct txgbe_tx_buffer *tx_buffer_info;
+ struct txgbe_rx_buffer *rx_buffer_info;
+ };
+ unsigned long state;
+ u8 __iomem *tail;
+ dma_addr_t dma; /* phys. address of descriptor ring */
+ unsigned int size; /* length in bytes */
+
+	u16 count;			/* number of descriptors */
+
+ u8 queue_index; /* needed for multiqueue queue management */
+ u8 reg_idx; /* holds the special value that gets
+ * the hardware register offset
+ * associated with this ring, which is
+ * different for DCB and RSS modes
+ */
+ u16 next_to_use;
+ u16 next_to_clean;
+ unsigned long last_rx_timestamp;
+ u16 rx_buf_len;
+ union {
+ u16 next_to_alloc;
+ struct {
+ u8 atr_sample_rate;
+ u8 atr_count;
+ };
+ };
+
+ u8 dcb_tc;
+ struct txgbe_queue_stats stats;
+ struct u64_stats_sync syncp;
+
+ union {
+ struct txgbe_tx_queue_stats tx_stats;
+ struct txgbe_rx_queue_stats rx_stats;
+ };
+} ____cacheline_internodealigned_in_smp;
+
+enum txgbe_ring_f_enum {
+ RING_F_NONE = 0,
+ RING_F_VMDQ, /* SR-IOV uses the same ring feature */
+ RING_F_RSS,
+ RING_F_FDIR,
+ RING_F_ARRAY_SIZE /* must be last in enum set */
+};
+
+#define TXGBE_MAX_DCB_INDICES 8
+#define TXGBE_MAX_RSS_INDICES 63
+#define TXGBE_MAX_VMDQ_INDICES 64
+#define TXGBE_MAX_FDIR_INDICES 63
+
+#define MAX_RX_QUEUES (TXGBE_MAX_FDIR_INDICES + 1)
+#define MAX_TX_QUEUES (TXGBE_MAX_FDIR_INDICES + 1)
+
+#define TXGBE_MAX_L2A_QUEUES 4
+#define TXGBE_BAD_L2A_QUEUE 3
+
+#define TXGBE_MAX_MACVLANS 32
+#define TXGBE_MAX_DCBMACVLANS 8
+
+struct txgbe_ring_feature {
+ u16 limit; /* upper limit on feature indices */
+ u16 indices; /* current value of indices */
+ u16 mask; /* Mask used for feature to ring mapping */
+ u16 offset; /* offset to start of feature */
+};
+
+#define TXGBE_VMDQ_8Q_MASK 0x78
+#define TXGBE_VMDQ_4Q_MASK 0x7C
+#define TXGBE_VMDQ_2Q_MASK 0x7E
+
+/*
+ * FCoE requires that all Rx buffers be over 2200 bytes in length. Since
+ * this is twice the size of a half page we need to double the page order
+ * for FCoE enabled Rx queues.
+ */
+static inline unsigned int txgbe_rx_bufsz(struct txgbe_ring __maybe_unused *ring)
+{
+#if MAX_SKB_FRAGS < 8
+ return ALIGN(TXGBE_MAX_RXBUFFER / MAX_SKB_FRAGS, 1024);
+#else
+ return TXGBE_RXBUFFER_2K;
+#endif
+}
+
+static inline unsigned int txgbe_rx_pg_order(struct txgbe_ring __maybe_unused *ring)
+{
+ return 0;
+}
+#define txgbe_rx_pg_size(_ring) (PAGE_SIZE << txgbe_rx_pg_order(_ring))
+
+struct txgbe_ring_container {
+ struct txgbe_ring *ring; /* pointer to linked list of rings */
+ unsigned int total_bytes; /* total bytes processed this int */
+ unsigned int total_packets; /* total packets processed this int */
+ u16 work_limit; /* total work allowed per interrupt */
+ u8 count; /* total number of rings in vector */
+ u8 itr; /* current ITR setting for ring */
+};
+
+/* iterator for handling rings in ring container */
+#define txgbe_for_each_ring(pos, head) \
+ for (pos = (head).ring; pos != NULL; pos = pos->next)
+
+#define MAX_RX_PACKET_BUFFERS ((adapter->flags & TXGBE_FLAG_DCB_ENABLED) \
+ ? 8 : 1)
+#define MAX_TX_PACKET_BUFFERS MAX_RX_PACKET_BUFFERS
+
+/* MAX_MSIX_Q_VECTORS of these are allocated,
+ * but we only use one per queue-specific vector.
+ */
+struct txgbe_q_vector {
+ struct txgbe_adapter *adapter;
+ int cpu; /* CPU for DCA */
+ u16 v_idx; /* index of q_vector within array, also used for
+ * finding the bit in EICR and friends that
+ * represents the vector for this ring */
+ u16 itr; /* Interrupt throttle rate written to EITR */
+ struct txgbe_ring_container rx, tx;
+
+ struct napi_struct napi;
+ cpumask_t affinity_mask;
+ int numa_node;
+ struct rcu_head rcu; /* to avoid race with update stats on free */
+ char name[IFNAMSIZ + 17];
+ bool netpoll_rx;
+
+ /* for dynamic allocation of rings associated with this q_vector */
+ struct txgbe_ring ring[0] ____cacheline_internodealigned_in_smp;
+};
+
+/*
+ * Microsecond values for the various ITR rates, shifted by 2 to fit the
+ * ITR register, whose first 3 bits are reserved as 0.
+ */
+#define TXGBE_MIN_RSC_ITR 24
+#define TXGBE_100K_ITR 40
+#define TXGBE_20K_ITR 200
+#define TXGBE_16K_ITR 248
+#define TXGBE_12K_ITR 336
+
+/* txgbe_test_staterr - tests bits in Rx descriptor status and error fields */
+static inline __le32 txgbe_test_staterr(union txgbe_rx_desc *rx_desc,
+ const u32 stat_err_bits)
+{
+ return rx_desc->wb.upper.status_error & cpu_to_le32(stat_err_bits);
+}
+
+/* txgbe_desc_unused - calculate if we have unused descriptors */
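+/*
+ * The ring is circular: the subtraction is adjusted by the ring length
+ * when next_to_use has passed next_to_clean numerically, and one slot is
+ * always kept empty so a full ring can be told apart from an empty one.
+ */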
+static inline u16 txgbe_desc_unused(struct txgbe_ring *ring)
+{
+ u16 ntc = ring->next_to_clean;
+ u16 ntu = ring->next_to_use;
+
+ return ((ntc > ntu) ? 0 : ring->count) + ntc - ntu - 1;
+}
+
+#define TXGBE_RX_DESC(R, i) \
+ (&(((union txgbe_rx_desc *)((R)->desc))[i]))
+#define TXGBE_TX_DESC(R, i) \
+ (&(((union txgbe_tx_desc *)((R)->desc))[i]))
+#define TXGBE_TX_CTXTDESC(R, i) \
+ (&(((struct txgbe_tx_context_desc *)((R)->desc))[i]))
+
+#define TXGBE_MAX_JUMBO_FRAME_SIZE 9432 /* max payload 9414 */
+
+#define TCP_TIMER_VECTOR 0
+#define OTHER_VECTOR 1
+#define NON_Q_VECTORS (OTHER_VECTOR + TCP_TIMER_VECTOR)
+
+#define TXGBE_MAX_MSIX_Q_VECTORS_SAPPHIRE 64
+
+struct txgbe_mac_addr {
+ u8 addr[ETH_ALEN];
+ u16 state; /* bitmask */
+ u64 pools;
+};
+
+#define TXGBE_MAC_STATE_DEFAULT 0x1
+#define TXGBE_MAC_STATE_MODIFIED 0x2
+#define TXGBE_MAC_STATE_IN_USE 0x4
+
+/*
+ * Only for array allocations in our adapter struct. We can actually
+ * assign 64 queue vectors based on our extended-extended interrupt
+ * registers.
+ */
+#define MAX_MSIX_Q_VECTORS TXGBE_MAX_MSIX_Q_VECTORS_SAPPHIRE
+#define MAX_MSIX_COUNT TXGBE_MAX_MSIX_VECTORS_SAPPHIRE
+
+#define MIN_MSIX_Q_VECTORS 1
+#define MIN_MSIX_COUNT (MIN_MSIX_Q_VECTORS + NON_Q_VECTORS)
+
+/* default to trying for four seconds */
+#define TXGBE_TRY_LINK_TIMEOUT (4 * HZ)
+#define TXGBE_SFP_POLL_JIFFIES (2 * HZ) /* SFP poll every 2 seconds */
+
+/**
+ * txgbe_adapter.flag
+ **/
+#define TXGBE_FLAG_MSI_CAPABLE (u32)(1 << 0)
+#define TXGBE_FLAG_MSI_ENABLED (u32)(1 << 1)
+#define TXGBE_FLAG_MSIX_CAPABLE (u32)(1 << 2)
+#define TXGBE_FLAG_MSIX_ENABLED (u32)(1 << 3)
+#define TXGBE_FLAG_LLI_PUSH (u32)(1 << 4)
+
+#define TXGBE_FLAG_TPH_ENABLED (u32)(1 << 6)
+#define TXGBE_FLAG_TPH_CAPABLE (u32)(1 << 7)
+#define TXGBE_FLAG_TPH_ENABLED_DATA (u32)(1 << 8)
+
+#define TXGBE_FLAG_MQ_CAPABLE (u32)(1 << 9)
+#define TXGBE_FLAG_DCB_ENABLED (u32)(1 << 10)
+#define TXGBE_FLAG_VMDQ_ENABLED (u32)(1 << 11)
+#define TXGBE_FLAG_FAN_FAIL_CAPABLE (u32)(1 << 12)
+#define TXGBE_FLAG_NEED_LINK_UPDATE (u32)(1 << 13)
+#define TXGBE_FLAG_NEED_LINK_CONFIG (u32)(1 << 14)
+#define TXGBE_FLAG_FDIR_HASH_CAPABLE (u32)(1 << 15)
+#define TXGBE_FLAG_FDIR_PERFECT_CAPABLE (u32)(1 << 16)
+#define TXGBE_FLAG_SRIOV_CAPABLE (u32)(1 << 19)
+#define TXGBE_FLAG_SRIOV_ENABLED (u32)(1 << 20)
+#define TXGBE_FLAG_SRIOV_REPLICATION_ENABLE (u32)(1 << 21)
+#define TXGBE_FLAG_SRIOV_L2SWITCH_ENABLE (u32)(1 << 22)
+#define TXGBE_FLAG_SRIOV_VEPA_BRIDGE_MODE (u32)(1 << 23)
+#define TXGBE_FLAG_RX_HWTSTAMP_ENABLED (u32)(1 << 24)
+#define TXGBE_FLAG_VXLAN_OFFLOAD_CAPABLE (u32)(1 << 25)
+#define TXGBE_FLAG_VXLAN_OFFLOAD_ENABLE (u32)(1 << 26)
+#define TXGBE_FLAG_RX_HWTSTAMP_IN_REGISTER (u32)(1 << 27)
+#define TXGBE_FLAG_NEED_ETH_PHY_RESET (u32)(1 << 28)
+#define TXGBE_FLAG_RX_HS_ENABLED (u32)(1 << 30)
+#define TXGBE_FLAG_LINKSEC_ENABLED (u32)(1 << 31)
+#define TXGBE_FLAG_IPSEC_ENABLED (u32)(1 << 5)
+
+/* preset defaults */
+#define TXGBE_FLAGS_SP_INIT (TXGBE_FLAG_MSI_CAPABLE \
+ | TXGBE_FLAG_MSIX_CAPABLE \
+ | TXGBE_FLAG_MQ_CAPABLE \
+ | TXGBE_FLAG_SRIOV_CAPABLE)
+
+/**
+ * txgbe_adapter.flag2
+ **/
+#define TXGBE_FLAG2_RSC_CAPABLE (1U << 0)
+#define TXGBE_FLAG2_RSC_ENABLED (1U << 1)
+#define TXGBE_FLAG2_TEMP_SENSOR_CAPABLE (1U << 3)
+#define TXGBE_FLAG2_TEMP_SENSOR_EVENT (1U << 4)
+#define TXGBE_FLAG2_SEARCH_FOR_SFP (1U << 5)
+#define TXGBE_FLAG2_SFP_NEEDS_RESET (1U << 6)
+#define TXGBE_FLAG2_PF_RESET_REQUESTED (1U << 7)
+#define TXGBE_FLAG2_FDIR_REQUIRES_REINIT (1U << 8)
+#define TXGBE_FLAG2_RSS_FIELD_IPV4_UDP (1U << 9)
+#define TXGBE_FLAG2_RSS_FIELD_IPV6_UDP (1U << 10)
+#define TXGBE_FLAG2_RSS_ENABLED (1U << 12)
+#define TXGBE_FLAG2_PTP_PPS_ENABLED (1U << 11)
+#define TXGBE_FLAG2_EEE_CAPABLE (1U << 14)
+#define TXGBE_FLAG2_EEE_ENABLED (1U << 15)
+#define TXGBE_FLAG2_VXLAN_REREG_NEEDED (1U << 16)
+#define TXGBE_FLAG2_DEV_RESET_REQUESTED (1U << 18)
+#define TXGBE_FLAG2_RESET_INTR_RECEIVED (1U << 19)
+#define TXGBE_FLAG2_GLOBAL_RESET_REQUESTED (1U << 20)
+#define TXGBE_FLAG2_CLOUD_SWITCH_ENABLED (1U << 21)
+#define TXGBE_FLAG2_MNG_REG_ACCESS_DISABLED (1U << 22)
+#define KR (1U << 23)
+#define TXGBE_FLAG2_KR_TRAINING (1U << 24)
+#define TXGBE_FLAG2_KR_AUTO (1U << 25)
+#define TXGBE_FLAG2_LINK_DOWN (1U << 26)
+#define TXGBE_FLAG2_KR_PRO_DOWN (1U << 27)
+#define TXGBE_FLAG2_KR_PRO_REINIT (1U << 28)
+#define TXGBE_FLAG2_PCIE_NEED_RECOVER (1U << 31)
+
+
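+/*
+ * Move a single flag bit between two bit positions: the masked input is
+ * scaled up by multiplication when the source bit is below the target,
+ * or down by division otherwise. Both _flag and _result must be
+ * single-bit masks.
+ */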
+#define TXGBE_SET_FLAG(_input, _flag, _result) \
+ ((_flag <= _result) ? \
+ ((u32)(_input & _flag) * (_result / _flag)) : \
+ ((u32)(_input & _flag) / (_flag / _result)))
+
+enum txgbe_isb_idx {
+ TXGBE_ISB_HEADER,
+ TXGBE_ISB_MISC,
+ TXGBE_ISB_VEC0,
+ TXGBE_ISB_VEC1,
+ TXGBE_ISB_MAX
+};
+
+/* board specific private data structure */
+struct txgbe_adapter {
+ unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)];
+ /* OS defined structs */
+ struct net_device *netdev;
+ struct pci_dev *pdev;
+
+ unsigned long state;
+
+ /* Some features need tri-state capability,
+ * thus the additional *_CAPABLE flags.
+ */
+ u32 flags;
+ u32 flags2;
+ u32 vf_mode;
+ u32 backplane_an;
+ u32 an73;
+ u32 an37;
+ u32 ffe_main;
+ u32 ffe_pre;
+ u32 ffe_post;
+ u32 ffe_set;
+ u32 backplane_mode;
+ u32 backplane_auto;
+
+ bool cloud_mode;
+
+ /* Tx fast path data */
+ int num_tx_queues;
+ u16 tx_itr_setting;
+ u16 tx_work_limit;
+
+ /* Rx fast path data */
+ int num_rx_queues;
+ u16 rx_itr_setting;
+ u16 rx_work_limit;
+
+ unsigned int num_vmdqs; /* does not include pools assigned to VFs */
+ unsigned int queues_per_pool;
+
+ /* TX */
+ struct txgbe_ring *tx_ring[MAX_TX_QUEUES] ____cacheline_aligned_in_smp;
+
+ u64 restart_queue;
+ u64 lsc_int;
+ u32 tx_timeout_count;
+
+ /* RX */
+ struct txgbe_ring *rx_ring[MAX_RX_QUEUES];
+ u64 hw_csum_rx_error;
+ u64 hw_csum_rx_good;
+ u64 hw_rx_no_dma_resources;
+ u64 rsc_total_count;
+ u64 rsc_total_flush;
+ u64 non_eop_descs;
+ u32 alloc_rx_page_failed;
+ u32 alloc_rx_buff_failed;
+
+ struct txgbe_q_vector *q_vector[MAX_MSIX_Q_VECTORS];
+
+ u8 dcb_set_bitmap;
+ u8 dcbx_cap;
+ enum txgbe_fc_mode last_lfc_mode;
+
+ int num_q_vectors; /* current number of q_vectors for device */
+ int max_q_vectors; /* upper limit of q_vectors for device */
+ struct txgbe_ring_feature ring_feature[RING_F_ARRAY_SIZE];
+ struct msix_entry *msix_entries;
+
+ u64 test_icr;
+ struct txgbe_ring test_tx_ring;
+ struct txgbe_ring test_rx_ring;
+
+ /* structs defined in txgbe_hw.h */
+ struct txgbe_hw hw;
+ u16 msg_enable;
+ struct txgbe_hw_stats stats;
+ u32 lli_port;
+ u32 lli_size;
+ u32 lli_etype;
+ u32 lli_vlan_pri;
+
+ u32 *config_space;
+ u64 tx_busy;
+ unsigned int tx_ring_count;
+ unsigned int rx_ring_count;
+
+ u32 link_speed;
+ bool link_up;
+ unsigned long sfp_poll_time;
+ unsigned long link_check_timeout;
+
+ struct timer_list service_timer;
+ struct work_struct service_task;
+ struct hlist_head fdir_filter_list;
+ unsigned long fdir_overflow; /* number of times ATR was backed off */
+ union txgbe_atr_input fdir_mask;
+ int fdir_filter_count;
+ u32 fdir_pballoc;
+ u32 atr_sample_rate;
+ spinlock_t fdir_perfect_lock;
+
+ u8 __iomem *io_addr; /* Mainly for iounmap use */
+ u32 wol;
+
+ u16 bd_number;
+ u16 bridge_mode;
+
+ char eeprom_id[32];
+ u16 eeprom_cap;
+ bool netdev_registered;
+ u32 interrupt_event;
+ u32 led_reg;
+
+ struct ptp_clock *ptp_clock;
+ struct ptp_clock_info ptp_caps;
+ struct work_struct ptp_tx_work;
+ struct sk_buff *ptp_tx_skb;
+ struct hwtstamp_config tstamp_config;
+ unsigned long ptp_tx_start;
+ unsigned long last_overflow_check;
+ unsigned long last_rx_ptp_check;
+ spinlock_t tmreg_lock;
+ struct cyclecounter hw_cc;
+ struct timecounter hw_tc;
+ u32 base_incval;
+ u32 tx_hwtstamp_timeouts;
+ u32 tx_hwtstamp_skipped;
+ u32 rx_hwtstamp_cleared;
+ void (*ptp_setup_sdp) (struct txgbe_adapter *);
+
+ DECLARE_BITMAP(active_vfs, TXGBE_MAX_VF_FUNCTIONS);
+ unsigned int num_vfs;
+ struct vf_data_storage *vfinfo;
+ struct vf_macvlans vf_mvs;
+ struct vf_macvlans *mv_list;
+ struct txgbe_mac_addr *mac_table;
+
+ __le16 vxlan_port;
+ __le16 geneve_port;
+
+ u8 default_up;
+
+ unsigned long fwd_bitmask; /* bitmask indicating in use pools */
+ unsigned long tx_timeout_last_recovery;
+ u32 tx_timeout_recovery_level;
+
+#define TXGBE_MAX_RETA_ENTRIES 128
+ u8 rss_indir_tbl[TXGBE_MAX_RETA_ENTRIES];
+#define TXGBE_RSS_KEY_SIZE 40
+ u32 rss_key[TXGBE_RSS_KEY_SIZE / sizeof(u32)];
+
+ void *ipsec;
+
+ /* misc interrupt status block */
+ dma_addr_t isb_dma;
+ u32 *isb_mem;
+ u32 isb_tag[TXGBE_ISB_MAX];
+};
+
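+/*
+ * Sample one entry of the DMA-mapped interrupt status block, recording
+ * the header tag that was current at the time of the read so successive
+ * reads of this index can be correlated with ISB updates.
+ */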
+static inline u32 txgbe_misc_isb(struct txgbe_adapter *adapter,
+				 enum txgbe_isb_idx idx)
+{
+	u32 cur_tag = adapter->isb_mem[TXGBE_ISB_HEADER];
+
+	adapter->isb_tag[idx] = cur_tag;
+
+	return adapter->isb_mem[idx];
+}
+
+static inline u8 txgbe_max_rss_indices(struct txgbe_adapter *adapter)
+{
+ return TXGBE_MAX_RSS_INDICES;
+}
+
+struct txgbe_fdir_filter {
+ struct hlist_node fdir_node;
+ union txgbe_atr_input filter;
+ u16 sw_idx;
+ u16 action;
+};
+
+enum txgbe_state_t {
+ __TXGBE_TESTING,
+ __TXGBE_RESETTING,
+ __TXGBE_DOWN,
+ __TXGBE_HANGING,
+ __TXGBE_DISABLED,
+ __TXGBE_REMOVING,
+ __TXGBE_SERVICE_SCHED,
+ __TXGBE_SERVICE_INITED,
+ __TXGBE_IN_SFP_INIT,
+ __TXGBE_PTP_RUNNING,
+ __TXGBE_PTP_TX_IN_PROGRESS,
+};
+
+struct txgbe_cb {
+ dma_addr_t dma;
+ u16 append_cnt; /* number of skb's appended */
+ bool page_released;
+ bool dma_released;
+};
+#define TXGBE_CB(skb) ((struct txgbe_cb *)(skb)->cb)
+
+/* ESX txgbe CIM IOCTL definition */
+
+extern struct dcbnl_rtnl_ops dcbnl_ops;
+int txgbe_copy_dcb_cfg(struct txgbe_adapter *adapter, int tc_max);
+
+u8 txgbe_dcb_txq_to_tc(struct txgbe_adapter *adapter, u8 index);
+
+/* needed by txgbe_main.c */
+int txgbe_validate_mac_addr(u8 *mc_addr);
+void txgbe_check_options(struct txgbe_adapter *adapter);
+void txgbe_assign_netdev_ops(struct net_device *netdev);
+
+/* needed by txgbe_ethtool.c */
+extern char txgbe_driver_name[];
+extern const char txgbe_driver_version[];
+
+void txgbe_irq_disable(struct txgbe_adapter *adapter);
+void txgbe_irq_enable(struct txgbe_adapter *adapter, bool queues, bool flush);
+int txgbe_open(struct net_device *netdev);
+int txgbe_close(struct net_device *netdev);
+void txgbe_up(struct txgbe_adapter *adapter);
+void txgbe_down(struct txgbe_adapter *adapter);
+void txgbe_reinit_locked(struct txgbe_adapter *adapter);
+void txgbe_reset(struct txgbe_adapter *adapter);
+void txgbe_set_ethtool_ops(struct net_device *netdev);
+int txgbe_setup_rx_resources(struct txgbe_ring *);
+int txgbe_setup_tx_resources(struct txgbe_ring *);
+void txgbe_free_rx_resources(struct txgbe_ring *);
+void txgbe_free_tx_resources(struct txgbe_ring *);
+void txgbe_configure_rx_ring(struct txgbe_adapter *,
+ struct txgbe_ring *);
+void txgbe_configure_tx_ring(struct txgbe_adapter *,
+ struct txgbe_ring *);
+void txgbe_update_stats(struct txgbe_adapter *adapter);
+int txgbe_init_interrupt_scheme(struct txgbe_adapter *adapter);
+void txgbe_reset_interrupt_capability(struct txgbe_adapter *adapter);
+void txgbe_set_interrupt_capability(struct txgbe_adapter *adapter);
+void txgbe_clear_interrupt_scheme(struct txgbe_adapter *adapter);
+bool txgbe_is_txgbe(struct pci_dev *pcidev);
+netdev_tx_t txgbe_xmit_frame_ring(struct sk_buff *,
+ struct txgbe_adapter *,
+ struct txgbe_ring *);
+void txgbe_unmap_and_free_tx_resource(struct txgbe_ring *,
+ struct txgbe_tx_buffer *);
+void txgbe_alloc_rx_buffers(struct txgbe_ring *, u16);
+void txgbe_configure_rscctl(struct txgbe_adapter *adapter,
+ struct txgbe_ring *);
+void txgbe_clear_rscctl(struct txgbe_adapter *adapter,
+ struct txgbe_ring *);
+void txgbe_clear_vxlan_port(struct txgbe_adapter *);
+void txgbe_set_rx_mode(struct net_device *netdev);
+int txgbe_write_mc_addr_list(struct net_device *netdev);
+int txgbe_setup_tc(struct net_device *dev, u8 tc);
+void txgbe_tx_ctxtdesc(struct txgbe_ring *, u32, u32, u32, u32);
+void txgbe_do_reset(struct net_device *netdev);
+void txgbe_write_eitr(struct txgbe_q_vector *q_vector);
+int txgbe_poll(struct napi_struct *napi, int budget);
+void txgbe_disable_rx_queue(struct txgbe_adapter *adapter,
+ struct txgbe_ring *);
+void txgbe_vlan_strip_enable(struct txgbe_adapter *adapter);
+void txgbe_vlan_strip_disable(struct txgbe_adapter *adapter);
+
+void txgbe_dump(struct txgbe_adapter *adapter);
+
+static inline struct netdev_queue *txring_txq(const struct txgbe_ring *ring)
+{
+ return netdev_get_tx_queue(ring->netdev, ring->queue_index);
+}
+
+int txgbe_wol_supported(struct txgbe_adapter *adapter);
+int txgbe_get_settings(struct net_device *netdev,
+ struct ethtool_cmd *ecmd);
+int txgbe_write_uc_addr_list(struct net_device *netdev, int pool);
+void txgbe_full_sync_mac_table(struct txgbe_adapter *adapter);
+int txgbe_add_mac_filter(struct txgbe_adapter *adapter,
+ u8 *addr, u16 pool);
+int txgbe_del_mac_filter(struct txgbe_adapter *adapter,
+ u8 *addr, u16 pool);
+int txgbe_available_rars(struct txgbe_adapter *adapter);
+void txgbe_vlan_mode(struct net_device *, u32);
+
+void txgbe_ptp_init(struct txgbe_adapter *adapter);
+void txgbe_ptp_stop(struct txgbe_adapter *adapter);
+void txgbe_ptp_suspend(struct txgbe_adapter *adapter);
+void txgbe_ptp_overflow_check(struct txgbe_adapter *adapter);
+void txgbe_ptp_rx_hang(struct txgbe_adapter *adapter);
+void txgbe_ptp_rx_hwtstamp(struct txgbe_adapter *adapter, struct sk_buff *skb);
+int txgbe_ptp_set_ts_config(struct txgbe_adapter *adapter, struct ifreq *ifr);
+int txgbe_ptp_get_ts_config(struct txgbe_adapter *adapter, struct ifreq *ifr);
+void txgbe_ptp_start_cyclecounter(struct txgbe_adapter *adapter);
+void txgbe_ptp_reset(struct txgbe_adapter *adapter);
+void txgbe_ptp_check_pps_event(struct txgbe_adapter *adapter);
+
+void txgbe_set_rx_drop_en(struct txgbe_adapter *adapter);
+
+u32 txgbe_rss_indir_tbl_entries(struct txgbe_adapter *adapter);
+void txgbe_store_reta(struct txgbe_adapter *adapter);
+
+/**
+ * Interrupt masking operations. Each bit in PX_ICn corresponds to an
+ * interrupt: disable an interrupt by writing PX_IMS with the matching
+ * bit set, enable it by writing PX_IMC with the bit set, and trigger
+ * it by writing PX_ICS with the bit set.
+ **/
+#define TXGBE_INTR_ALL (~0ULL)
+#define TXGBE_INTR_MISC(A) (1ULL << (A)->num_q_vectors)
+#define TXGBE_INTR_QALL(A) (TXGBE_INTR_MISC(A) - 1)
+#define TXGBE_INTR_Q(i) (1ULL << (i))
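+/*
+ * Queue vectors occupy interrupt bits [0, num_q_vectors); the misc vector
+ * sits just above them, so TXGBE_INTR_MISC(A) - 1 masks every queue bit.
+ */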
+static inline void txgbe_intr_enable(struct txgbe_hw *hw, u64 qmask)
+{
+ u32 mask;
+
+ mask = (qmask & 0xFFFFFFFF);
+ if (mask)
+ wr32(hw, TXGBE_PX_IMC(0), mask);
+ mask = (qmask >> 32);
+ if (mask)
+ wr32(hw, TXGBE_PX_IMC(1), mask);
+
+ /* skip the flush */
+}
+
+static inline void txgbe_intr_disable(struct txgbe_hw *hw, u64 qmask)
+{
+ u32 mask;
+
+ mask = (qmask & 0xFFFFFFFF);
+ if (mask)
+ wr32(hw, TXGBE_PX_IMS(0), mask);
+ mask = (qmask >> 32);
+ if (mask)
+ wr32(hw, TXGBE_PX_IMS(1), mask);
+
+ /* skip the flush */
+}
+
+static inline void txgbe_intr_trigger(struct txgbe_hw *hw, u64 qmask)
+{
+ u32 mask;
+
+ mask = (qmask & 0xFFFFFFFF);
+ if (mask)
+ wr32(hw, TXGBE_PX_ICS(0), mask);
+ mask = (qmask >> 32);
+ if (mask)
+ wr32(hw, TXGBE_PX_ICS(1), mask);
+
+ /* skip the flush */
+}
+
+#define TXGBE_RING_SIZE(R) ((R)->count < TXGBE_MAX_TXD ? (R)->count / 128 : 0)
+
+/* move from txgbe_osdep.h */
+#define TXGBE_CPU_TO_BE16(_x) cpu_to_be16(_x)
+#define TXGBE_BE16_TO_CPU(_x) be16_to_cpu(_x)
+#define TXGBE_CPU_TO_BE32(_x) cpu_to_be32(_x)
+#define TXGBE_BE32_TO_CPU(_x) be32_to_cpu(_x)
+
+#define msec_delay(_x) msleep(_x)
+
+#define usec_delay(_x) udelay(_x)
+
+#define STATIC static
+
+#define TXGBE_NAME "txgbe"
+
+#define DPRINTK(nlevel, klevel, fmt, args...) \
+ ((void)((NETIF_MSG_##nlevel & adapter->msg_enable) && \
+ printk(KERN_##klevel TXGBE_NAME ": %s: %s: " fmt, \
+ adapter->netdev->name, \
+ __func__, ## args)))
+
+#ifndef _WIN32
+#define txgbe_emerg(fmt, ...) printk(KERN_EMERG fmt, ## __VA_ARGS__)
+#define txgbe_alert(fmt, ...) printk(KERN_ALERT fmt, ## __VA_ARGS__)
+#define txgbe_crit(fmt, ...) printk(KERN_CRIT fmt, ## __VA_ARGS__)
+#define txgbe_error(fmt, ...) printk(KERN_ERR fmt, ## __VA_ARGS__)
+#define txgbe_warn(fmt, ...) printk(KERN_WARNING fmt, ## __VA_ARGS__)
+#define txgbe_notice(fmt, ...) printk(KERN_NOTICE fmt, ## __VA_ARGS__)
+#define txgbe_info(fmt, ...) printk(KERN_INFO fmt, ## __VA_ARGS__)
+#define txgbe_print(fmt, ...) printk(KERN_DEBUG fmt, ## __VA_ARGS__)
+#define txgbe_trace(fmt, ...) printk(KERN_INFO fmt, ## __VA_ARGS__)
+#else /* _WIN32 */
+#define txgbe_error(lvl, fmt, ...) \
+ DbgPrintEx(DPFLTR_IHVNETWORK_ID, DPFLTR_ERROR_LEVEL, \
+ "%s-error: %s@%d, " fmt, \
+ "txgbe", __FUNCTION__, __LINE__, ## __VA_ARGS__)
+#endif /* !_WIN32 */
+
+#ifdef DBG
+#ifndef _WIN32
+#define txgbe_debug(fmt, ...) \
+ printk(KERN_DEBUG \
+ "%s-debug: %s@%d, " fmt, \
+ "txgbe", __FUNCTION__, __LINE__, ## __VA_ARGS__)
+#else /* _WIN32 */
+#define txgbe_debug(fmt, ...) \
+ DbgPrintEx(DPFLTR_IHVNETWORK_ID, DPFLTR_ERROR_LEVEL, \
+ "%s-debug: %s@%d, " fmt, \
+ "txgbe", __FUNCTION__, __LINE__, ## __VA_ARGS__)
+#endif /* _WIN32 */
+#else /* DBG */
+#define txgbe_debug(fmt, ...) do {} while (0)
+#endif /* DBG */
+
+
+#ifdef DBG
+#define ASSERT(_x) BUG_ON(!(_x))
+#define DEBUGOUT(S) printk(KERN_DEBUG S)
+#define DEBUGOUT1(S, A...) printk(KERN_DEBUG S, ## A)
+#define DEBUGOUT2(S, A...) printk(KERN_DEBUG S, ## A)
+#define DEBUGOUT3(S, A...) printk(KERN_DEBUG S, ## A)
+#define DEBUGOUT4(S, A...) printk(KERN_DEBUG S, ## A)
+#define DEBUGOUT5(S, A...) printk(KERN_DEBUG S, ## A)
+#define DEBUGOUT6(S, A...) printk(KERN_DEBUG S, ## A)
+#define DEBUGFUNC(fmt, ...) txgbe_debug(fmt, ## __VA_ARGS__)
+#else
+#define ASSERT(_x) do {} while (0)
+#define DEBUGOUT(S) do {} while (0)
+#define DEBUGOUT1(S, A...) do {} while (0)
+#define DEBUGOUT2(S, A...) do {} while (0)
+#define DEBUGOUT3(S, A...) do {} while (0)
+#define DEBUGOUT4(S, A...) do {} while (0)
+#define DEBUGOUT5(S, A...) do {} while (0)
+#define DEBUGOUT6(S, A...) do {} while (0)
+#define DEBUGFUNC(fmt, ...) do {} while (0)
+#endif
+
+
+struct txgbe_msg {
+ u16 msg_enable;
+};
+
+__attribute__((unused)) static struct net_device *txgbe_hw_to_netdev(const struct txgbe_hw *hw)
+{
+ return ((struct txgbe_adapter *)hw->back)->netdev;
+}
+
+__attribute__((unused)) static struct txgbe_msg *txgbe_hw_to_msg(const struct txgbe_hw *hw)
+{
+ struct txgbe_adapter *adapter =
+ container_of(hw, struct txgbe_adapter, hw);
+ return (struct txgbe_msg *)&adapter->msg_enable;
+}
+
+static inline struct device *pci_dev_to_dev(struct pci_dev *pdev)
+{
+ return &pdev->dev;
+}
+
+#define hw_dbg(hw, format, arg...) \
+ netdev_dbg(txgbe_hw_to_netdev(hw), format, ## arg)
+#define hw_err(hw, format, arg...) \
+ netdev_err(txgbe_hw_to_netdev(hw), format, ## arg)
+#define e_dev_info(format, arg...) \
+ dev_info(pci_dev_to_dev(adapter->pdev), format, ## arg)
+#define e_dev_warn(format, arg...) \
+ dev_warn(pci_dev_to_dev(adapter->pdev), format, ## arg)
+#define e_dev_err(format, arg...) \
+ dev_err(pci_dev_to_dev(adapter->pdev), format, ## arg)
+#define e_dev_notice(format, arg...) \
+ dev_notice(pci_dev_to_dev(adapter->pdev), format, ## arg)
+#define e_dbg(msglvl, format, arg...) \
+ netif_dbg(adapter, msglvl, adapter->netdev, format, ## arg)
+#define e_info(msglvl, format, arg...) \
+ netif_info(adapter, msglvl, adapter->netdev, format, ## arg)
+#define e_err(msglvl, format, arg...) \
+ netif_err(adapter, msglvl, adapter->netdev, format, ## arg)
+#define e_warn(msglvl, format, arg...) \
+ netif_warn(adapter, msglvl, adapter->netdev, format, ## arg)
+#define e_crit(msglvl, format, arg...) \
+ netif_crit(adapter, msglvl, adapter->netdev, format, ## arg)
+
+#define TXGBE_FAILED_READ_CFG_DWORD 0xffffffffU
+#define TXGBE_FAILED_READ_CFG_WORD 0xffffU
+#define TXGBE_FAILED_READ_CFG_BYTE 0xffU
+
+extern u32 txgbe_read_reg(struct txgbe_hw *hw, u32 reg, bool quiet);
+extern u16 txgbe_read_pci_cfg_word(struct txgbe_hw *hw, u32 reg);
+extern void txgbe_write_pci_cfg_word(struct txgbe_hw *hw, u32 reg, u16 value);
+
+#define TXGBE_R32_Q(h, r) txgbe_read_reg(h, r, true)
+
+#define TXGBE_EEPROM_GRANT_ATTEMPS 100
+#define TXGBE_HTONL(_i) htonl(_i)
+#define TXGBE_NTOHL(_i) ntohl(_i)
+#define TXGBE_NTOHS(_i) ntohs(_i)
+#define TXGBE_CPU_TO_LE32(_i) cpu_to_le32(_i)
+#define TXGBE_LE32_TO_CPUS(_i) le32_to_cpus(_i)
+
+enum {
+ TXGBE_ERROR_SOFTWARE,
+ TXGBE_ERROR_POLLING,
+ TXGBE_ERROR_INVALID_STATE,
+ TXGBE_ERROR_UNSUPPORTED,
+ TXGBE_ERROR_ARGUMENT,
+ TXGBE_ERROR_CAUTION,
+};
+
+#define ERROR_REPORT(level, format, arg...) do { \
+ switch (level) { \
+ case TXGBE_ERROR_SOFTWARE: \
+ case TXGBE_ERROR_CAUTION: \
+ case TXGBE_ERROR_POLLING: \
+ netif_warn(txgbe_hw_to_msg(hw), drv, txgbe_hw_to_netdev(hw), \
+ format, ## arg); \
+ break; \
+ case TXGBE_ERROR_INVALID_STATE: \
+ case TXGBE_ERROR_UNSUPPORTED: \
+ case TXGBE_ERROR_ARGUMENT: \
+ netif_err(txgbe_hw_to_msg(hw), hw, txgbe_hw_to_netdev(hw), \
+ format, ## arg); \
+ break; \
+ default: \
+ break; \
+ } \
+} while (0)
+
+#define ERROR_REPORT1 ERROR_REPORT
+#define ERROR_REPORT2 ERROR_REPORT
+#define ERROR_REPORT3 ERROR_REPORT
+
+#define UNREFERENCED_XPARAMETER
+#define UNREFERENCED_1PARAMETER(_p) do { \
+ uninitialized_var(_p); \
+} while (0)
+#define UNREFERENCED_2PARAMETER(_p, _q) do { \
+ uninitialized_var(_p); \
+ uninitialized_var(_q); \
+} while (0)
+#define UNREFERENCED_3PARAMETER(_p, _q, _r) do { \
+ uninitialized_var(_p); \
+ uninitialized_var(_q); \
+ uninitialized_var(_r); \
+} while (0)
+#define UNREFERENCED_4PARAMETER(_p, _q, _r, _s) do { \
+ uninitialized_var(_p); \
+ uninitialized_var(_q); \
+ uninitialized_var(_r); \
+ uninitialized_var(_s); \
+} while (0)
+#define UNREFERENCED_PARAMETER(_p) UNREFERENCED_1PARAMETER(_p)
+
+/* end of txgbe_osdep.h */
+
+#endif /* _TXGBE_H_ */
diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe_bp.c b/drivers/net/ethernet/netswift/txgbe/txgbe_bp.c
new file mode 100644
index 0000000000000..68d465da2eee8
--- /dev/null
+++ b/drivers/net/ethernet/netswift/txgbe/txgbe_bp.c
@@ -0,0 +1,875 @@
+/*
+ * WangXun 10 Gigabit PCI Express Linux driver
+ * Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ */
+
+
+#include "txgbe_bp.h"
+
+int Handle_bkp_an73_flow(unsigned char byLinkMode, struct txgbe_adapter *adapter);
+int WaitBkpAn73XnpDone(struct txgbe_adapter *adapter);
+int GetBkpAn73Ability(bkpan73ability *ptBkpAn73Ability, unsigned char byLinkPartner,
+ struct txgbe_adapter *adapter);
+int Get_bkp_an73_ability(bkpan73ability *ptBkpAn73Ability, unsigned char byLinkPartner,
+ struct txgbe_adapter *adapter);
+int ClearBkpAn73Interrupt(unsigned int intIndex, unsigned int intIndexHi, struct txgbe_adapter *adapter);
+int CheckBkpAn73Interrupt(unsigned int intIndex, struct txgbe_adapter *adapter);
+int Check_bkp_an73_ability(bkpan73ability tBkpAn73Ability, bkpan73ability tLpBkpAn73Ability,
+ struct txgbe_adapter *adapter);
+
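+/*
+ * Flag that the interface is closing so the KR code will not start a new
+ * reinit, and give any reinit already in flight a moment to finish.
+ */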
+void txgbe_bp_close_protect(struct txgbe_adapter *adapter)
+{
+ adapter->flags2 |= TXGBE_FLAG2_KR_PRO_DOWN;
+ if (adapter->flags2 & TXGBE_FLAG2_KR_PRO_REINIT) {
+ msleep(100);
+		printk("waiting for KR reinit to finish, flags2=%x\n", adapter->flags2);
+ }
+}
+
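+/*
+ * Apply the backplane configuration: force the subsystem IDs to match the
+ * selected PHY mode (KR/KX4/KX/SFI), honour an explicit auto-negotiation
+ * override, and load the compile-time FFE presets unless ffe_set has
+ * already pinned a mode's values.
+ */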
+int txgbe_bp_mode_setting(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+
+	/* default to enabling AN73 */
+
+ adapter->backplane_an = AUTO ? 1 : 0;
+ adapter->an37 = AUTO ? 1 : 0;
+
+ if (adapter->backplane_mode == TXGBE_BP_M_KR) {
+ hw->subsystem_device_id = TXGBE_ID_WX1820_KR_KX_KX4;
+ hw->subsystem_id = TXGBE_ID_WX1820_KR_KX_KX4;
+ } else if (adapter->backplane_mode == TXGBE_BP_M_KX4) {
+ hw->subsystem_device_id = TXGBE_ID_WX1820_MAC_XAUI;
+ hw->subsystem_id = TXGBE_ID_WX1820_MAC_XAUI;
+ } else if (adapter->backplane_mode == TXGBE_BP_M_KX) {
+ hw->subsystem_device_id = TXGBE_ID_WX1820_MAC_SGMII;
+ hw->subsystem_id = TXGBE_ID_WX1820_MAC_SGMII;
+ } else if (adapter->backplane_mode == TXGBE_BP_M_SFI) {
+ hw->subsystem_device_id = TXGBE_ID_WX1820_SFP;
+ hw->subsystem_id = TXGBE_ID_WX1820_SFP;
+ }
+
+ if (adapter->backplane_auto == TXGBE_BP_M_AUTO) {
+ adapter->backplane_an = 1;
+ adapter->an37 = 1;
+ } else if (adapter->backplane_auto == TXGBE_BP_M_NAUTO) {
+ adapter->backplane_an = 0;
+ adapter->an37 = 0;
+ }
+
+ if (adapter->ffe_set == TXGBE_BP_M_KR ||
+ adapter->ffe_set == TXGBE_BP_M_KX4 ||
+ adapter->ffe_set == TXGBE_BP_M_KX ||
+ adapter->ffe_set == TXGBE_BP_M_SFI) {
+ goto out;
+ }
+
+ if (KR_SET == 1) {
+ adapter->ffe_main = KR_MAIN;
+ adapter->ffe_pre = KR_PRE;
+ adapter->ffe_post = KR_POST;
+ } else if (KX4_SET == 1) {
+ adapter->ffe_main = KX4_MAIN;
+ adapter->ffe_pre = KX4_PRE;
+ adapter->ffe_post = KX4_POST;
+ } else if (KX_SET == 1) {
+ adapter->ffe_main = KX_MAIN;
+ adapter->ffe_pre = KX_PRE;
+ adapter->ffe_post = KX_POST;
+ } else if (SFI_SET == 1) {
+ adapter->ffe_main = SFI_MAIN;
+ adapter->ffe_pre = SFI_PRE;
+ adapter->ffe_post = SFI_POST;
+ }
+out:
+ return 0;
+}
+
+static int txgbe_kr_subtask(struct txgbe_adapter *adapter)
+{
+ Handle_bkp_an73_flow(0, adapter);
+ return 0;
+}
+
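+/*
+ * Watchdog hook for KR training: in polling mode enter training whenever
+ * the AN_PG_RCV bit is set in the AN interrupt register; otherwise only
+ * when the interrupt path has latched TXGBE_FLAG2_KR_TRAINING.
+ */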
+void txgbe_bp_watchdog_event(struct txgbe_adapter *adapter)
+{
+ u32 value = 0;
+ struct txgbe_hw *hw = &adapter->hw;
+
+ if (KR_POLLING == 1) {
+ value = txgbe_rd32_epcs(hw, 0x78002);
+ value = value & 0x4;
+ if (value == 0x4) {
+ e_dev_info("Enter training\n");
+ txgbe_kr_subtask(adapter);
+ }
+ } else {
+ if (adapter->flags2 & TXGBE_FLAG2_KR_TRAINING) {
+ e_dev_info("Enter training\n");
+ txgbe_kr_subtask(adapter);
+ adapter->flags2 &= ~TXGBE_FLAG2_KR_TRAINING;
+ }
+ }
+}
+
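+/*
+ * Link-down handling when backplane AN is enabled: depending on the
+ * compile-time strategy either restart AN73 without resetting the port
+ * (KR_NORESET), rewrite the AN and training registers in place
+ * (KR_REINITED), or fall back to a full reinit of the interface, guarded
+ * against racing with an in-progress close.
+ */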
+void txgbe_bp_down_event(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ if (adapter->backplane_an == 1) {
+ if (KR_NORESET == 1) {
+ txgbe_wr32_epcs(hw, 0x78003, 0x0000);
+ txgbe_wr32_epcs(hw, 0x70000, 0x0000);
+ txgbe_wr32_epcs(hw, 0x78001, 0x0000);
+ msleep(1050);
+ txgbe_set_link_to_kr(hw, 1);
+ } else if (KR_REINITED == 1) {
+ txgbe_wr32_epcs(hw, 0x78003, 0x0000);
+ txgbe_wr32_epcs(hw, 0x70000, 0x0000);
+ txgbe_wr32_epcs(hw, 0x78001, 0x0000);
+ txgbe_wr32_epcs(hw, 0x18035, 0x00FF);
+ txgbe_wr32_epcs(hw, 0x18055, 0x00FF);
+ msleep(1050);
+ txgbe_wr32_epcs(hw, 0x78003, 0x0001);
+ txgbe_wr32_epcs(hw, 0x70000, 0x3200);
+ txgbe_wr32_epcs(hw, 0x78001, 0x0007);
+ txgbe_wr32_epcs(hw, 0x18035, 0x00FC);
+ txgbe_wr32_epcs(hw, 0x18055, 0x00FC);
+ } else {
+ msleep(1000);
+ if (!(adapter->flags2&TXGBE_FLAG2_KR_PRO_DOWN)) {
+ adapter->flags2 |= TXGBE_FLAG2_KR_PRO_REINIT;
+ txgbe_reinit_locked(adapter);
+ adapter->flags2 &= ~TXGBE_FLAG2_KR_PRO_REINIT;
+ }
+ }
+ }
+}
+
+int txgbe_kr_intr_handle(struct txgbe_adapter *adapter)
+{
+ bkpan73ability tBkpAn73Ability, tLpBkpAn73Ability;
+ tBkpAn73Ability.currentLinkMode = 0;
+
+ if (KR_MODE) {
+		e_dev_info("HandleBkpAn73Flow()\n");
+ e_dev_info("---------------------------------\n");
+ }
+
+ /*1. Get the local AN73 Base Page Ability*/
+ if (KR_MODE)
+ e_dev_info("<1>. Get the local AN73 Base Page Ability ...\n");
+ GetBkpAn73Ability(&tBkpAn73Ability, 0, adapter);
+
+ /*2. Check the AN73 Interrupt Status*/
+ if (KR_MODE)
+ e_dev_info("<2>. Check the AN73 Interrupt Status ...\n");
+ /*3.Clear the AN_PG_RCV interrupt*/
+ ClearBkpAn73Interrupt(2, 0x0, adapter);
+
+ /*3.1. Get the link partner AN73 Base Page Ability*/
+ if (KR_MODE)
+ e_dev_info("<3.1>. Get the link partner AN73 Base Page Ability ...\n");
+ Get_bkp_an73_ability(&tLpBkpAn73Ability, 1, adapter);
+
+ /*3.2. Check the AN73 Link Ability with Link Partner*/
+ if (KR_MODE) {
+ e_dev_info("<3.2>. Check the AN73 Link Ability with Link Partner ...\n");
+ e_dev_info(" Local Link Ability: 0x%x\n", tBkpAn73Ability.linkAbility);
+ e_dev_info(" Link Partner Link Ability: 0x%x\n", tLpBkpAn73Ability.linkAbility);
+ }
+ Check_bkp_an73_ability(tBkpAn73Ability, tLpBkpAn73Ability, adapter);
+
+ return 0;
+}
+
+/*Check Ethernet Backplane AN73 Base Page Ability
+**return value:
+** -1 : no link mode matched, exit
+**  0 : current link mode matched, wait for AN73 to complete
+**  1 : link mode not matched; switch to the matched mode and restart AN73 externally
+*/
+int Check_bkp_an73_ability(bkpan73ability tBkpAn73Ability, bkpan73ability tLpBkpAn73Ability,
+ struct txgbe_adapter *adapter)
+{
+ unsigned int comLinkAbility;
+ struct txgbe_hw *hw = &adapter->hw;
+
+ if (KR_MODE) {
+ e_dev_info("CheckBkpAn73Ability():\n");
+ e_dev_info("------------------------\n");
+ }
+
+ /*-- Check the common link ability and take action based on the result*/
+ comLinkAbility = tBkpAn73Ability.linkAbility & tLpBkpAn73Ability.linkAbility;
+ if (KR_MODE)
+ e_dev_info("comLinkAbility= 0x%x, linkAbility= 0x%x, lpLinkAbility= 0x%x\n",
+ comLinkAbility, tBkpAn73Ability.linkAbility, tLpBkpAn73Ability.linkAbility);
+
+ if (comLinkAbility == 0) {
+ if (KR_MODE)
+ e_dev_info("WARNING: The Link Partner does not support any compatible speed mode!!!\n\n");
+ return -1;
+ } else if (comLinkAbility & 0x80) {
+ if (tBkpAn73Ability.currentLinkMode == 0) {
+ if (KR_MODE)
+ e_dev_info("Link mode is matched with Link Partner: [LINK_KR].\n");
+ return 0;
+ } else {
+ if (KR_MODE) {
+ e_dev_info("Link mode is not matched with Link Partner: [LINK_KR].\n");
+ e_dev_info("Set the local link mode to [LINK_KR] ...\n");
+ }
+ txgbe_set_link_to_kr(hw, 1);
+ return 1;
+ }
+ } else if (comLinkAbility & 0x40) {
+ if (tBkpAn73Ability.currentLinkMode == 0x10) {
+ if (KR_MODE)
+ e_dev_info("Link mode is matched with Link Partner: [LINK_KX4].\n");
+ return 0;
+ } else {
+ if (KR_MODE) {
+ e_dev_info("Link mode is not matched with Link Partner: [LINK_KX4].\n");
+ e_dev_info("Set the local link mode to [LINK_KX4] ...\n");
+ }
+ txgbe_set_link_to_kx4(hw, 1);
+ return 1;
+ }
+ } else if (comLinkAbility & 0x20) {
+ if (tBkpAn73Ability.currentLinkMode == 0x1) {
+ if (KR_MODE)
+ e_dev_info("Link mode is matched with Link Partner: [LINK_KX].\n");
+ return 0;
+ } else {
+ if (KR_MODE) {
+ e_dev_info("Link mode is not matched with Link Partner: [LINK_KX].\n");
+ e_dev_info("Set the local link mode to [LINK_KX] ...\n");
+ }
+ txgbe_set_link_to_kx(hw, 1, 1);
+ return 1;
+ }
+ }
+ return 0;
+}
+
+
+/*Get Ethernet Backplane AN73 Base Page Ability
+**byLinkPartner:
+**- 1: Get Link Partner Base Page
+**- 2: Get Link Partner Next Page (only get NXP Ability Register 1 at the moment)
+**- 0: Get Local Device Base Page
+*/
+int Get_bkp_an73_ability(bkpan73ability *ptBkpAn73Ability, unsigned char byLinkPartner,
+ struct txgbe_adapter *adapter)
+{
+ int status = 0;
+ unsigned int rdata;
+ struct txgbe_hw *hw = &adapter->hw;
+
+ if (KR_MODE) {
+ e_dev_info("GetBkpAn73Ability(): byLinkPartner = %d\n", byLinkPartner);
+ e_dev_info("----------------------------------------\n");
+ }
+
+ if (byLinkPartner == 1) { /*Link Partner Base Page*/
+ /*Read the link partner AN73 Base Page Ability Registers*/
+ if (KR_MODE)
+ e_dev_info("Read the link partner AN73 Base Page Ability Registers...\n");
+ rdata = 0;
+ rdata = txgbe_rd32_epcs(hw, 0x70013);
+ if (KR_MODE)
+ e_dev_info("SR AN MMD LP Base Page Ability Register 1: 0x%x\n", rdata);
+ ptBkpAn73Ability->nextPage = (rdata >> 15) & 0x01;
+ if (KR_MODE)
+ e_dev_info(" Next Page (bit15): %d\n", ptBkpAn73Ability->nextPage);
+
+ rdata = 0;
+ rdata = txgbe_rd32_epcs(hw, 0x70014);
+ if (KR_MODE)
+ e_dev_info("SR AN MMD LP Base Page Ability Register 2: 0x%x\n", rdata);
+ ptBkpAn73Ability->linkAbility = rdata & 0xE0;
+ if (KR_MODE) {
+ e_dev_info(" Link Ability (bit[15:0]): 0x%x\n", ptBkpAn73Ability->linkAbility);
+ e_dev_info(" (0x20- KX_ONLY, 0x40- KX4_ONLY, 0x60- KX4_KX\n");
+ e_dev_info(" 0x80- KR_ONLY, 0xA0- KR_KX, 0xC0- KR_KX4, 0xE0- KR_KX4_KX)\n");
+ }
+
+ rdata = 0;
+ rdata = txgbe_rd32_epcs(hw, 0x70015);
+ if (KR_MODE) {
+ e_dev_info("SR AN MMD LP Base Page Ability Register 3: 0x%x\n", rdata);
+ e_dev_info(" FEC Request (bit15): %d\n", ((rdata >> 15) & 0x01));
+ e_dev_info(" FEC Enable (bit14): %d\n", ((rdata >> 14) & 0x01));
+ }
+ ptBkpAn73Ability->fecAbility = (rdata >> 14) & 0x03;
+ } else if (byLinkPartner == 2) {/*Link Partner Next Page*/
+ /*Read the link partner AN73 Next Page Ability Registers*/
+ if (KR_MODE)
+ e_dev_info("\nRead the link partner AN73 Next Page Ability Registers...\n");
+ rdata = 0;
+ rdata = txgbe_rd32_epcs(hw, 0x70019);
+ if (KR_MODE)
+ e_dev_info(" SR AN MMD LP XNP Ability Register 1: 0x%x\n", rdata);
+ ptBkpAn73Ability->nextPage = (rdata >> 15) & 0x01;
+ if (KR_MODE)
+ e_dev_info(" Next Page (bit15): %d\n", ptBkpAn73Ability->nextPage);
+ } else {
+ /*Read the local AN73 Base Page Ability Registers*/
+ if (KR_MODE)
+ e_dev_info("\nRead the local AN73 Base Page Ability Registers...\n");
+ rdata = 0;
+ rdata = txgbe_rd32_epcs(hw, 0x70010);
+ if (KR_MODE)
+ e_dev_info("SR AN MMD Advertisement Register 1: 0x%x\n", rdata);
+ ptBkpAn73Ability->nextPage = (rdata >> 15) & 0x01;
+ if (KR_MODE)
+ e_dev_info(" Next Page (bit15): %d\n", ptBkpAn73Ability->nextPage);
+
+ rdata = 0;
+ rdata = txgbe_rd32_epcs(hw, 0x70011);
+ if (KR_MODE)
+ e_dev_info("SR AN MMD Advertisement Register 2: 0x%x\n", rdata);
+ ptBkpAn73Ability->linkAbility = rdata & 0xE0;
+ if (KR_MODE) {
+ e_dev_info(" Link Ability (bit[15:0]): 0x%x\n", ptBkpAn73Ability->linkAbility);
+ e_dev_info(" (0x20- KX_ONLY, 0x40- KX4_ONLY, 0x60- KX4_KX\n");
+ e_dev_info(" 0x80- KR_ONLY, 0xA0- KR_KX, 0xC0- KR_KX4, 0xE0- KR_KX4_KX)\n");
+ }
+ rdata = 0;
+ rdata = txgbe_rd32_epcs(hw, 0x70012);
+ if (KR_MODE) {
+ e_dev_info("SR AN MMD Advertisement Register 3: 0x%x\n", rdata);
+ e_dev_info(" FEC Request (bit15): %d\n", ((rdata >> 15) & 0x01));
+ e_dev_info(" FEC Enable (bit14): %d\n", ((rdata >> 14) & 0x01));
+ }
+ ptBkpAn73Ability->fecAbility = (rdata >> 14) & 0x03;
+ } /*if (byLinkPartner == 1) Link Partner Base Page*/
+
+ if (KR_MODE)
+ e_dev_info("GetBkpAn73Ability() done.\n");
+
+ return status;
+}
+
+
+/*Get Ethernet Backplane AN73 Base Page Ability
+**byLinkPartner:
+**- 1: Get Link Partner Base Page
+**- 2: Get Link Partner Next Page (only get NXP Ability Register 1 at the moment)
+**- 0: Get Local Device Base Page
+*/
+int GetBkpAn73Ability(bkpan73ability *ptBkpAn73Ability, unsigned char byLinkPartner,
+ struct txgbe_adapter *adapter)
+{
+ int status = 0;
+ unsigned int rdata;
+ struct txgbe_hw *hw = &adapter->hw;
+
+ if (KR_MODE) {
+ e_dev_info("GetBkpAn73Ability(): byLinkPartner = %d\n", byLinkPartner);
+ e_dev_info("----------------------------------------\n");
+ }
+
+	if (byLinkPartner == 1) { /*Link Partner Base Page*/
+		/*Read the link partner AN73 Base Page Ability Registers*/
+ if (KR_MODE)
+ e_dev_info("Read the link partner AN73 Base Page Ability Registers...\n");
+ rdata = 0;
+ rdata = txgbe_rd32_epcs(hw, 0x70013);
+ if (KR_MODE)
+ e_dev_info("SR AN MMD LP Base Page Ability Register 1: 0x%x\n", rdata);
+ ptBkpAn73Ability->nextPage = (rdata >> 15) & 0x01;
+ if (KR_MODE)
+ e_dev_info(" Next Page (bit15): %d\n", ptBkpAn73Ability->nextPage);
+
+ rdata = 0;
+ rdata = txgbe_rd32_epcs(hw, 0x70014);
+ if (KR_MODE)
+ e_dev_info("SR AN MMD LP Base Page Ability Register 2: 0x%x\n", rdata);
+ ptBkpAn73Ability->linkAbility = rdata & 0xE0;
+ if (KR_MODE) {
+ e_dev_info(" Link Ability (bit[15:0]): 0x%x\n", ptBkpAn73Ability->linkAbility);
+ e_dev_info(" (0x20- KX_ONLY, 0x40- KX4_ONLY, 0x60- KX4_KX\n");
+ e_dev_info(" 0x80- KR_ONLY, 0xA0- KR_KX, 0xC0- KR_KX4, 0xE0- KR_KX4_KX)\n");
+ }
+
+ rdata = 0;
+ rdata = txgbe_rd32_epcs(hw, 0x70015);
+		if (KR_MODE) {
+			e_dev_info("SR AN MMD LP Base Page Ability Register 3: 0x%x\n", rdata);
+			e_dev_info("  FEC Request (bit15): %d\n", ((rdata >> 15) & 0x01));
+			e_dev_info("  FEC Enable (bit14): %d\n", ((rdata >> 14) & 0x01));
+		}
+ ptBkpAn73Ability->fecAbility = (rdata >> 14) & 0x03;
+	} else if (byLinkPartner == 2) { /*Link Partner Next Page*/
+		/*Read the link partner AN73 Next Page Ability Registers*/
+ if (KR_MODE)
+ e_dev_info("Read the link partner AN73 Next Page Ability Registers...\n");
+ rdata = 0;
+ rdata = txgbe_rd32_epcs(hw, 0x70019);
+ if (KR_MODE)
+ e_dev_info(" SR AN MMD LP XNP Ability Register 1: 0x%x\n", rdata);
+ ptBkpAn73Ability->nextPage = (rdata >> 15) & 0x01;
+ if (KR_MODE)
+ e_dev_info(" Next Page (bit15): %d\n", ptBkpAn73Ability->nextPage);
+ } else {
+		/*Read the local AN73 Base Page Ability Registers*/
+ if (KR_MODE)
+ e_dev_info("Read the local AN73 Base Page Ability Registers...\n");
+ rdata = 0;
+ rdata = txgbe_rd32_epcs(hw, 0x70010);
+ if (KR_MODE)
+ e_dev_info("SR AN MMD Advertisement Register 1: 0x%x\n", rdata);
+ ptBkpAn73Ability->nextPage = (rdata >> 15) & 0x01;
+ if (KR_MODE)
+ e_dev_info(" Next Page (bit15): %d\n", ptBkpAn73Ability->nextPage);
+
+ rdata = 0;
+ rdata = txgbe_rd32_epcs(hw, 0x70011);
+ if (KR_MODE)
+ e_dev_info("SR AN MMD Advertisement Register 2: 0x%x\n", rdata);
+ ptBkpAn73Ability->linkAbility = rdata & 0xE0;
+ if (KR_MODE) {
+ e_dev_info(" Link Ability (bit[15:0]): 0x%x\n", ptBkpAn73Ability->linkAbility);
+ e_dev_info(" (0x20- KX_ONLY, 0x40- KX4_ONLY, 0x60- KX4_KX\n");
+ e_dev_info(" 0x80- KR_ONLY, 0xA0- KR_KX, 0xC0- KR_KX4, 0xE0- KR_KX4_KX)\n");
+ }
+
+ rdata = 0;
+ rdata = txgbe_rd32_epcs(hw, 0x70012);
+ if (KR_MODE) {
+ e_dev_info("SR AN MMD Advertisement Register 3: 0x%x\n", rdata);
+ e_dev_info(" FEC Request (bit15): %d\n", ((rdata >> 15) & 0x01));
+ e_dev_info(" FEC Enable (bit14): %d\n", ((rdata >> 14) & 0x01));
+ }
+ ptBkpAn73Ability->fecAbility = (rdata >> 14) & 0x03;
+ }
+
+ if (KR_MODE)
+ e_dev_info("GetBkpAn73Ability() done.\n");
+
+ return status;
+}
+
+/* DESCRIPTION: Set the source data fields[bitHigh:bitLow] with setValue
+** INPUTS: *pSrcData: Source data pointer
+** bitHigh: High bit position of the fields
+** bitLow : Low bit position of the fields
+** setValue: Set value of the fields
+** OUTPUTS: *pSrcData is updated in place with the new field value
+*/
+static void SetFields(
+ unsigned int *pSrcData,
+ unsigned int bitHigh,
+ unsigned int bitLow,
+ unsigned int setValue)
+{
+ int i;
+
+ if (bitHigh == bitLow) {
+ if (setValue == 0) {
+ *pSrcData &= ~(1 << bitLow);
+ } else {
+ *pSrcData |= (1 << bitLow);
+ }
+ } else {
+ for (i = bitLow; i <= bitHigh; i++) {
+ *pSrcData &= ~(1 << i);
+ }
+ *pSrcData |= (setValue << bitLow);
+ }
+}
+
+/*Check Ethernet Backplane AN73 Interrupt status
+**- return the value of the selected interrupt index
+*/
+int CheckBkpAn73Interrupt(unsigned int intIndex, struct txgbe_adapter *adapter)
+{
+ unsigned int rdata;
+ struct txgbe_hw *hw = &adapter->hw;
+
+ if (KR_MODE) {
+ e_dev_info("CheckBkpAn73Interrupt(): intIndex = %d\n", intIndex);
+ e_dev_info("----------------------------------------\n");
+ }
+
+ rdata = 0x0000;
+ rdata = txgbe_rd32_epcs(hw, 0x78002);
+ if (KR_MODE) {
+ e_dev_info("Read VR AN MMD Interrupt Register: 0x%x\n", rdata);
+ e_dev_info("Interrupt: 0- AN_INT_CMPLT, 1- AN_INC_LINK, 2- AN_PG_RCV\n\n");
+ }
+
+ return ((rdata >> intIndex) & 0x01);
+}
+
+/*Clear Ethernet Backplane AN73 Interrupt status
+**- intIndexHi == 0: only the intIndex bit is cleared
+**- intIndexHi != 0: the [intIndexHi:intIndex] bit range is cleared
+*/
+int ClearBkpAn73Interrupt(unsigned int intIndex, unsigned int intIndexHi, struct txgbe_adapter *adapter)
+{
+ int status = 0;
+ unsigned int rdata, wdata;
+ struct txgbe_hw *hw = &adapter->hw;
+
+ if (KR_MODE) {
+ e_dev_info("ClearBkpAn73Interrupt(): intIndex = %d\n", intIndex);
+ e_dev_info("----------------------------------------\n");
+ }
+
+ rdata = 0x0000;
+ rdata = txgbe_rd32_epcs(hw, 0x78002);
+ if (KR_MODE)
+ e_dev_info("[Before clear] Read VR AN MMD Interrupt Register: 0x%x\n", rdata);
+
+ wdata = rdata;
+ if (intIndexHi) {
+ SetFields(&wdata, intIndexHi, intIndex, 0);
+ } else {
+ SetFields(&wdata, intIndex, intIndex, 0);
+ }
+ txgbe_wr32_epcs(hw, 0x78002, wdata);
+
+ rdata = 0x0000;
+ rdata = txgbe_rd32_epcs(hw, 0x78002);
+ if (KR_MODE) {
+ e_dev_info("[After clear] Read VR AN MMD Interrupt Register: 0x%x\n", rdata);
+ e_dev_info("\n");
+ }
+
+ return status;
+}
+
+int WaitBkpAn73XnpDone(struct txgbe_adapter *adapter)
+{
+ int status = 0;
+ unsigned int timer = 0;
+ bkpan73ability tLpBkpAn73Ability;
+
+ /*while(timer++ < BKPAN73_TIMEOUT)*/
+ while (timer++ < 20) {
+ if (CheckBkpAn73Interrupt(2, adapter)) {
+ /*Clear the AN_PG_RCV interrupt*/
+ ClearBkpAn73Interrupt(2, 0, adapter);
+
+ /*Get the link partner AN73 Next Page Ability*/
+ Get_bkp_an73_ability(&tLpBkpAn73Ability, 2, adapter);
+
+ /*Return when AN_LP_XNP_NP == 0, (bit[15]: Next Page)*/
+ if (tLpBkpAn73Ability.nextPage == 0) {
+ return status;
+ }
+ }
+ msleep(200);
+ } /*while(timer++ < BKPAN73_TIMEOUT)*/
+ if (KR_MODE)
+ e_dev_info("ERROR: Wait all the AN73 next pages to be exchanged Timeout!!!\n");
+
+ return -1;
+}
+
+int ReadPhyLaneTxEq(unsigned short lane, struct txgbe_adapter *adapter, int post_t, int mode)
+{
+ int status = 0;
+ unsigned int addr, rdata;
+ struct txgbe_hw *hw = &adapter->hw;
+ u32 pre;
+ u32 post;
+ u32 lmain;
+
+ /*LANEN_DIG_ASIC_TX_ASIC_IN_1[11:6]: TX_MAIN_CURSOR*/
+ rdata = 0;
+ addr = 0x100E | (lane << 8);
+ rdata = rd32_ephy(hw, addr);
+ if (KR_MODE) {
+ e_dev_info("PHY LANE%0d TX EQ Read Value:\n", lane);
+ e_dev_info(" TX_MAIN_CURSOR: %d\n", ((rdata >> 6) & 0x3F));
+ }
+
+ /*LANEN_DIG_ASIC_TX_ASIC_IN_2[5 :0]: TX_PRE_CURSOR*/
+ /*LANEN_DIG_ASIC_TX_ASIC_IN_2[11:6]: TX_POST_CURSOR*/
+ rdata = 0;
+ addr = 0x100F | (lane << 8);
+ rdata = rd32_ephy(hw, addr);
+ if (KR_MODE) {
+ e_dev_info(" TX_PRE_CURSOR : %d\n", (rdata & 0x3F));
+ e_dev_info(" TX_POST_CURSOR: %d\n", ((rdata >> 6) & 0x3F));
+ e_dev_info("**********************************************\n");
+ }
+
+ if (mode == 1) {
+ pre = (rdata & 0x3F);
+ post = ((rdata >> 6) & 0x3F);
+		if ((160 - pre - post) < 88)
+ lmain = 88;
+ else
+ lmain = 160 - pre - post;
+ if (post_t != 0)
+ post = post_t;
+ txgbe_wr32_epcs(hw, 0x1803b, post);
+ txgbe_wr32_epcs(hw, 0x1803a, pre | (lmain << 8));
+ txgbe_wr32_epcs(hw, 0x18037, txgbe_rd32_epcs(hw, 0x18037) & 0xff7f);
+ }
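+	/* Example: pre = 20, post = 24 gives lmain = 160 - 44 = 116; the
+	 * 88 floor only applies when pre + post > 72 (e.g. pre = post = 40
+	 * gives 160 - 80 = 80, clamped to 88).
+	 */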
+ if (KR_MODE)
+ e_dev_info("**********************************************\n");
+
+ return status;
+}
+
+
+/*Enable Clause 72 KR training
+**
+**Note:
+**<1>. The Clause 72 start-up protocol should be initiated when all pages are exchanged during Clause 73 auto-
+**negotiation and when the auto-negotiation process is waiting for link status to be UP for 500 ms after
+**exchanging all the pages.
+**
+**<2>. Both the local device and the link partner should enable CL72 KR
+**training within 500 ms.
+**
+**enable:
+**- bits[1:0] =2'b11: Enable the CL72 KR training
+**- bits[1:0] =2'b01: Disable the CL72 KR training
+*/
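+/* Typical sequence (sketch, per the encodings above):
+**   EnableCl72KrTr(3, adapter);     - start training (bits[1:0] = 2'b11)
+**   CheckCl72KrTrStatus(adapter);   - poll until trained or failed
+**   EnableCl72KrTr(1, adapter);     - stop training  (bits[1:0] = 2'b01)
+*/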
+int EnableCl72KrTr(unsigned int enable, struct txgbe_adapter *adapter)
+{
+ int status = 0;
+ unsigned int wdata = 0;
+ struct txgbe_hw *hw = &adapter->hw;
+
+ if (enable == 1) {
+ if (KR_MODE)
+ e_dev_info("\nDisable Clause 72 KR Training ...\n");
+ status |= ReadPhyLaneTxEq(0, adapter, 0, 0);
+ } else if (enable == 4) {
+ status |= ReadPhyLaneTxEq(0, adapter, 20, 1);
+ } else if (enable == 8) {
+ status |= ReadPhyLaneTxEq(0, adapter, 16, 1);
+ } else if (enable == 12) {
+ status |= ReadPhyLaneTxEq(0, adapter, 24, 1);
+ } else if (enable == 5) {
+ status |= ReadPhyLaneTxEq(0, adapter, 0, 1);
+ } else if (enable == 3) {
+ if (KR_MODE)
+ e_dev_info("\nEnable Clause 72 KR Training ...\n");
+
+ if (CL72_KRTR_PRBS_MODE_EN != 0xffff) {
+			/*Write the PRBS mode enable value CL72_KRTR_PRBS_MODE_EN to register 0x18005*/
+ wdata = CL72_KRTR_PRBS_MODE_EN;
+ txgbe_wr32_epcs(hw, 0x18005, wdata);
+ /*Set PRBS Timer Duration Control to maximum 6.7ms in VR_PMA_KRTR_PRBS_CTRL1 Register*/
+ wdata = 0xFFFF;
+ txgbe_wr32_epcs(hw, 0x18004, wdata);
+
+ /*Enable PRBS Mode to determine KR Training Status by setting Bit 0 of VR_PMA_KRTR_PRBS_CTRL0 Register*/
+ wdata = 0;
+ SetFields(&wdata, 0, 0, 1);
+ }
+
+#ifdef CL72_KRTR_PRBS31_EN
+ /*Enable PRBS31 as the KR Training Pattern by setting Bit 1 of VR_PMA_KRTR_PRBS_CTRL0 Register*/
+ SetFields(&wdata, 1, 1, 1);
+#endif /*#ifdef CL72_KRTR_PRBS31_EN*/
+ txgbe_wr32_epcs(hw, 0x18003, wdata);
+ status |= ReadPhyLaneTxEq(0, adapter, 0, 0);
+ } else {
+ if (KR_MODE)
+ e_dev_info("\nInvalid setting for Clause 72 KR Training!!!\n");
+ return -1;
+ }
+
+ /*Enable the Clause 72 start-up protocol by setting Bit 1 of SR_PMA_KR_PMD_CTRL Register.
+ **Restart the Clause 72 start-up protocol by setting Bit 0 of SR_PMA_KR_PMD_CTRL Register*/
+ wdata = enable;
+ txgbe_wr32_epcs(hw, 0x10096, wdata);
+ return status;
+}
+
+int CheckCl72KrTrStatus(struct txgbe_adapter *adapter)
+{
+ int status = 0;
+ unsigned int addr, rdata, rdata1;
+ unsigned int timer = 0, times = 0;
+ struct txgbe_hw *hw = &adapter->hw;
+
+ times = KR_POLLING ? 35 : 20;
+
+ /*While loop to check clause 72 KR training status*/
+ while (timer++ < times) {
+ //Get the latest received coefficient update or status
+ rdata = 0;
+ addr = 0x010098;
+ rdata = txgbe_rd32_epcs(hw, addr);
+ if (KR_MODE)
+ e_dev_info("SR PMA MMD 10GBASE-KR LP Coefficient Update Register: 0x%x\n", rdata);
+
+ rdata = 0;
+ addr = 0x010099;
+ rdata = txgbe_rd32_epcs(hw, addr);
+ if (KR_MODE)
+ e_dev_info("SR PMA MMD 10GBASE-KR LP Coefficient Status Register: 0x%x\n", rdata);
+
+ rdata = 0;
+ addr = 0x01009a;
+ rdata = txgbe_rd32_epcs(hw, addr);
+ if (KR_MODE)
+ e_dev_info("SR PMA MMD 10GBASE-KR LD Coefficient Update: 0x%x\n", rdata);
+
+ rdata = 0;
+ addr = 0x01009b;
+ rdata = txgbe_rd32_epcs(hw, addr);
+ if (KR_MODE)
+ e_dev_info(" SR PMA MMD 10GBASE-KR LD Coefficient Status: 0x%x\n", rdata);
+
+ rdata = 0;
+ addr = 0x010097;
+ rdata = txgbe_rd32_epcs(hw, addr);
+ if (KR_MODE) {
+ e_dev_info("SR PMA MMD 10GBASE-KR Status Register: 0x%x\n", rdata);
+ e_dev_info(" Training Failure (bit3): %d\n", ((rdata >> 3) & 0x01));
+ e_dev_info(" Start-Up Protocol Status (bit2): %d\n", ((rdata >> 2) & 0x01));
+ e_dev_info(" Frame Lock (bit1): %d\n", ((rdata >> 1) & 0x01));
+ e_dev_info(" Receiver Status (bit0): %d\n", ((rdata >> 0) & 0x01));
+ }
+
+ rdata1 = txgbe_rd32_epcs(hw, 0x10099) & 0x8000;
+ if (rdata1 == 0x8000) {
+ adapter->flags2 |= KR;
+ if (KR_MODE)
+ e_dev_info("TEST Coefficient Status Register: 0x%x\n", rdata);
+ }
+ /*If bit3 is set, Training is completed with failure*/
+ if ((rdata >> 3) & 0x01) {
+ if (KR_MODE)
+ e_dev_info("Training is completed with failure!!!\n");
+ status |= ReadPhyLaneTxEq(0, adapter, 0, 0);
+ return status;
+ }
+
+ /*If bit0 is set, Receiver trained and ready to receive data*/
+ if ((rdata >> 0) & 0x01) {
+ if (KR_MODE)
+ e_dev_info("Receiver trained and ready to receive data ^_^\n");
+ status |= ReadPhyLaneTxEq(0, adapter, 0, 0);
+ return status;
+ }
+
+ msleep(20);
+ }
+
+ if (KR_MODE)
+ e_dev_info("ERROR: Check Clause 72 KR Training Complete Timeout!!!\n");
+
+ return status;
+}
+
+int Handle_bkp_an73_flow(unsigned char byLinkMode, struct txgbe_adapter *adapter)
+{
+ int status = 0;
+ unsigned int timer = 0;
+ unsigned int addr, data;
+ bkpan73ability tBkpAn73Ability, tLpBkpAn73Ability;
+ u32 i = 0;
+ u32 rdata = 0;
+ u32 rdata1 = 0;
+ struct txgbe_hw *hw = &adapter->hw;
+ tBkpAn73Ability.currentLinkMode = byLinkMode;
+
+ if (KR_MODE) {
+ e_dev_info("HandleBkpAn73Flow() \n");
+ e_dev_info("---------------------------------\n");
+ }
+
+ txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x0);
+ txgbe_wr32_epcs(hw, 0x78003, 0x0);
+
+ /*Check the FEC and KR Training for KR mode*/
+ if (1) {
+ //FEC handling
+ if (KR_MODE)
+ e_dev_info("<3.3>. Check the FEC for KR mode ...\n");
+ tBkpAn73Ability.fecAbility = 0x03;
+ tLpBkpAn73Ability.fecAbility = 0x0;
+ if ((tBkpAn73Ability.fecAbility & tLpBkpAn73Ability.fecAbility) == 0x03) {
+ if (KR_MODE)
+ e_dev_info("Enable the Backplane KR FEC ...\n");
+ //Write 1 to SR_PMA_KR_FEC_CTRL bit0 to enable the FEC
+ data = 1;
+ addr = 0x100ab; //SR_PMA_KR_FEC_CTRL
+ txgbe_wr32_epcs(hw, addr, data);
+ } else {
+ if (KR_MODE)
+ e_dev_info("Backplane KR FEC is disabled.\n");
+ }
+#ifdef CL72_KR_TRAINING_ON
+ for (i = 0; i < 2; i++) {
+ if (KR_MODE) {
+ e_dev_info("\n<3.4>. Check the CL72 KR Training for KR mode ...\n");
+ printk("===================%d=======================\n", i);
+ }
+
+ status |= EnableCl72KrTr(3, adapter);
+
+ if (KR_MODE)
+ e_dev_info("\nCheck the Clause 72 KR Training status ...\n");
+ status |= CheckCl72KrTrStatus(adapter);
+
+ rdata = txgbe_rd32_epcs(hw, 0x10099) & 0x8000;
+ if (KR_MODE)
+ e_dev_info("SR PMA MMD 10GBASE-KR LP Coefficient Status Register: 0x%x\n", rdata);
+ rdata1 = txgbe_rd32_epcs(hw, 0x1009b) & 0x8000;
+ if (KR_MODE)
+ e_dev_info("SR PMA MMD 10GBASE-KR LP Coefficient Status Register: 0x%x\n", rdata1);
+ if (KR_POLLING == 0) {
+ if (adapter->flags2 & KR) {
+ rdata = 0x8000;
+ adapter->flags2 &= ~KR;
+ }
+ }
+		if ((rdata == 0x8000) && (rdata1 == 0x8000)) {
+ if (KR_MODE)
+ e_dev_info("====================out===========================\n");
+ status |= EnableCl72KrTr(1, adapter);
+ txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x0000);
+ ClearBkpAn73Interrupt(2, 0, adapter);
+ ClearBkpAn73Interrupt(1, 0, adapter);
+ ClearBkpAn73Interrupt(0, 0, adapter);
+ while (timer++ < 10) {
+ rdata = txgbe_rd32_epcs(hw, 0x30020);
+ rdata = rdata & 0x1000;
+ if (rdata == 0x1000) {
+ if (KR_MODE)
+ e_dev_info("\nINT_AN_INT_CMPLT =1, AN73 Done Success.\n");
+ e_dev_info("AN73 Done Success.\n");
+ txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x0000);
+ return 0;
+ }
+ msleep(10);
+ }
+ msleep(1000);
+ txgbe_set_link_to_kr(hw, 1);
+
+ return 0;
+ }
+
+ status |= EnableCl72KrTr(1, adapter);
+ }
+#endif
+ }
+ ClearBkpAn73Interrupt(0, 0, adapter);
+ ClearBkpAn73Interrupt(1, 0, adapter);
+ ClearBkpAn73Interrupt(2, 0, adapter);
+
+ return status;
+}
diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe_bp.h b/drivers/net/ethernet/netswift/txgbe/txgbe_bp.h
new file mode 100644
index 0000000000000..c5f0dc5072164
--- /dev/null
+++ b/drivers/net/ethernet/netswift/txgbe/txgbe_bp.h
@@ -0,0 +1,41 @@
+/*
+ * WangXun 10 Gigabit PCI Express Linux driver
+ * Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ */
+
+
+#ifndef _TXGBE_BP_H_
+#define _TXGBE_BP_H_
+
+#include "txgbe.h"
+#include "txgbe_hw.h"
+
+#define CL72_KR_TRAINING_ON
+
+/* Backplane AN73 Base Page Ability struct*/
+typedef struct TBKPAN73ABILITY {
+ unsigned int nextPage; //Next Page (bit0)
+ unsigned int linkAbility; //Link Ability (bit[7:0])
+ unsigned int fecAbility; //FEC Request (bit1), FEC Enable (bit0)
+ unsigned int currentLinkMode; //current link mode for local device
+} bkpan73ability;
+
+int txgbe_kr_intr_handle(struct txgbe_adapter *adapter);
+void txgbe_bp_down_event(struct txgbe_adapter *adapter);
+void txgbe_bp_watchdog_event(struct txgbe_adapter *adapter);
+int txgbe_bp_mode_setting(struct txgbe_adapter *adapter);
+void txgbe_bp_close_protect(struct txgbe_adapter *adapter);
+
+#endif
diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe_dcb.h b/drivers/net/ethernet/netswift/txgbe/txgbe_dcb.h
new file mode 100644
index 0000000000000..495460e1db8c7
--- /dev/null
+++ b/drivers/net/ethernet/netswift/txgbe/txgbe_dcb.h
@@ -0,0 +1,30 @@
+/*
+ * WangXun 10 Gigabit PCI Express Linux driver
+ * Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * based on ixgbe_dcb.h, Copyright(c) 1999 - 2017 Intel Corporation.
+ * Contact Information:
+ * Linux NICS <linux.nics(a)intel.com>
+ * e1000-devel Mailing List <e1000-devel(a)lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+
+#ifndef _TXGBE_DCB_H_
+#define _TXGBE_DCB_H_
+
+#include "txgbe_type.h"
+
+#endif /* _TXGBE_DCB_H_ */
diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe_ethtool.c b/drivers/net/ethernet/netswift/txgbe/txgbe_ethtool.c
new file mode 100644
index 0000000000000..5cb8ef61e04b3
--- /dev/null
+++ b/drivers/net/ethernet/netswift/txgbe/txgbe_ethtool.c
@@ -0,0 +1,3381 @@
+/*
+ * WangXun 10 Gigabit PCI Express Linux driver
+ * Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * based on ixgbe_ethtool.c, Copyright(c) 1999 - 2017 Intel Corporation.
+ * Contact Information:
+ * Linux NICS <linux.nics(a)intel.com>
+ * e1000-devel Mailing List <e1000-devel(a)lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+/* ethtool support for txgbe */
+
+#include <linux/types.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/netdevice.h>
+#include <linux/ethtool.h>
+#include <linux/vmalloc.h>
+#include <linux/highmem.h>
+#include <linux/firmware.h>
+#include <linux/net_tstamp.h>
+#include <asm/uaccess.h>
+
+#include "txgbe.h"
+#include "txgbe_hw.h"
+#include "txgbe_phy.h"
+
+#define TXGBE_ALL_RAR_ENTRIES 16
+
+struct txgbe_stats {
+ char stat_string[ETH_GSTRING_LEN];
+ int sizeof_stat;
+ int stat_offset;
+};
+
+#define TXGBE_NETDEV_STAT(_net_stat) { \
+ .stat_string = #_net_stat, \
+ .sizeof_stat = FIELD_SIZEOF(struct net_device_stats, _net_stat), \
+ .stat_offset = offsetof(struct net_device_stats, _net_stat) \
+}
+static const struct txgbe_stats txgbe_gstrings_net_stats[] = {
+ TXGBE_NETDEV_STAT(rx_packets),
+ TXGBE_NETDEV_STAT(tx_packets),
+ TXGBE_NETDEV_STAT(rx_bytes),
+ TXGBE_NETDEV_STAT(tx_bytes),
+ TXGBE_NETDEV_STAT(rx_errors),
+ TXGBE_NETDEV_STAT(tx_errors),
+ TXGBE_NETDEV_STAT(rx_dropped),
+ TXGBE_NETDEV_STAT(tx_dropped),
+ TXGBE_NETDEV_STAT(multicast),
+ TXGBE_NETDEV_STAT(collisions),
+ TXGBE_NETDEV_STAT(rx_over_errors),
+ TXGBE_NETDEV_STAT(rx_crc_errors),
+ TXGBE_NETDEV_STAT(rx_frame_errors),
+ TXGBE_NETDEV_STAT(rx_fifo_errors),
+ TXGBE_NETDEV_STAT(rx_missed_errors),
+ TXGBE_NETDEV_STAT(tx_aborted_errors),
+ TXGBE_NETDEV_STAT(tx_carrier_errors),
+ TXGBE_NETDEV_STAT(tx_fifo_errors),
+ TXGBE_NETDEV_STAT(tx_heartbeat_errors),
+};
+
+#define TXGBE_STAT(_name, _stat) { \
+ .stat_string = _name, \
+ .sizeof_stat = FIELD_SIZEOF(struct txgbe_adapter, _stat), \
+ .stat_offset = offsetof(struct txgbe_adapter, _stat) \
+}
+static struct txgbe_stats txgbe_gstrings_stats[] = {
+ TXGBE_STAT("rx_pkts_nic", stats.gprc),
+ TXGBE_STAT("tx_pkts_nic", stats.gptc),
+ TXGBE_STAT("rx_bytes_nic", stats.gorc),
+ TXGBE_STAT("tx_bytes_nic", stats.gotc),
+ TXGBE_STAT("lsc_int", lsc_int),
+ TXGBE_STAT("tx_busy", tx_busy),
+ TXGBE_STAT("non_eop_descs", non_eop_descs),
+ TXGBE_STAT("broadcast", stats.bprc),
+ TXGBE_STAT("rx_no_buffer_count", stats.rnbc[0]),
+ TXGBE_STAT("tx_timeout_count", tx_timeout_count),
+ TXGBE_STAT("tx_restart_queue", restart_queue),
+ TXGBE_STAT("rx_long_length_count", stats.roc),
+ TXGBE_STAT("rx_short_length_count", stats.ruc),
+ TXGBE_STAT("tx_flow_control_xon", stats.lxontxc),
+ TXGBE_STAT("rx_flow_control_xon", stats.lxonrxc),
+ TXGBE_STAT("tx_flow_control_xoff", stats.lxofftxc),
+ TXGBE_STAT("rx_flow_control_xoff", stats.lxoffrxc),
+ TXGBE_STAT("rx_csum_offload_good_count", hw_csum_rx_good),
+ TXGBE_STAT("rx_csum_offload_errors", hw_csum_rx_error),
+ TXGBE_STAT("alloc_rx_page_failed", alloc_rx_page_failed),
+ TXGBE_STAT("alloc_rx_buff_failed", alloc_rx_buff_failed),
+ TXGBE_STAT("rx_no_dma_resources", hw_rx_no_dma_resources),
+ TXGBE_STAT("hw_rsc_aggregated", rsc_total_count),
+ TXGBE_STAT("hw_rsc_flushed", rsc_total_flush),
+ TXGBE_STAT("fdir_match", stats.fdirmatch),
+ TXGBE_STAT("fdir_miss", stats.fdirmiss),
+ TXGBE_STAT("fdir_overflow", fdir_overflow),
+ TXGBE_STAT("os2bmc_rx_by_bmc", stats.o2bgptc),
+ TXGBE_STAT("os2bmc_tx_by_bmc", stats.b2ospc),
+ TXGBE_STAT("os2bmc_tx_by_host", stats.o2bspc),
+ TXGBE_STAT("os2bmc_rx_by_host", stats.b2ogprc),
+ TXGBE_STAT("tx_hwtstamp_timeouts", tx_hwtstamp_timeouts),
+ TXGBE_STAT("rx_hwtstamp_cleared", rx_hwtstamp_cleared),
+};
+
+/* txgbe allocates num_tx_queues and num_rx_queues symmetrically, so
+ * TXGBE_NUM_RX_QUEUES is defined to evaluate to num_tx_queues. This is
+ * used because we do not have a good way to get the max number of
+ * rx queues with CONFIG_RPS disabled.
+ */
+#define TXGBE_NUM_RX_QUEUES netdev->num_tx_queues
+#define TXGBE_NUM_TX_QUEUES netdev->num_tx_queues
+
+#define TXGBE_QUEUE_STATS_LEN ( \
+ (TXGBE_NUM_TX_QUEUES + TXGBE_NUM_RX_QUEUES) * \
+ (sizeof(struct txgbe_queue_stats) / sizeof(u64)))
+#define TXGBE_GLOBAL_STATS_LEN ARRAY_SIZE(txgbe_gstrings_stats)
+#define TXGBE_NETDEV_STATS_LEN ARRAY_SIZE(txgbe_gstrings_net_stats)
+#define TXGBE_PB_STATS_LEN ( \
+ (sizeof(((struct txgbe_adapter *)0)->stats.pxonrxc) + \
+ sizeof(((struct txgbe_adapter *)0)->stats.pxontxc) + \
+ sizeof(((struct txgbe_adapter *)0)->stats.pxoffrxc) + \
+ sizeof(((struct txgbe_adapter *)0)->stats.pxofftxc)) \
+ / sizeof(u64))
+#define TXGBE_VF_STATS_LEN \
+ ((((struct txgbe_adapter *)netdev_priv(netdev))->num_vfs) * \
+ (sizeof(struct vf_stats) / sizeof(u64)))
+#define TXGBE_STATS_LEN (TXGBE_GLOBAL_STATS_LEN + \
+ TXGBE_NETDEV_STATS_LEN + \
+ TXGBE_PB_STATS_LEN + \
+ TXGBE_QUEUE_STATS_LEN + \
+ TXGBE_VF_STATS_LEN)
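+
+/* For example, assuming struct txgbe_queue_stats holds two u64 counters
+ * (packets and bytes), a netdev with 8 Tx/Rx queue pairs and no VFs
+ * reports TXGBE_GLOBAL_STATS_LEN + TXGBE_NETDEV_STATS_LEN +
+ * (8 + 8) * 2 queue counters + 4 * TXGBE_MAX_PACKET_BUFFERS per-PB
+ * XON/XOFF counters.
+ */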
+
+static const char txgbe_gstrings_test[][ETH_GSTRING_LEN] = {
+ "Register test (offline)", "Eeprom test (offline)",
+ "Interrupt test (offline)", "Loopback test (offline)",
+ "Link test (on/offline)"
+};
+#define TXGBE_TEST_LEN (sizeof(txgbe_gstrings_test) / ETH_GSTRING_LEN)
+
+/* currently supported speeds for 10G */
+#define ADVERTISED_MASK_10G (SUPPORTED_10000baseT_Full | \
+ SUPPORTED_10000baseKX4_Full | \
+ SUPPORTED_10000baseKR_Full)
+
+#define txgbe_isbackplane(type) ((type) == txgbe_media_type_backplane)
+
+static __u32 txgbe_backplane_type(struct txgbe_hw *hw)
+{
+ __u32 mode = 0x00;
+ switch (hw->phy.link_mode) {
+ case TXGBE_PHYSICAL_LAYER_10GBASE_KX4:
+ mode = SUPPORTED_10000baseKX4_Full;
+ break;
+ case TXGBE_PHYSICAL_LAYER_10GBASE_KR:
+ mode = SUPPORTED_10000baseKR_Full;
+ break;
+ case TXGBE_PHYSICAL_LAYER_1000BASE_KX:
+ mode = SUPPORTED_1000baseKX_Full;
+ break;
+ default:
+ mode = (SUPPORTED_10000baseKX4_Full |
+ SUPPORTED_10000baseKR_Full |
+ SUPPORTED_1000baseKX_Full);
+ break;
+ }
+ return mode;
+}
+
+int txgbe_get_link_ksettings(struct net_device *netdev,
+ struct ethtool_link_ksettings *cmd)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+ struct txgbe_hw *hw = &adapter->hw;
+ u32 supported_link;
+ u32 link_speed = 0;
+ bool autoneg = false;
+ u32 supported, advertising;
+ bool link_up;
+
+ ethtool_convert_link_mode_to_legacy_u32(&supported,
+ cmd->link_modes.supported);
+
+ TCALL(hw, mac.ops.get_link_capabilities, &supported_link, &autoneg);
+
+ if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_KR_KX_KX4)
+		autoneg = adapter->backplane_an ? 1 : 0;
+ else if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_MAC_SGMII)
+		autoneg = adapter->an37 ? 1 : 0;
+
+ /* set the supported link speeds */
+ if (supported_link & TXGBE_LINK_SPEED_10GB_FULL)
+ supported |= (txgbe_isbackplane(hw->phy.media_type)) ?
+ txgbe_backplane_type(hw) : SUPPORTED_10000baseT_Full;
+ if (supported_link & TXGBE_LINK_SPEED_1GB_FULL)
+ supported |= (txgbe_isbackplane(hw->phy.media_type)) ?
+ SUPPORTED_1000baseKX_Full : SUPPORTED_1000baseT_Full;
+ if (supported_link & TXGBE_LINK_SPEED_100_FULL)
+ supported |= SUPPORTED_100baseT_Full;
+ if (supported_link & TXGBE_LINK_SPEED_10_FULL)
+ supported |= SUPPORTED_10baseT_Full;
+
+ /* default advertised speed if phy.autoneg_advertised isn't set */
+ advertising = supported;
+
+ /* set the advertised speeds */
+ if (hw->phy.autoneg_advertised) {
+ if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_100_FULL)
+ advertising |= ADVERTISED_100baseT_Full;
+ if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_10GB_FULL)
+ advertising |= (supported & ADVERTISED_MASK_10G);
+ if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_1GB_FULL) {
+ if (supported & SUPPORTED_1000baseKX_Full)
+ advertising |= ADVERTISED_1000baseKX_Full;
+ else
+ advertising |= ADVERTISED_1000baseT_Full;
+ }
+ if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_10_FULL)
+ advertising |= ADVERTISED_10baseT_Full;
+ } else {
+ /* default modes in case phy.autoneg_advertised isn't set */
+ if (supported_link & TXGBE_LINK_SPEED_10GB_FULL)
+ advertising |= ADVERTISED_10000baseT_Full;
+ if (supported_link & TXGBE_LINK_SPEED_1GB_FULL)
+ advertising |= ADVERTISED_1000baseT_Full;
+ if (supported_link & TXGBE_LINK_SPEED_100_FULL)
+ advertising |= ADVERTISED_100baseT_Full;
+ if (hw->phy.multispeed_fiber && !autoneg) {
+ if (supported_link & TXGBE_LINK_SPEED_10GB_FULL)
+ advertising = ADVERTISED_10000baseT_Full;
+ }
+ if (supported_link & TXGBE_LINK_SPEED_10_FULL)
+ advertising |= ADVERTISED_10baseT_Full;
+ }
+
+ if (autoneg) {
+ supported |= SUPPORTED_Autoneg;
+ advertising |= ADVERTISED_Autoneg;
+ cmd->base.autoneg = AUTONEG_ENABLE;
+	} else {
+		cmd->base.autoneg = AUTONEG_DISABLE;
+	}
+
+ /* Determine the remaining settings based on the PHY type. */
+ switch (adapter->hw.phy.type) {
+ case txgbe_phy_tn:
+ case txgbe_phy_aq:
+ case txgbe_phy_cu_unknown:
+ supported |= SUPPORTED_TP;
+ advertising |= ADVERTISED_TP;
+ cmd->base.port = PORT_TP;
+ break;
+ case txgbe_phy_qt:
+ supported |= SUPPORTED_FIBRE;
+ advertising |= ADVERTISED_FIBRE;
+ cmd->base.port = PORT_FIBRE;
+ break;
+ case txgbe_phy_nl:
+ case txgbe_phy_sfp_passive_tyco:
+ case txgbe_phy_sfp_passive_unknown:
+ case txgbe_phy_sfp_ftl:
+ case txgbe_phy_sfp_avago:
+ case txgbe_phy_sfp_intel:
+ case txgbe_phy_sfp_unknown:
+ switch (adapter->hw.phy.sfp_type) {
+ /* SFP+ devices, further checking needed */
+ case txgbe_sfp_type_da_cu:
+ case txgbe_sfp_type_da_cu_core0:
+ case txgbe_sfp_type_da_cu_core1:
+ supported |= SUPPORTED_FIBRE;
+ advertising |= ADVERTISED_FIBRE;
+ cmd->base.port = PORT_DA;
+ break;
+ case txgbe_sfp_type_sr:
+ case txgbe_sfp_type_lr:
+ case txgbe_sfp_type_srlr_core0:
+ case txgbe_sfp_type_srlr_core1:
+ case txgbe_sfp_type_1g_sx_core0:
+ case txgbe_sfp_type_1g_sx_core1:
+ case txgbe_sfp_type_1g_lx_core0:
+ case txgbe_sfp_type_1g_lx_core1:
+ supported |= SUPPORTED_FIBRE;
+ advertising |= ADVERTISED_FIBRE;
+ cmd->base.port = PORT_FIBRE;
+ break;
+ case txgbe_sfp_type_not_present:
+ supported |= SUPPORTED_FIBRE;
+ advertising |= ADVERTISED_FIBRE;
+ cmd->base.port = PORT_NONE;
+ break;
+ case txgbe_sfp_type_1g_cu_core0:
+ case txgbe_sfp_type_1g_cu_core1:
+ supported |= SUPPORTED_TP;
+ advertising |= ADVERTISED_TP;
+ cmd->base.port = PORT_TP;
+ break;
+ case txgbe_sfp_type_unknown:
+ default:
+ supported |= SUPPORTED_FIBRE;
+ advertising |= ADVERTISED_FIBRE;
+ cmd->base.port = PORT_OTHER;
+ break;
+ }
+ break;
+ case txgbe_phy_xaui:
+ supported |= SUPPORTED_TP;
+ advertising |= ADVERTISED_TP;
+ cmd->base.port = PORT_TP;
+ break;
+ case txgbe_phy_unknown:
+ case txgbe_phy_generic:
+ case txgbe_phy_sfp_unsupported:
+ default:
+ supported |= SUPPORTED_FIBRE;
+ advertising |= ADVERTISED_FIBRE;
+ cmd->base.port = PORT_OTHER;
+ break;
+ }
+
+ if (!in_interrupt()) {
+ TCALL(hw, mac.ops.check_link, &link_speed, &link_up, false);
+ } else {
+ /*
+ * this case is a special workaround for RHEL5 bonding
+ * that calls this routine from interrupt context
+ */
+ link_speed = adapter->link_speed;
+ link_up = adapter->link_up;
+ }
+
+ supported |= SUPPORTED_Pause;
+
+ switch (hw->fc.requested_mode) {
+ case txgbe_fc_full:
+ advertising |= ADVERTISED_Pause;
+ break;
+ case txgbe_fc_rx_pause:
+ advertising |= ADVERTISED_Pause |
+ ADVERTISED_Asym_Pause;
+ break;
+ case txgbe_fc_tx_pause:
+ advertising |= ADVERTISED_Asym_Pause;
+ break;
+ default:
+ advertising &= ~(ADVERTISED_Pause |
+ ADVERTISED_Asym_Pause);
+ }
+
+ if (link_up) {
+ switch (link_speed) {
+ case TXGBE_LINK_SPEED_10GB_FULL:
+ cmd->base.speed = SPEED_10000;
+ break;
+ case TXGBE_LINK_SPEED_1GB_FULL:
+ cmd->base.speed = SPEED_1000;
+ break;
+ case TXGBE_LINK_SPEED_100_FULL:
+ cmd->base.speed = SPEED_100;
+ break;
+ case TXGBE_LINK_SPEED_10_FULL:
+ cmd->base.speed = SPEED_10;
+ break;
+ default:
+ break;
+ }
+ cmd->base.duplex = DUPLEX_FULL;
+ } else {
+		cmd->base.speed = SPEED_UNKNOWN;
+		cmd->base.duplex = DUPLEX_UNKNOWN;
+ }
+
+ ethtool_convert_legacy_u32_to_link_mode(cmd->link_modes.supported,
+ supported);
+ ethtool_convert_legacy_u32_to_link_mode(cmd->link_modes.advertising,
+ advertising);
+ return 0;
+}
+
+static int txgbe_set_link_ksettings(struct net_device *netdev,
+ const struct ethtool_link_ksettings *cmd)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+ struct txgbe_hw *hw = &adapter->hw;
+ u32 advertised, old;
+ s32 err = 0;
+ u32 supported, advertising;
+ ethtool_convert_link_mode_to_legacy_u32(&supported,
+ cmd->link_modes.supported);
+ ethtool_convert_link_mode_to_legacy_u32(&advertising,
+ cmd->link_modes.advertising);
+
+ if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_KR_KX_KX4) {
+ adapter->backplane_an = cmd->base.autoneg ? 1 : 0;
+ } else if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_MAC_SGMII) {
+ adapter->an37 = cmd->base.autoneg ? 1 : 0;
+ }
+
+ if ((hw->phy.media_type == txgbe_media_type_copper) ||
+ (hw->phy.multispeed_fiber)) {
+ /*
+ * this function does not support duplex forcing, but can
+ * limit the advertising of the adapter to the specified speed
+ */
+ if (advertising & ~supported)
+ return -EINVAL;
+
+ /* only allow one speed at a time if no autoneg */
+ if (!cmd->base.autoneg && hw->phy.multispeed_fiber) {
+ if (advertising ==
+ (ADVERTISED_10000baseT_Full |
+ ADVERTISED_1000baseT_Full))
+ return -EINVAL;
+ }
+ old = hw->phy.autoneg_advertised;
+ advertised = 0;
+ if (advertising & ADVERTISED_10000baseT_Full)
+ advertised |= TXGBE_LINK_SPEED_10GB_FULL;
+
+ if (advertising & ADVERTISED_1000baseT_Full)
+ advertised |= TXGBE_LINK_SPEED_1GB_FULL;
+
+ if (advertising & ADVERTISED_100baseT_Full)
+ advertised |= TXGBE_LINK_SPEED_100_FULL;
+
+ if (advertising & ADVERTISED_10baseT_Full)
+ advertised |= TXGBE_LINK_SPEED_10_FULL;
+
+ if (old == advertised)
+ return err;
+ /* this sets the link speed and restarts auto-neg */
+ while (test_and_set_bit(__TXGBE_IN_SFP_INIT, &adapter->state))
+ usleep_range(1000, 2000);
+
+ hw->mac.autotry_restart = true;
+ err = TCALL(hw, mac.ops.setup_link, advertised, true);
+ if (err) {
+ e_info(probe, "setup link failed with code %d\n", err);
+ TCALL(hw, mac.ops.setup_link, old, true);
+ }
+ if ((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP)
+ TCALL(hw, mac.ops.flap_tx_laser);
+ clear_bit(__TXGBE_IN_SFP_INIT, &adapter->state);
+ } else if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_KR_KX_KX4 ||
+ (hw->subsystem_device_id & 0xF0) == TXGBE_ID_MAC_SGMII) {
+ if (!cmd->base.autoneg) {
+ if (advertising ==
+ (ADVERTISED_10000baseKR_Full |
+ ADVERTISED_1000baseKX_Full |
+ ADVERTISED_10000baseKX4_Full))
+ return -EINVAL;
+ } else {
+ err = txgbe_set_link_to_kr(hw, 1);
+ return err;
+ }
+ advertised = 0;
+ if (advertising & ADVERTISED_10000baseKR_Full) {
+ err = txgbe_set_link_to_kr(hw, 1);
+ advertised |= TXGBE_LINK_SPEED_10GB_FULL;
+ return err;
+ } else if (advertising & ADVERTISED_10000baseKX4_Full) {
+ err = txgbe_set_link_to_kx4(hw, 1);
+ advertised |= TXGBE_LINK_SPEED_10GB_FULL;
+ return err;
+ } else if (advertising & ADVERTISED_1000baseKX_Full) {
+ advertised |= TXGBE_LINK_SPEED_1GB_FULL;
+ err = txgbe_set_link_to_kx(hw, TXGBE_LINK_SPEED_1GB_FULL, 0);
+ return err;
+ }
+ return err;
+ } else {
+ /* in this case we currently only support 10Gb/FULL */
+ u32 speed = cmd->base.speed;
+ if ((cmd->base.autoneg == AUTONEG_ENABLE) ||
+ (advertising != ADVERTISED_10000baseT_Full) ||
+ (speed + cmd->base.duplex != SPEED_10000 + DUPLEX_FULL))
+ return -EINVAL;
+ }
+
+ return err;
+}
+
+static void txgbe_get_pauseparam(struct net_device *netdev,
+ struct ethtool_pauseparam *pause)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+ struct txgbe_hw *hw = &adapter->hw;
+
+ if (txgbe_device_supports_autoneg_fc(hw) &&
+ !hw->fc.disable_fc_autoneg)
+ pause->autoneg = 1;
+ else
+ pause->autoneg = 0;
+
+ if (hw->fc.current_mode == txgbe_fc_rx_pause) {
+ pause->rx_pause = 1;
+ } else if (hw->fc.current_mode == txgbe_fc_tx_pause) {
+ pause->tx_pause = 1;
+ } else if (hw->fc.current_mode == txgbe_fc_full) {
+ pause->rx_pause = 1;
+ pause->tx_pause = 1;
+ }
+}
+
+static int txgbe_set_pauseparam(struct net_device *netdev,
+ struct ethtool_pauseparam *pause)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+ struct txgbe_hw *hw = &adapter->hw;
+ struct txgbe_fc_info fc = hw->fc;
+
+ /* some devices do not support autoneg of flow control */
+ if ((pause->autoneg == AUTONEG_ENABLE) &&
+ !txgbe_device_supports_autoneg_fc(hw))
+ return -EINVAL;
+
+ fc.disable_fc_autoneg = (pause->autoneg != AUTONEG_ENABLE);
+
+ if ((pause->rx_pause && pause->tx_pause) || pause->autoneg)
+ fc.requested_mode = txgbe_fc_full;
+ else if (pause->rx_pause)
+ fc.requested_mode = txgbe_fc_rx_pause;
+ else if (pause->tx_pause)
+ fc.requested_mode = txgbe_fc_tx_pause;
+ else
+ fc.requested_mode = txgbe_fc_none;
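+	/* i.e.: autoneg or (rx && tx) -> full; rx only -> rx_pause;
+	 * tx only -> tx_pause; neither -> none
+	 */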
+
+ /* if the thing changed then we'll update and use new autoneg */
+ if (memcmp(&fc, &hw->fc, sizeof(struct txgbe_fc_info))) {
+ hw->fc = fc;
+ if (netif_running(netdev))
+ txgbe_reinit_locked(adapter);
+ else
+ txgbe_reset(adapter);
+ }
+
+ return 0;
+}
+
+static u32 txgbe_get_msglevel(struct net_device *netdev)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+ return adapter->msg_enable;
+}
+
+static void txgbe_set_msglevel(struct net_device *netdev, u32 data)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+ adapter->msg_enable = data;
+}
+
+#define TXGBE_REGS_LEN 4096
+static int txgbe_get_regs_len(struct net_device __always_unused *netdev)
+{
+ return TXGBE_REGS_LEN * sizeof(u32);
+}
+
+#define TXGBE_GET_STAT(_A_, _R_) (_A_->stats._R_)
+
+static void txgbe_get_regs(struct net_device *netdev, struct ethtool_regs *regs,
+ void *p)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+ struct txgbe_hw *hw = &adapter->hw;
+ u32 *regs_buff = p;
+ u32 i;
+ u32 id = 0;
+
+ memset(p, 0, TXGBE_REGS_LEN * sizeof(u32));
+ regs_buff[TXGBE_REGS_LEN - 1] = 0x55555555;
+
+ regs->version = hw->revision_id << 16 |
+ hw->device_id;
+
+ /* Global Registers */
+ /* chip control */
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_MIS_PWR);//0
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_MIS_CTL);//1
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_MIS_PF_SM);//2
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_MIS_RST);//3
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_MIS_ST);//4
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_MIS_SWSM);//5
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_MIS_RST_ST);//6
+ /* pvt sensor */
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TS_CTL);//7
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TS_EN);//8
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TS_ST);//9
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TS_ALARM_THRE);//10
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TS_DALARM_THRE);//11
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TS_INT_EN);//12
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TS_ALARM_ST);//13
+ /* Fmgr Register */
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_SPI_CMD);//14
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_SPI_DATA);//15
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_SPI_STATUS);//16
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_SPI_USR_CMD);//17
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_SPI_CMDCFG0);//18
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_SPI_CMDCFG1);//19
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_SPI_ILDR_STATUS);//20
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_SPI_ILDR_SWPTR);//21
+
+ /* Port Registers */
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_PORT_CTL);//22
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_PORT_ST);//23
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_EX_VTYPE);//24
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_VXLAN);//25
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_VXLAN_GPE);//26
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_GENEVE);//27
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_TEREDO);//28
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_TCP_TIME);//29
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_LED_CTL);//30
+ /* GPIO */
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_GPIO_DR);//31
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_GPIO_DDR);//32
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_GPIO_CTL);//33
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_GPIO_INTEN);//34
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_GPIO_INTMASK);//35
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_GPIO_INTSTATUS);//36
+ /* I2C */
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CON);//37
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_TAR);//38
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_DATA_CMD);//39
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_SS_SCL_HCNT);//40
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_SS_SCL_LCNT);//41
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_FS_SCL_HCNT);//42
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_FS_SCL_LCNT);//43
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_HS_SCL_HCNT);//44
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_INTR_STAT);//45
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_INTR_MASK);//46
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_RAW_INTR_STAT);//47
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_RX_TL);//48
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_TX_TL);//49
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CLR_INTR);//50
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CLR_RX_UNDER);//51
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CLR_RX_OVER);//52
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CLR_TX_OVER);//53
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CLR_RD_REQ);//54
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CLR_TX_ABRT);//55
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CLR_RX_DONE);//56
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CLR_ACTIVITY);//57
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CLR_STOP_DET);//58
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CLR_START_DET);//59
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CLR_GEN_CALL);//60
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_ENABLE);//61
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_STATUS);//62
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_TXFLR);//63
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_RXFLR);//64
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_SDA_HOLD);//65
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_TX_ABRT_SOURCE);//66
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_SDA_SETUP);//67
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_ENABLE_STATUS);//68
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_FS_SPKLEN);//69
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_HS_SPKLEN);//70
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_SCL_STUCK_TIMEOUT);//71
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_SDA_STUCK_TIMEOUT);//72
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_CLR_SCL_STUCK_DET);//73
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_DEVICE_ID);//74
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_COMP_PARAM_1);//75
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_COMP_VERSION);//76
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_I2C_COMP_TYPE);//77
+ /* TX TPH */
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_TPH_TDESC);//78
+ /* RX TPH */
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_TPH_RDESC);//79
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_TPH_RHDR);//80
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_CFG_TPH_RPL);//81
+
+ /* TDMA */
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_CTL);//82
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_VF_TE(0));//83
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_VF_TE(1));//84
+ for (i = 0; i < 8; i++) {
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_PB_THRE(i));//85-92
+ }
+ for (i = 0; i < 4; i++) {
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_LLQ(i));//93-96
+ }
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_ETYPE_LB_L);//97
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_ETYPE_LB_H);//98
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_ETYPE_AS_L);//99
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_ETYPE_AS_H);//100
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_MAC_AS_L);//101
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_MAC_AS_H);//102
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_VLAN_AS_L);//103
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_VLAN_AS_H);//104
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_TCP_FLG_L);//105
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_TCP_FLG_H);//106
+ for (i = 0; i < 64; i++) {
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_VLAN_INS(i));//107-234
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_ETAG_INS(i));
+ }
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_PBWARB_CTL);//235
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_MMW);//236
+ for (i = 0; i < 8; i++) {
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_PBWARB_CFG(i));//237-244
+ }
+ for (i = 0; i < 128; i++) {
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_VM_CREDIT(i));//245-372
+ }
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_FC_EOF);//373
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDM_FC_SOF);//374
+
+ /* RDMA */
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDM_ARB_CTL);//375
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDM_VF_RE(0));//376
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDM_VF_RE(1));//377
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDM_RSC_CTL);//378
+ for (i = 0; i < 8; i++) {
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDM_ARB_CFG(i));//379-386
+ }
+ for (i = 0; i < 4; i++) {
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDM_PF_QDE(i));//387-394
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDM_PF_HIDE(i));
+ }
+
+ /* RDB */
+ /*flow control */
+ for (i = 0; i < 4; i++) {
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_RFCV(i));//395-398
+ }
+ for (i = 0; i < 8; i++) {
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_RFCL(i));//399-414
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_RFCH(i));
+ }
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_RFCRT);//415
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_RFCC);//416
+ /* receive packet buffer */
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_PB_CTL);//417
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_PB_WRAP);//418
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_UP2TC);//419
+ for (i = 0; i < 8; i++) {
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_PB_SZ(i));//420-435
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_MPCNT(i));
+ }
+ /* lli interrupt */
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_LLI_THRE);//436
+ /* ring assignment */
+ for (i = 0; i < 64; i++) {
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_PL_CFG(i));//437-500
+ }
+ for (i = 0; i < 32; i++) {
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_RSSTBL(i));//501-532
+ }
+ for (i = 0; i < 10; i++) {
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_RSSRK(i));//533-542
+ }
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_RSS_TC);//543
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_RA_CTL);//544
+ for (i = 0; i < 128; i++) {
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_5T_SA(i));//545-1184
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_5T_DA(i));
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_5T_SDP(i));
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_5T_CTL0(i));
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_5T_CTL1(i));
+ }
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_SYN_CLS);//1185
+ for (i = 0; i < 8; i++) {
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_ETYPE_CLS(i));//1186-1193
+ }
+ /* fcoe redirection table */
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FCRE_CTL);//1194
+ for (i = 0; i < 8; i++) {
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FCRE_TBL(i));//1195-1202
+ }
+ /*flow director */
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_CTL);//1203
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_HKEY);//1204
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_SKEY);//1205
+ for (i = 0; i < 16; i++) {
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_FLEX_CFG(i));//1206-1221
+ }
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_FREE);//1222
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_LEN);//1223
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_USE_ST);//1224
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_FAIL_ST);//1225
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_MATCH);//1226
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_MISS);//1227
+ for (i = 0; i < 3; i++) {
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_IP6(i));//1228-1230
+ }
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_SA);//1231
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_DA);//1232
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_PORT);//1233
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_FLEX);//1234
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_HASH);//1235
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_CMD);//1236
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_DA4_MSK);//1237
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_SA4_MSK);//1238
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_TCP_MSK);//1239
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_UDP_MSK);//1240
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_SCTP_MSK);//1241
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_IP6_MSK);//1242
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RDB_FDIR_OTHER_MSK);//1243
+
+ /* PSR */
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_CTL);//1244
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_VLAN_CTL);//1245
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_VM_CTL);//1246
+ for (i = 0; i < 64; i++) {
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_VM_L2CTL(i));//1247-1310
+ }
+ for (i = 0; i < 8; i++) {
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_ETYPE_SWC(i));//1311-1318
+ }
+ for (i = 0; i < 128; i++) {
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_MC_TBL(i));//1319-1702
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_UC_TBL(i));
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_VLAN_TBL(i));
+ }
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_MAC_SWC_AD_L);//1703
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_MAC_SWC_AD_H);//1704
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_MAC_SWC_VM_L);//1705
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_MAC_SWC_VM_H);//1706
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_MAC_SWC_IDX);//1707
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_VLAN_SWC);//1708
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_VLAN_SWC_VM_L);//1709
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_VLAN_SWC_VM_H);//1710
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_VLAN_SWC_IDX);//1711
+ for (i = 0; i < 4; i++) {
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_MR_CTL(i));//1712-1731
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_MR_VLAN_L(i));
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_MR_VLAN_H(i));
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_MR_VM_L(i));
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_MR_VM_H(i));
+ }
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_1588_CTL);//1732
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_1588_STMPL);//1733
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_1588_STMPH);//1734
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_1588_ATTRL);//1735
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_1588_ATTRH);//1736
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_1588_MSGTYPE);//1737
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_WKUP_CTL);//1738
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_WKUP_IPV);//1739
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_LAN_FLEX_CTL);//1740
+ for (i = 0; i < 4; i++) {
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_WKUP_IP4TBL(i));//1741-1748
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_WKUP_IP6TBL(i));
+ }
+ for (i = 0; i < 16; i++) {
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_LAN_FLEX_DW_L(i));//1749-1796
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_LAN_FLEX_DW_H(i));
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_LAN_FLEX_MSK(i));
+ }
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PSR_LAN_FLEX_CTL);//1797
+
+ /* TDB */
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDB_RFCS);//1798
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDB_PB_SZ(0));//1799
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDB_UP2TC);//1800
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDB_PBRARB_CTL);//1801
+ for (i = 0; i < 8; i++) {
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDB_PBRARB_CFG(i));//1802-1809
+ }
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TDB_MNG_TC);//1810
+
+ /* tsec */
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_CTL);//1811
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_ST);//1812
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_BUF_AF);//1813
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_BUF_AE);//1814
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_MIN_IFG);//1815
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_1588_CTL);//1816
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_1588_STMPL);//1817
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_1588_STMPH);//1818
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_1588_SYSTIML);//1819
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_1588_SYSTIMH);//1820
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_1588_INC);//1821
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_1588_ADJL);//1822
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_TSC_1588_ADJH);//1823
+
+ /* RSEC */
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RSC_CTL);//1824
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_RSC_ST);//1825
+
+ /* BAR register */
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_MISC_IC);//1826
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_MISC_ICS);//1827
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_MISC_IEN);//1828
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_GPIE);//1829
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_IC(0));//1830
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_IC(1));//1831
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_ICS(0));//1832
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_ICS(1));//1833
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_IMS(0));//1834
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_IMS(1));//1835
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_IMC(0));//1836
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_IMC(1));//1837
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_ISB_ADDR_L);//1838
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_ISB_ADDR_H);//1839
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_ITRSEL);//1840
+ for (i = 0; i < 64; i++) {
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_ITR(i));//1841-1968
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_IVAR(i));
+ }
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_MISC_IVAR);//1969
+ for (i = 0; i < 128; i++) {
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_RR_BAL(i));//1970-3249
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_RR_BAH(i));
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_RR_WP(i));
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_RR_RP(i));
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_RR_CFG(i));
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_TR_BAL(i));
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_TR_BAH(i));
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_TR_WP(i));
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_TR_RP(i));
+ regs_buff[id++] = TXGBE_R32_Q(hw, TXGBE_PX_TR_CFG(i));
+ }
+}
+
+static int txgbe_get_eeprom_len(struct net_device *netdev)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
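+	/* word_size counts 16-bit words, e.g. a 2048-word map is 4096 bytes */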
+ return adapter->hw.eeprom.word_size * 2;
+}
+
+static int txgbe_get_eeprom(struct net_device *netdev,
+ struct ethtool_eeprom *eeprom, u8 *bytes)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+ struct txgbe_hw *hw = &adapter->hw;
+ u16 *eeprom_buff;
+ int first_word, last_word, eeprom_len;
+ int ret_val = 0;
+ u16 i;
+
+ if (eeprom->len == 0)
+ return -EINVAL;
+
+ eeprom->magic = hw->vendor_id | (hw->device_id << 16);
+
+ first_word = eeprom->offset >> 1;
+ last_word = (eeprom->offset + eeprom->len - 1) >> 1;
+ eeprom_len = last_word - first_word + 1;
+
+ eeprom_buff = kmalloc(sizeof(u16) * eeprom_len, GFP_KERNEL);
+ if (!eeprom_buff)
+ return -ENOMEM;
+
+ ret_val = TCALL(hw, eeprom.ops.read_buffer, first_word, eeprom_len,
+ eeprom_buff);
+
+ /* Device's eeprom is always little-endian, word addressable */
+ for (i = 0; i < eeprom_len; i++)
+ le16_to_cpus(&eeprom_buff[i]);
+
+ memcpy(bytes, (u8 *)eeprom_buff + (eeprom->offset & 1), eeprom->len);
+ kfree(eeprom_buff);
+
+ return ret_val;
+}
+
+static int txgbe_set_eeprom(struct net_device *netdev,
+ struct ethtool_eeprom *eeprom, u8 *bytes)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+ struct txgbe_hw *hw = &adapter->hw;
+ u16 *eeprom_buff;
+ void *ptr;
+ int max_len, first_word, last_word, ret_val = 0;
+ u16 i;
+
+ if (eeprom->len == 0)
+ return -EINVAL;
+
+ if (eeprom->magic != (hw->vendor_id | (hw->device_id << 16)))
+ return -EINVAL;
+
+ max_len = hw->eeprom.word_size * 2;
+
+ first_word = eeprom->offset >> 1;
+ last_word = (eeprom->offset + eeprom->len - 1) >> 1;
+ eeprom_buff = kmalloc(max_len, GFP_KERNEL);
+ if (!eeprom_buff)
+ return -ENOMEM;
+
+ ptr = eeprom_buff;
+
+ if (eeprom->offset & 1) {
+ /*
+ * need read/modify/write of first changed EEPROM word
+ * only the second byte of the word is being modified
+ */
+ ret_val = TCALL(hw, eeprom.ops.read, first_word,
+ &eeprom_buff[0]);
+ if (ret_val)
+ goto err;
+
+ ptr++;
+ }
+ if (((eeprom->offset + eeprom->len) & 1) && (ret_val == 0)) {
+ /*
+ * need read/modify/write of last changed EEPROM word
+ * only the first byte of the word is being modified
+ */
+ ret_val = TCALL(hw, eeprom.ops.read, last_word,
+ &eeprom_buff[last_word - first_word]);
+ if (ret_val)
+ goto err;
+ }
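+
+	/* Worked example: offset = 3, len = 6 touches bytes 3..8, i.e. words
+	 * 1..4 (first_word = 3 >> 1 = 1, last_word = (3 + 6 - 1) >> 1 = 4).
+	 * Word 1 gets a read-modify-write of its high byte, word 4 of its
+	 * low byte, and words 2..3 are overwritten whole.
+	 */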
+
+ /* Device's eeprom is always little-endian, word addressable */
+ for (i = 0; i < last_word - first_word + 1; i++)
+ le16_to_cpus(&eeprom_buff[i]);
+
+ memcpy(ptr, bytes, eeprom->len);
+
+ for (i = 0; i < last_word - first_word + 1; i++)
+ cpu_to_le16s(&eeprom_buff[i]);
+
+ ret_val = TCALL(hw, eeprom.ops.write_buffer, first_word,
+ last_word - first_word + 1,
+ eeprom_buff);
+
+ /* Update the checksum */
+ if (ret_val == 0)
+ TCALL(hw, eeprom.ops.update_checksum);
+
+err:
+ kfree(eeprom_buff);
+ return ret_val;
+}
+
+static void txgbe_get_drvinfo(struct net_device *netdev,
+ struct ethtool_drvinfo *drvinfo)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+
+ strncpy(drvinfo->driver, txgbe_driver_name,
+ sizeof(drvinfo->driver) - 1);
+ strncpy(drvinfo->version, txgbe_driver_version,
+ sizeof(drvinfo->version) - 1);
+	strncpy(drvinfo->fw_version, adapter->eeprom_id,
+		sizeof(drvinfo->fw_version) - 1);
+ strncpy(drvinfo->bus_info, pci_name(adapter->pdev),
+ sizeof(drvinfo->bus_info) - 1);
+ if (adapter->num_tx_queues <= TXGBE_NUM_RX_QUEUES) {
+ drvinfo->n_stats = TXGBE_STATS_LEN -
+			(TXGBE_NUM_RX_QUEUES - adapter->num_tx_queues) *
+			(sizeof(struct txgbe_queue_stats) / sizeof(u64)) * 2;
+ } else {
+ drvinfo->n_stats = TXGBE_STATS_LEN;
+ }
+ drvinfo->testinfo_len = TXGBE_TEST_LEN;
+ drvinfo->regdump_len = txgbe_get_regs_len(netdev);
+}
+
+static void txgbe_get_ringparam(struct net_device *netdev,
+ struct ethtool_ringparam *ring)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+
+ ring->rx_max_pending = TXGBE_MAX_RXD;
+ ring->tx_max_pending = TXGBE_MAX_TXD;
+ ring->rx_mini_max_pending = 0;
+ ring->rx_jumbo_max_pending = 0;
+ ring->rx_pending = adapter->rx_ring_count;
+ ring->tx_pending = adapter->tx_ring_count;
+ ring->rx_mini_pending = 0;
+ ring->rx_jumbo_pending = 0;
+}
+
+static int txgbe_set_ringparam(struct net_device *netdev,
+ struct ethtool_ringparam *ring)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+ struct txgbe_ring *temp_ring;
+ int i, err = 0;
+ u32 new_rx_count, new_tx_count;
+
+ if ((ring->rx_mini_pending) || (ring->rx_jumbo_pending))
+ return -EINVAL;
+
+ new_tx_count = clamp_t(u32, ring->tx_pending,
+ TXGBE_MIN_TXD, TXGBE_MAX_TXD);
+ new_tx_count = ALIGN(new_tx_count, TXGBE_REQ_TX_DESCRIPTOR_MULTIPLE);
+
+ new_rx_count = clamp_t(u32, ring->rx_pending,
+ TXGBE_MIN_RXD, TXGBE_MAX_RXD);
+ new_rx_count = ALIGN(new_rx_count, TXGBE_REQ_RX_DESCRIPTOR_MULTIPLE);
+
+ if ((new_tx_count == adapter->tx_ring_count) &&
+ (new_rx_count == adapter->rx_ring_count)) {
+ /* nothing to do */
+ return 0;
+ }
+
+ while (test_and_set_bit(__TXGBE_RESETTING, &adapter->state))
+ usleep_range(1000, 2000);
+
+ if (!netif_running(adapter->netdev)) {
+ for (i = 0; i < adapter->num_tx_queues; i++)
+ adapter->tx_ring[i]->count = new_tx_count;
+ for (i = 0; i < adapter->num_rx_queues; i++)
+ adapter->rx_ring[i]->count = new_rx_count;
+ adapter->tx_ring_count = new_tx_count;
+ adapter->rx_ring_count = new_rx_count;
+ goto clear_reset;
+ }
+
+ /* allocate temporary buffer to store rings in */
+ i = max_t(int, adapter->num_tx_queues, adapter->num_rx_queues);
+ temp_ring = vmalloc(i * sizeof(struct txgbe_ring));
+
+ if (!temp_ring) {
+ err = -ENOMEM;
+ goto clear_reset;
+ }
+
+ txgbe_down(adapter);
+
+ /*
+ * Setup new Tx resources and free the old Tx resources in that order.
+ * We can then assign the new resources to the rings via a memcpy.
+ * The advantage to this approach is that we are guaranteed to still
+ * have resources even in the case of an allocation failure.
+ */
+ if (new_tx_count != adapter->tx_ring_count) {
+ for (i = 0; i < adapter->num_tx_queues; i++) {
+ memcpy(&temp_ring[i], adapter->tx_ring[i],
+ sizeof(struct txgbe_ring));
+
+ temp_ring[i].count = new_tx_count;
+ err = txgbe_setup_tx_resources(&temp_ring[i]);
+ if (err) {
+ while (i) {
+ i--;
+ txgbe_free_tx_resources(&temp_ring[i]);
+ }
+ goto err_setup;
+ }
+ }
+
+ for (i = 0; i < adapter->num_tx_queues; i++) {
+ txgbe_free_tx_resources(adapter->tx_ring[i]);
+
+ memcpy(adapter->tx_ring[i], &temp_ring[i],
+ sizeof(struct txgbe_ring));
+ }
+
+ adapter->tx_ring_count = new_tx_count;
+ }
+
+ /* Repeat the process for the Rx rings if needed */
+ if (new_rx_count != adapter->rx_ring_count) {
+ for (i = 0; i < adapter->num_rx_queues; i++) {
+ memcpy(&temp_ring[i], adapter->rx_ring[i],
+ sizeof(struct txgbe_ring));
+
+ temp_ring[i].count = new_rx_count;
+ err = txgbe_setup_rx_resources(&temp_ring[i]);
+ if (err) {
+ while (i) {
+ i--;
+ txgbe_free_rx_resources(&temp_ring[i]);
+ }
+ goto err_setup;
+ }
+ }
+
+ for (i = 0; i < adapter->num_rx_queues; i++) {
+ txgbe_free_rx_resources(adapter->rx_ring[i]);
+ memcpy(adapter->rx_ring[i], &temp_ring[i],
+ sizeof(struct txgbe_ring));
+ }
+
+ adapter->rx_ring_count = new_rx_count;
+ }
+
+err_setup:
+ txgbe_up(adapter);
+ vfree(temp_ring);
+clear_reset:
+ clear_bit(__TXGBE_RESETTING, &adapter->state);
+ return err;
+}
+
+static int txgbe_get_sset_count(struct net_device *netdev, int sset)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+
+ switch (sset) {
+ case ETH_SS_TEST:
+ return TXGBE_TEST_LEN;
+ case ETH_SS_STATS:
+ if (adapter->num_tx_queues <= TXGBE_NUM_RX_QUEUES) {
+ return TXGBE_STATS_LEN - (TXGBE_NUM_RX_QUEUES - adapter->num_tx_queues) *
+ (sizeof(struct txgbe_queue_stats) / sizeof(u64)) * 2;
+ } else {
+ return TXGBE_STATS_LEN;
+ }
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
+static void txgbe_get_ethtool_stats(struct net_device *netdev,
+ struct ethtool_stats *stats, u64 *data)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+ struct rtnl_link_stats64 temp;
+ const struct rtnl_link_stats64 *net_stats;
+
+ u64 *queue_stat;
+ int stat_count, k;
+ unsigned int start;
+ struct txgbe_ring *ring;
+ int i, j;
+ char *p;
+
+ txgbe_update_stats(adapter);
+ net_stats = dev_get_stats(netdev, &temp);
+
+ for (i = 0; i < TXGBE_NETDEV_STATS_LEN; i++) {
+ p = (char *)net_stats + txgbe_gstrings_net_stats[i].stat_offset;
+ data[i] = (txgbe_gstrings_net_stats[i].sizeof_stat ==
+ sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
+ }
+
+ for (j = 0; j < TXGBE_GLOBAL_STATS_LEN; j++, i++) {
+ p = (char *)adapter + txgbe_gstrings_stats[j].stat_offset;
+ data[i] = (txgbe_gstrings_stats[j].sizeof_stat ==
+ sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
+ }
+
+ for (j = 0; j < adapter->num_tx_queues; j++) {
+ ring = adapter->tx_ring[j];
+ if (!ring) {
+ data[i++] = 0;
+ data[i++] = 0;
+#ifdef BP_EXTENDED_STATS
+ data[i++] = 0;
+ data[i++] = 0;
+ data[i++] = 0;
+#endif
+ continue;
+ }
+
+ do {
+ start = u64_stats_fetch_begin_irq(&ring->syncp);
+ data[i] = ring->stats.packets;
+ data[i+1] = ring->stats.bytes;
+ } while (u64_stats_fetch_retry_irq(&ring->syncp, start));
+ i += 2;
+ }
+ for (j = 0; j < adapter->num_rx_queues; j++) {
+ ring = adapter->rx_ring[j];
+ if (!ring) {
+ data[i++] = 0;
+ data[i++] = 0;
+ continue;
+ }
+
+ do {
+ start = u64_stats_fetch_begin_irq(&ring->syncp);
+ data[i] = ring->stats.packets;
+ data[i+1] = ring->stats.bytes;
+ } while (u64_stats_fetch_retry_irq(&ring->syncp, start));
+ i += 2;
+ }
+ for (j = 0; j < TXGBE_MAX_PACKET_BUFFERS; j++) {
+ data[i++] = adapter->stats.pxontxc[j];
+ data[i++] = adapter->stats.pxofftxc[j];
+ }
+ for (j = 0; j < TXGBE_MAX_PACKET_BUFFERS; j++) {
+ data[i++] = adapter->stats.pxonrxc[j];
+ data[i++] = adapter->stats.pxoffrxc[j];
+ }
+
+ stat_count = sizeof(struct vf_stats) / sizeof(u64);
+ for (j = 0; j < adapter->num_vfs; j++) {
+ queue_stat = (u64 *)&adapter->vfinfo[j].vfstats;
+ for (k = 0; k < stat_count; k++)
+ data[i + k] = queue_stat[k];
+ queue_stat = (u64 *)&adapter->vfinfo[j].saved_rst_vfstats;
+ for (k = 0; k < stat_count; k++)
+ data[i + k] += queue_stat[k];
+ i += k;
+ }
+}
+
+static void txgbe_get_strings(struct net_device *netdev, u32 stringset,
+ u8 *data)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+ char *p = (char *)data;
+ int i;
+
+ switch (stringset) {
+ case ETH_SS_TEST:
+ memcpy(data, *txgbe_gstrings_test,
+ TXGBE_TEST_LEN * ETH_GSTRING_LEN);
+ break;
+ case ETH_SS_STATS:
+ for (i = 0; i < TXGBE_NETDEV_STATS_LEN; i++) {
+ memcpy(p, txgbe_gstrings_net_stats[i].stat_string,
+ ETH_GSTRING_LEN);
+ p += ETH_GSTRING_LEN;
+ }
+ for (i = 0; i < TXGBE_GLOBAL_STATS_LEN; i++) {
+ memcpy(p, txgbe_gstrings_stats[i].stat_string,
+ ETH_GSTRING_LEN);
+ p += ETH_GSTRING_LEN;
+ }
+ for (i = 0; i < adapter->num_tx_queues; i++) {
+ sprintf(p, "tx_queue_%u_packets", i);
+ p += ETH_GSTRING_LEN;
+ sprintf(p, "tx_queue_%u_bytes", i);
+ p += ETH_GSTRING_LEN;
+ }
+ for (i = 0; i < adapter->num_rx_queues; i++) {
+ sprintf(p, "rx_queue_%u_packets", i);
+ p += ETH_GSTRING_LEN;
+ sprintf(p, "rx_queue_%u_bytes", i);
+ p += ETH_GSTRING_LEN;
+ }
+ for (i = 0; i < TXGBE_MAX_PACKET_BUFFERS; i++) {
+ sprintf(p, "tx_pb_%u_pxon", i);
+ p += ETH_GSTRING_LEN;
+ sprintf(p, "tx_pb_%u_pxoff", i);
+ p += ETH_GSTRING_LEN;
+ }
+ for (i = 0; i < TXGBE_MAX_PACKET_BUFFERS; i++) {
+ sprintf(p, "rx_pb_%u_pxon", i);
+ p += ETH_GSTRING_LEN;
+ sprintf(p, "rx_pb_%u_pxoff", i);
+ p += ETH_GSTRING_LEN;
+ }
+ for (i = 0; i < adapter->num_vfs; i++) {
+ sprintf(p, "VF %d Rx Packets", i);
+ p += ETH_GSTRING_LEN;
+ sprintf(p, "VF %d Rx Bytes", i);
+ p += ETH_GSTRING_LEN;
+ sprintf(p, "VF %d Tx Packets", i);
+ p += ETH_GSTRING_LEN;
+ sprintf(p, "VF %d Tx Bytes", i);
+ p += ETH_GSTRING_LEN;
+ sprintf(p, "VF %d MC Packets", i);
+ p += ETH_GSTRING_LEN;
+ }
+ /* BUG_ON(p - data != TXGBE_STATS_LEN * ETH_GSTRING_LEN); */
+ break;
+ }
+}
+
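+/*
+ * Self-test helpers backing "ethtool -t <if> [online|offline]". Each
+ * test writes 0 to its data[] slot on success and a non-zero diagnostic
+ * code on failure, following the convention of the ixgbe code this
+ * driver is derived from.
+ */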
+static int txgbe_link_test(struct txgbe_adapter *adapter, u64 *data)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ bool link_up;
+ u32 link_speed = 0;
+
+ if (TXGBE_REMOVED(hw->hw_addr)) {
+ *data = 1;
+ return 1;
+ }
+ *data = 0;
+ TCALL(hw, mac.ops.check_link, &link_speed, &link_up, true);
+	if (!link_up)
+		*data = 1;
+	return *data;
+}
+
+/* ethtool register test data */
+struct txgbe_reg_test {
+ u32 reg;
+ u8 array_len;
+ u8 test_type;
+ u32 mask;
+ u32 write;
+};
+
+/* In the hardware, registers are laid out either singly, in arrays
+ * spaced 0x40 bytes apart, or in contiguous tables. We assume
+ * most tests take place on arrays or single registers (handled
+ * as a single-element array) and special-case the tables.
+ * Table tests are always pattern tests.
+ *
+ * We also make provision for some required setup steps by specifying
+ * registers to be written without any read-back testing.
+ */
+
+#define PATTERN_TEST 1
+#define SET_READ_TEST 2
+#define WRITE_NO_TEST 3
+#define TABLE32_TEST 4
+#define TABLE64_TEST_LO 5
+#define TABLE64_TEST_HI 6
+
+/* default sapphire register test */
+static struct txgbe_reg_test reg_test_sapphire[] = {
+ { TXGBE_RDB_RFCL(0), 1, PATTERN_TEST, 0x8007FFF0, 0x8007FFF0 },
+ { TXGBE_RDB_RFCH(0), 1, PATTERN_TEST, 0x8007FFF0, 0x8007FFF0 },
+ { TXGBE_PSR_VLAN_CTL, 1, PATTERN_TEST, 0x00000000, 0x00000000 },
+ { TXGBE_PX_RR_BAL(0), 4, PATTERN_TEST, 0xFFFFFF80, 0xFFFFFF80 },
+ { TXGBE_PX_RR_BAH(0), 4, PATTERN_TEST, 0xFFFFFFFF, 0xFFFFFFFF },
+ { TXGBE_PX_RR_CFG(0), 4, WRITE_NO_TEST, 0, TXGBE_PX_RR_CFG_RR_EN },
+ { TXGBE_RDB_RFCH(0), 1, PATTERN_TEST, 0x8007FFF0, 0x8007FFF0 },
+ { TXGBE_RDB_RFCV(0), 1, PATTERN_TEST, 0xFFFFFFFF, 0xFFFFFFFF },
+ { TXGBE_PX_TR_BAL(0), 4, PATTERN_TEST, 0xFFFFFFFF, 0xFFFFFFFF },
+ { TXGBE_PX_TR_BAH(0), 4, PATTERN_TEST, 0xFFFFFFFF, 0xFFFFFFFF },
+ { TXGBE_RDB_PB_CTL, 1, SET_READ_TEST, 0x00000001, 0x00000001 },
+ { TXGBE_PSR_MC_TBL(0), 128, TABLE32_TEST, 0xFFFFFFFF, 0xFFFFFFFF },
+ { .reg = 0 }
+};
+
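+/*
+ * A pattern test writes each test pattern (masked by 'write') to the
+ * register, reads it back and compares against the pattern masked by
+ * both 'write' and 'mask'; the original register value is restored
+ * afterwards whether or not the comparison failed.
+ */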
+static bool reg_pattern_test(struct txgbe_adapter *adapter, u64 *data, int reg,
+ u32 mask, u32 write)
+{
+ u32 pat, val, before;
+ static const u32 test_pattern[] = {
+ 0x5A5A5A5A, 0xA5A5A5A5, 0x00000000, 0xFFFFFFFF
+ };
+
+ if (TXGBE_REMOVED(adapter->hw.hw_addr)) {
+ *data = 1;
+ return true;
+ }
+ for (pat = 0; pat < ARRAY_SIZE(test_pattern); pat++) {
+ before = rd32(&adapter->hw, reg);
+ wr32(&adapter->hw, reg, test_pattern[pat] & write);
+ val = rd32(&adapter->hw, reg);
+ if (val != (test_pattern[pat] & write & mask)) {
+ e_err(drv,
+ "pattern test reg %04X failed: got 0x%08X "
+ "expected 0x%08X\n",
+ reg, val, test_pattern[pat] & write & mask);
+ *data = reg;
+ wr32(&adapter->hw, reg, before);
+ return true;
+ }
+ wr32(&adapter->hw, reg, before);
+ }
+ return false;
+}
+
+static bool reg_set_and_check(struct txgbe_adapter *adapter, u64 *data, int reg,
+ u32 mask, u32 write)
+{
+ u32 val, before;
+
+ if (TXGBE_REMOVED(adapter->hw.hw_addr)) {
+ *data = 1;
+ return true;
+ }
+ before = rd32(&adapter->hw, reg);
+ wr32(&adapter->hw, reg, write & mask);
+ val = rd32(&adapter->hw, reg);
+ if ((write & mask) != (val & mask)) {
+ e_err(drv,
+ "set/check reg %04X test failed: got 0x%08X expected"
+ "0x%08X\n",
+ reg, (val & mask), (write & mask));
+ *data = reg;
+ wr32(&adapter->hw, reg, before);
+ return true;
+ }
+ wr32(&adapter->hw, reg, before);
+ return false;
+}
+
+static bool txgbe_reg_test(struct txgbe_adapter *adapter, u64 *data)
+{
+ struct txgbe_reg_test *test;
+ struct txgbe_hw *hw = &adapter->hw;
+ u32 i;
+
+ if (TXGBE_REMOVED(hw->hw_addr)) {
+ e_err(drv, "Adapter removed - register test blocked\n");
+ *data = 1;
+ return true;
+ }
+
+ test = reg_test_sapphire;
+
+ /*
+ * Perform the remainder of the register test, looping through
+ * the test table until we either fail or reach the null entry.
+ */
+ while (test->reg) {
+ for (i = 0; i < test->array_len; i++) {
+ bool b = false;
+
+ switch (test->test_type) {
+ case PATTERN_TEST:
+ b = reg_pattern_test(adapter, data,
+ test->reg + (i * 0x40),
+ test->mask,
+ test->write);
+ break;
+ case SET_READ_TEST:
+ b = reg_set_and_check(adapter, data,
+ test->reg + (i * 0x40),
+ test->mask,
+ test->write);
+ break;
+ case WRITE_NO_TEST:
+ wr32(hw, test->reg + (i * 0x40),
+ test->write);
+ break;
+ case TABLE32_TEST:
+ b = reg_pattern_test(adapter, data,
+ test->reg + (i * 4),
+ test->mask,
+ test->write);
+ break;
+ case TABLE64_TEST_LO:
+ b = reg_pattern_test(adapter, data,
+ test->reg + (i * 8),
+ test->mask,
+ test->write);
+ break;
+ case TABLE64_TEST_HI:
+ b = reg_pattern_test(adapter, data,
+ (test->reg + 4) + (i * 8),
+ test->mask,
+ test->write);
+ break;
+ }
+ if (b)
+ return true;
+ }
+ test++;
+ }
+
+ *data = 0;
+ return false;
+}
+
+static bool txgbe_eeprom_test(struct txgbe_adapter *adapter, u64 *data)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+
+ if (TCALL(hw, eeprom.ops.validate_checksum, NULL)) {
+ *data = 1;
+ return true;
+ } else {
+ *data = 0;
+ return false;
+ }
+}
+
+static irqreturn_t txgbe_test_intr(int __always_unused irq, void *data)
+{
+	struct net_device *netdev = (struct net_device *)data;
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+ u64 icr;
+
+ /* get misc interrupt, as cannot get ring interrupt status */
+ icr = txgbe_misc_isb(adapter, TXGBE_ISB_VEC1);
+ icr <<= 32;
+ icr |= txgbe_misc_isb(adapter, TXGBE_ISB_VEC0);
+
+ adapter->test_icr = icr;
+
+ return IRQ_HANDLED;
+}
+
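+/*
+ * Interrupt self-test: only the legacy INTx/MSI path is exercised
+ * (MSI-X setups return early without testing) and only cause bit 0 is
+ * toggled. The handler above latches the 64-bit ISB cause (VEC1:VEC0)
+ * into adapter->test_icr for the loop below to inspect.
+ */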
+static int txgbe_intr_test(struct txgbe_adapter *adapter, u64 *data)
+{
+ struct net_device *netdev = adapter->netdev;
+	u64 mask;
+	bool shared_int = true;
+	u32 i = 0;
+	u32 irq = adapter->pdev->irq;
+
+ if (TXGBE_REMOVED(adapter->hw.hw_addr)) {
+ *data = 1;
+ return -1;
+ }
+ *data = 0;
+
+ /* Hook up test interrupt handler just for this test */
+ if (adapter->msix_entries) {
+ /* NOTE: we don't test MSI-X interrupts here, yet */
+ return 0;
+ } else if (adapter->flags & TXGBE_FLAG_MSI_ENABLED) {
+ shared_int = false;
+ if (request_irq(irq, &txgbe_test_intr, 0, netdev->name,
+ netdev)) {
+ *data = 1;
+ return -1;
+ }
+ } else if (!request_irq(irq, &txgbe_test_intr, IRQF_PROBE_SHARED,
+ netdev->name, netdev)) {
+ shared_int = false;
+ } else if (request_irq(irq, &txgbe_test_intr, IRQF_SHARED,
+ netdev->name, netdev)) {
+ *data = 1;
+ return -1;
+ }
+ e_info(hw, "testing %s interrupt\n",
+ (shared_int ? "shared" : "unshared"));
+
+ /* Disable all the interrupts */
+ txgbe_irq_disable(adapter);
+ TXGBE_WRITE_FLUSH(&adapter->hw);
+ usleep_range(10000, 20000);
+
+ /* Test each interrupt */
+ for (; i < 1; i++) {
+ /* Interrupt to test */
+ mask = 1ULL << i;
+
+ if (!shared_int) {
+ /*
+ * Disable the interrupts to be reported in
+ * the cause register and then force the same
+ * interrupt and see if one gets posted. If
+ * an interrupt was posted to the bus, the
+ * test failed.
+ */
+ adapter->test_icr = 0;
+ txgbe_intr_disable(&adapter->hw, ~mask);
+ txgbe_intr_trigger(&adapter->hw, mask);
+ TXGBE_WRITE_FLUSH(&adapter->hw);
+ usleep_range(10000, 20000);
+
+ if (adapter->test_icr & mask) {
+ *data = 3;
+ break;
+ }
+ }
+
+ /*
+ * Enable the interrupt to be reported in the cause
+ * register and then force the same interrupt and see
+ * if one gets posted. If an interrupt was not posted
+ * to the bus, the test failed.
+ */
+ adapter->test_icr = 0;
+ txgbe_intr_disable(&adapter->hw, TXGBE_INTR_ALL);
+ txgbe_intr_trigger(&adapter->hw, mask);
+ TXGBE_WRITE_FLUSH(&adapter->hw);
+ usleep_range(10000, 20000);
+
+ if (!(adapter->test_icr & mask)) {
+ *data = 4;
+ break;
+ }
+ }
+
+ /* Disable all the interrupts */
+ txgbe_intr_disable(&adapter->hw, TXGBE_INTR_ALL);
+ TXGBE_WRITE_FLUSH(&adapter->hw);
+ usleep_range(10000, 20000);
+
+ /* Unhook test interrupt handler */
+ free_irq(irq, netdev);
+
+ return *data;
+}
+
+static void txgbe_free_desc_rings(struct txgbe_adapter *adapter)
+{
+ struct txgbe_ring *tx_ring = &adapter->test_tx_ring;
+ struct txgbe_ring *rx_ring = &adapter->test_rx_ring;
+ struct txgbe_hw *hw = &adapter->hw;
+
+ /* shut down the DMA engines now so they can be reinitialized later */
+
+ /* first Rx */
+ TCALL(hw, mac.ops.disable_rx);
+ txgbe_disable_rx_queue(adapter, rx_ring);
+
+ /* now Tx */
+ wr32(hw, TXGBE_PX_TR_CFG(tx_ring->reg_idx), 0);
+
+ wr32m(hw, TXGBE_TDM_CTL, TXGBE_TDM_CTL_TE, 0);
+
+ txgbe_reset(adapter);
+
+ txgbe_free_tx_resources(&adapter->test_tx_ring);
+ txgbe_free_rx_resources(&adapter->test_rx_ring);
+}
+
+static int txgbe_setup_desc_rings(struct txgbe_adapter *adapter)
+{
+ struct txgbe_ring *tx_ring = &adapter->test_tx_ring;
+ struct txgbe_ring *rx_ring = &adapter->test_rx_ring;
+ struct txgbe_hw *hw = &adapter->hw;
+ int ret_val;
+ int err;
+
+ TCALL(hw, mac.ops.setup_rxpba, 0, 0, PBA_STRATEGY_EQUAL);
+
+ /* Setup Tx descriptor ring and Tx buffers */
+ tx_ring->count = TXGBE_DEFAULT_TXD;
+ tx_ring->queue_index = 0;
+ tx_ring->dev = pci_dev_to_dev(adapter->pdev);
+ tx_ring->netdev = adapter->netdev;
+ tx_ring->reg_idx = adapter->tx_ring[0]->reg_idx;
+
+ err = txgbe_setup_tx_resources(tx_ring);
+ if (err)
+ return 1;
+
+ wr32m(&adapter->hw, TXGBE_TDM_CTL,
+ TXGBE_TDM_CTL_TE, TXGBE_TDM_CTL_TE);
+
+ txgbe_configure_tx_ring(adapter, tx_ring);
+
+ /* enable mac transmitter */
+ wr32m(hw, TXGBE_MAC_TX_CFG,
+ TXGBE_MAC_TX_CFG_TE | TXGBE_MAC_TX_CFG_SPEED_MASK,
+ TXGBE_MAC_TX_CFG_TE | TXGBE_MAC_TX_CFG_SPEED_10G);
+
+ /* Setup Rx Descriptor ring and Rx buffers */
+ rx_ring->count = TXGBE_DEFAULT_RXD;
+ rx_ring->queue_index = 0;
+ rx_ring->dev = pci_dev_to_dev(adapter->pdev);
+ rx_ring->netdev = adapter->netdev;
+ rx_ring->reg_idx = adapter->rx_ring[0]->reg_idx;
+
+ err = txgbe_setup_rx_resources(rx_ring);
+ if (err) {
+ ret_val = 4;
+ goto err_nomem;
+ }
+
+ TCALL(hw, mac.ops.disable_rx);
+
+ txgbe_configure_rx_ring(adapter, rx_ring);
+
+ TCALL(hw, mac.ops.enable_rx);
+
+ return 0;
+
+err_nomem:
+ txgbe_free_desc_rings(adapter);
+ return ret_val;
+}
+
+static int txgbe_setup_config(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ u32 reg_data;
+
+ /* Setup traffic loopback */
+ reg_data = rd32(hw, TXGBE_PSR_CTL);
+ reg_data |= TXGBE_PSR_CTL_BAM | TXGBE_PSR_CTL_UPE |
+ TXGBE_PSR_CTL_MPE | TXGBE_PSR_CTL_TPE;
+ wr32(hw, TXGBE_PSR_CTL, reg_data);
+
+ wr32(hw, TXGBE_RSC_CTL,
+ (rd32(hw, TXGBE_RSC_CTL) |
+ TXGBE_RSC_CTL_SAVE_MAC_ERR) & ~TXGBE_RSC_CTL_SECRX_DIS);
+
+ wr32(hw, TXGBE_RSC_LSEC_CTL, 0x4);
+
+ wr32(hw, TXGBE_PSR_VLAN_CTL,
+ rd32(hw, TXGBE_PSR_VLAN_CTL) &
+ ~TXGBE_PSR_VLAN_CTL_VFE);
+
+ wr32m(&adapter->hw, TXGBE_MAC_RX_CFG,
+ TXGBE_MAC_RX_CFG_LM, ~TXGBE_MAC_RX_CFG_LM);
+ wr32m(&adapter->hw, TXGBE_CFG_PORT_CTL,
+ TXGBE_CFG_PORT_CTL_FORCE_LKUP, ~TXGBE_CFG_PORT_CTL_FORCE_LKUP);
+
+ TXGBE_WRITE_FLUSH(hw);
+ usleep_range(10000, 20000);
+
+ return 0;
+}
+
+static int txgbe_setup_phy_loopback_test(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ u32 value;
+ /* setup phy loopback */
+ value = txgbe_rd32_epcs(hw, TXGBE_PHY_MISC_CTL0);
+ value |= TXGBE_PHY_MISC_CTL0_TX2RX_LB_EN_0 |
+ TXGBE_PHY_MISC_CTL0_TX2RX_LB_EN_3_1;
+
+ txgbe_wr32_epcs(hw, TXGBE_PHY_MISC_CTL0, value);
+
+ value = txgbe_rd32_epcs(hw, TXGBE_SR_PMA_MMD_CTL1);
+ txgbe_wr32_epcs(hw, TXGBE_SR_PMA_MMD_CTL1,
+ value | TXGBE_SR_PMA_MMD_CTL1_LB_EN);
+ return 0;
+}
+
+static void txgbe_phy_loopback_cleanup(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ u32 value;
+
+ value = txgbe_rd32_epcs(hw, TXGBE_PHY_MISC_CTL0);
+ value &= ~(TXGBE_PHY_MISC_CTL0_TX2RX_LB_EN_0 |
+ TXGBE_PHY_MISC_CTL0_TX2RX_LB_EN_3_1);
+
+ txgbe_wr32_epcs(hw, TXGBE_PHY_MISC_CTL0, value);
+ value = txgbe_rd32_epcs(hw, TXGBE_SR_PMA_MMD_CTL1);
+ txgbe_wr32_epcs(hw, TXGBE_SR_PMA_MMD_CTL1,
+ value & ~TXGBE_SR_PMA_MMD_CTL1_LB_EN);
+}
+
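+/*
+ * Loopback test frame layout: the buffer is filled with 0xFF, the back
+ * half is seeded with 0xAA, and marker bytes 0xBE/0xAF are placed at
+ * fixed offsets past the midpoint; txgbe_check_lbtest_frame() verifies
+ * those markers on the receive side.
+ */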
+static void txgbe_create_lbtest_frame(struct sk_buff *skb,
+ unsigned int frame_size)
+{
+ memset(skb->data, 0xFF, frame_size);
+ frame_size >>= 1;
+ memset(&skb->data[frame_size], 0xAA, frame_size / 2 - 1);
+ memset(&skb->data[frame_size + 10], 0xBE, 1);
+ memset(&skb->data[frame_size + 12], 0xAF, 1);
+}
+
+static bool txgbe_check_lbtest_frame(struct txgbe_rx_buffer *rx_buffer,
+ unsigned int frame_size)
+{
+ unsigned char *data;
+ bool match = true;
+
+ frame_size >>= 1;
+ data = kmap(rx_buffer->page) + rx_buffer->page_offset;
+
+ if (data[3] != 0xFF ||
+ data[frame_size + 10] != 0xBE ||
+ data[frame_size + 12] != 0xAF)
+ match = false;
+
+ kunmap(rx_buffer->page);
+ return match;
+}
+
+static u16 txgbe_clean_test_rings(struct txgbe_ring *rx_ring,
+ struct txgbe_ring *tx_ring,
+ unsigned int size)
+{
+ union txgbe_rx_desc *rx_desc;
+ struct txgbe_rx_buffer *rx_buffer;
+ struct txgbe_tx_buffer *tx_buffer;
+ const int bufsz = txgbe_rx_bufsz(rx_ring);
+ u16 rx_ntc, tx_ntc, count = 0;
+
+ /* initialize next to clean and descriptor values */
+ rx_ntc = rx_ring->next_to_clean;
+ tx_ntc = tx_ring->next_to_clean;
+ rx_desc = TXGBE_RX_DESC(rx_ring, rx_ntc);
+
+ while (txgbe_test_staterr(rx_desc, TXGBE_RXD_STAT_DD)) {
+ /* unmap buffer on Tx side */
+ tx_buffer = &tx_ring->tx_buffer_info[tx_ntc];
+ txgbe_unmap_and_free_tx_resource(tx_ring, tx_buffer);
+
+ /* check Rx buffer */
+ rx_buffer = &rx_ring->rx_buffer_info[rx_ntc];
+
+ /* sync Rx buffer for CPU read */
+ dma_sync_single_for_cpu(rx_ring->dev,
+ rx_buffer->page_dma,
+ bufsz,
+ DMA_FROM_DEVICE);
+
+ /* verify contents of skb */
+ if (txgbe_check_lbtest_frame(rx_buffer, size))
+ count++;
+
+ /* sync Rx buffer for device write */
+ dma_sync_single_for_device(rx_ring->dev,
+ rx_buffer->page_dma,
+ bufsz,
+ DMA_FROM_DEVICE);
+
+ /* increment Rx/Tx next to clean counters */
+ rx_ntc++;
+ if (rx_ntc == rx_ring->count)
+ rx_ntc = 0;
+ tx_ntc++;
+ if (tx_ntc == tx_ring->count)
+ tx_ntc = 0;
+
+ /* fetch next descriptor */
+ rx_desc = TXGBE_RX_DESC(rx_ring, rx_ntc);
+ }
+
+ /* re-map buffers to ring, store next to clean values */
+ txgbe_alloc_rx_buffers(rx_ring, count);
+ rx_ring->next_to_clean = rx_ntc;
+ tx_ring->next_to_clean = tx_ntc;
+
+ return count;
+}
+
+static int txgbe_run_loopback_test(struct txgbe_adapter *adapter)
+{
+ struct txgbe_ring *tx_ring = &adapter->test_tx_ring;
+ struct txgbe_ring *rx_ring = &adapter->test_rx_ring;
+ int i, j, lc, good_cnt, ret_val = 0;
+ unsigned int size = 1024;
+ netdev_tx_t tx_ret_val;
+ struct sk_buff *skb;
+ u32 flags_orig = adapter->flags;
+
+ /* DCB can modify the frames on Tx */
+ adapter->flags &= ~TXGBE_FLAG_DCB_ENABLED;
+
+ /* allocate test skb */
+ skb = alloc_skb(size, GFP_KERNEL);
+ if (!skb)
+ return 11;
+
+ /* place data into test skb */
+ txgbe_create_lbtest_frame(skb, size);
+ skb_put(skb, size);
+
+ /*
+ * Calculate the loop count based on the largest descriptor ring
+ * The idea is to wrap the largest ring a number of times using 64
+ * send/receive pairs during each loop
+ */
+
+ if (rx_ring->count <= tx_ring->count)
+ lc = ((tx_ring->count / 64) * 2) + 1;
+ else
+ lc = ((rx_ring->count / 64) * 2) + 1;
+
+ for (j = 0; j <= lc; j++) {
+ /* reset count of good packets */
+ good_cnt = 0;
+
+ /* place 64 packets on the transmit queue*/
+ for (i = 0; i < 64; i++) {
+ skb_get(skb);
+ tx_ret_val = txgbe_xmit_frame_ring(skb,
+ adapter,
+ tx_ring);
+ if (tx_ret_val == NETDEV_TX_OK)
+ good_cnt++;
+ }
+
+ if (good_cnt != 64) {
+ ret_val = 12;
+ break;
+ }
+
+ /* allow 200 milliseconds for packets to go from Tx to Rx */
+ msleep(200);
+
+ good_cnt = txgbe_clean_test_rings(rx_ring, tx_ring, size);
+		if (j == 0)
+			continue;
+		if (good_cnt != 64) {
+			ret_val = 13;
+			break;
+		}
+ }
+
+ /* free the original skb */
+ kfree_skb(skb);
+ adapter->flags = flags_orig;
+
+ return ret_val;
+}
+
+static int txgbe_loopback_test(struct txgbe_adapter *adapter, u64 *data)
+{
+ *data = txgbe_setup_desc_rings(adapter);
+ if (*data)
+ goto out;
+
+ *data = txgbe_setup_config(adapter);
+ if (*data)
+ goto err_loopback;
+
+ *data = txgbe_setup_phy_loopback_test(adapter);
+ if (*data)
+ goto err_loopback;
+ *data = txgbe_run_loopback_test(adapter);
+ if (*data)
+ e_info(hw, "phy loopback testing failed\n");
+ txgbe_phy_loopback_cleanup(adapter);
+
+err_loopback:
+ txgbe_free_desc_rings(adapter);
+out:
+ return *data;
+}
+
+static void txgbe_diag_test(struct net_device *netdev,
+ struct ethtool_test *eth_test, u64 *data)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+ bool if_running = netif_running(netdev);
+ struct txgbe_hw *hw = &adapter->hw;
+
+ if (TXGBE_REMOVED(hw->hw_addr)) {
+ e_err(hw, "Adapter removed - test blocked\n");
+ data[0] = 1;
+ data[1] = 1;
+ data[2] = 1;
+ data[3] = 1;
+ data[4] = 1;
+ eth_test->flags |= ETH_TEST_FL_FAILED;
+ return;
+ }
+
+ set_bit(__TXGBE_TESTING, &adapter->state);
+ if (eth_test->flags == ETH_TEST_FL_OFFLINE) {
+ if (adapter->flags & TXGBE_FLAG_SRIOV_ENABLED) {
+			int i;
+
+ for (i = 0; i < adapter->num_vfs; i++) {
+ if (adapter->vfinfo[i].clear_to_send) {
+ e_warn(drv, "Please take active VFS "
+ "offline and restart the "
+ "adapter before running NIC "
+ "diagnostics\n");
+ data[0] = 1;
+ data[1] = 1;
+ data[2] = 1;
+ data[3] = 1;
+ data[4] = 1;
+ eth_test->flags |= ETH_TEST_FL_FAILED;
+ clear_bit(__TXGBE_TESTING,
+ &adapter->state);
+ goto skip_ol_tests;
+ }
+ }
+ }
+
+ /* Offline tests */
+ e_info(hw, "offline testing starting\n");
+
+ /* Link test performed before hardware reset so autoneg doesn't
+ * interfere with test result */
+ if (txgbe_link_test(adapter, &data[4]))
+ eth_test->flags |= ETH_TEST_FL_FAILED;
+
+ if (if_running)
+ /* indicate we're in test mode */
+ txgbe_close(netdev);
+ else
+ txgbe_reset(adapter);
+
+ e_info(hw, "register testing starting\n");
+ if (txgbe_reg_test(adapter, &data[0]))
+ eth_test->flags |= ETH_TEST_FL_FAILED;
+
+ txgbe_reset(adapter);
+ e_info(hw, "eeprom testing starting\n");
+ if (txgbe_eeprom_test(adapter, &data[1]))
+ eth_test->flags |= ETH_TEST_FL_FAILED;
+
+ txgbe_reset(adapter);
+ e_info(hw, "interrupt testing starting\n");
+ if (txgbe_intr_test(adapter, &data[2]))
+ eth_test->flags |= ETH_TEST_FL_FAILED;
+
+ if (!(((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP) ||
+ ((hw->subsystem_device_id & TXGBE_WOL_MASK) == TXGBE_WOL_SUP))) {
+ /* If SRIOV or VMDq is enabled then skip MAC
+ * loopback diagnostic. */
+ if (adapter->flags & (TXGBE_FLAG_SRIOV_ENABLED |
+ TXGBE_FLAG_VMDQ_ENABLED)) {
+ e_info(hw, "skip MAC loopback diagnostic in VT mode\n");
+ data[3] = 0;
+ goto skip_loopback;
+ }
+
+ txgbe_reset(adapter);
+ e_info(hw, "loopback testing starting\n");
+ if (txgbe_loopback_test(adapter, &data[3]))
+ eth_test->flags |= ETH_TEST_FL_FAILED;
+		} else {
+			/* loopback is skipped when NC-SI/WoL firmware is
+			 * active, so report the slot as passed */
+			data[3] = 0;
+		}
+
+skip_loopback:
+ txgbe_reset(adapter);
+
+ /* clear testing bit and return adapter to previous state */
+ clear_bit(__TXGBE_TESTING, &adapter->state);
+ if (if_running)
+ txgbe_open(netdev);
+ else
+ TCALL(hw, mac.ops.disable_tx_laser);
+ } else {
+ e_info(hw, "online testing starting\n");
+
+ /* Online tests */
+ if (txgbe_link_test(adapter, &data[4]))
+ eth_test->flags |= ETH_TEST_FL_FAILED;
+
+ /* Offline tests aren't run; pass by default */
+ data[0] = 0;
+ data[1] = 0;
+ data[2] = 0;
+ data[3] = 0;
+
+ clear_bit(__TXGBE_TESTING, &adapter->state);
+ }
+
+skip_ol_tests:
+ msleep_interruptible(4 * 1000);
+}
+
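+/*
+ * Wake-on-LAN plumbing for "ethtool <if>" / "ethtool -s <if> wol umbg".
+ * Unicast, multicast, broadcast and magic-packet wake are supported,
+ * but only on subsystems whose device id advertises WoL support.
+ */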
+static int txgbe_wol_exclusion(struct txgbe_adapter *adapter,
+ struct ethtool_wolinfo *wol)
+{
+ int retval = 0;
+
+ /* WOL not supported for all devices */
+ if (!txgbe_wol_supported(adapter)) {
+ retval = 1;
+ wol->supported = 0;
+ }
+
+ return retval;
+}
+
+static void txgbe_get_wol(struct net_device *netdev,
+ struct ethtool_wolinfo *wol)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+ struct txgbe_hw *hw = &adapter->hw;
+
+ wol->supported = WAKE_UCAST | WAKE_MCAST |
+ WAKE_BCAST | WAKE_MAGIC;
+ wol->wolopts = 0;
+
+ if (txgbe_wol_exclusion(adapter, wol) ||
+ !device_can_wakeup(pci_dev_to_dev(adapter->pdev)))
+ return;
+ if ((hw->subsystem_device_id & TXGBE_WOL_MASK) != TXGBE_WOL_SUP)
+ return;
+
+ if (adapter->wol & TXGBE_PSR_WKUP_CTL_EX)
+ wol->wolopts |= WAKE_UCAST;
+ if (adapter->wol & TXGBE_PSR_WKUP_CTL_MC)
+ wol->wolopts |= WAKE_MCAST;
+ if (adapter->wol & TXGBE_PSR_WKUP_CTL_BC)
+ wol->wolopts |= WAKE_BCAST;
+ if (adapter->wol & TXGBE_PSR_WKUP_CTL_MAG)
+ wol->wolopts |= WAKE_MAGIC;
+}
+
+static int txgbe_set_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+ struct txgbe_hw *hw = &adapter->hw;
+
+ if (wol->wolopts & (WAKE_PHY | WAKE_ARP | WAKE_MAGICSECURE))
+ return -EOPNOTSUPP;
+
+ if (txgbe_wol_exclusion(adapter, wol))
+ return wol->wolopts ? -EOPNOTSUPP : 0;
+ if ((hw->subsystem_device_id & TXGBE_WOL_MASK) != TXGBE_WOL_SUP)
+ return -EOPNOTSUPP;
+
+ adapter->wol = 0;
+
+ if (wol->wolopts & WAKE_UCAST)
+ adapter->wol |= TXGBE_PSR_WKUP_CTL_EX;
+ if (wol->wolopts & WAKE_MCAST)
+ adapter->wol |= TXGBE_PSR_WKUP_CTL_MC;
+ if (wol->wolopts & WAKE_BCAST)
+ adapter->wol |= TXGBE_PSR_WKUP_CTL_BC;
+ if (wol->wolopts & WAKE_MAGIC)
+ adapter->wol |= TXGBE_PSR_WKUP_CTL_MAG;
+
+ hw->wol_enabled = !!(adapter->wol);
+ wr32(hw, TXGBE_PSR_WKUP_CTL, adapter->wol);
+
+ device_set_wakeup_enable(pci_dev_to_dev(adapter->pdev), adapter->wol);
+
+ return 0;
+}
+
+static int txgbe_nway_reset(struct net_device *netdev)
+{
+	struct txgbe_adapter *adapter = netdev_priv(netdev);
+
+	if (netif_running(netdev))
+ txgbe_reinit_locked(adapter);
+
+ return 0;
+}
+
+static int txgbe_set_phys_id(struct net_device *netdev,
+ enum ethtool_phys_id_state state)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+ struct txgbe_hw *hw = &adapter->hw;
+
+ switch (state) {
+ case ETHTOOL_ID_ACTIVE:
+ adapter->led_reg = rd32(hw, TXGBE_CFG_LED_CTL);
+ return 2;
+
+ case ETHTOOL_ID_ON:
+ TCALL(hw, mac.ops.led_on, TXGBE_LED_LINK_UP);
+ break;
+
+ case ETHTOOL_ID_OFF:
+ TCALL(hw, mac.ops.led_off, TXGBE_LED_LINK_UP);
+ break;
+
+ case ETHTOOL_ID_INACTIVE:
+ /* Restore LED settings */
+ wr32(&adapter->hw, TXGBE_CFG_LED_CTL,
+ adapter->led_reg);
+ break;
+ }
+
+ return 0;
+}
+
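+/*
+ * Interrupt coalescing ("ethtool -c/-C <if>"). The itr settings use a
+ * packed encoding: 0 disables throttling, 1 selects the driver default
+ * rate (20K ITR Rx / 12K ITR Tx), and larger values store the usec
+ * count shifted left by two, hence the << 2 / >> 2 conversions below.
+ */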
+static int txgbe_get_coalesce(struct net_device *netdev,
+ struct ethtool_coalesce *ec)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+
+ ec->tx_max_coalesced_frames_irq = adapter->tx_work_limit;
+ /* only valid if in constant ITR mode */
+ if (adapter->rx_itr_setting <= 1)
+ ec->rx_coalesce_usecs = adapter->rx_itr_setting;
+ else
+ ec->rx_coalesce_usecs = adapter->rx_itr_setting >> 2;
+
+ /* if in mixed tx/rx queues per vector mode, report only rx settings */
+ if (adapter->q_vector[0]->tx.count && adapter->q_vector[0]->rx.count)
+ return 0;
+
+ /* only valid if in constant ITR mode */
+ if (adapter->tx_itr_setting <= 1)
+ ec->tx_coalesce_usecs = adapter->tx_itr_setting;
+ else
+ ec->tx_coalesce_usecs = adapter->tx_itr_setting >> 2;
+
+ return 0;
+}
+
+/*
+ * this function must be called before setting the new value of
+ * rx_itr_setting
+ */
+static bool txgbe_update_rsc(struct txgbe_adapter *adapter)
+{
+ struct net_device *netdev = adapter->netdev;
+
+ /* nothing to do if LRO or RSC are not enabled */
+ if (!(adapter->flags2 & TXGBE_FLAG2_RSC_CAPABLE) ||
+ !(netdev->features & NETIF_F_LRO))
+ return false;
+
+ /* check the feature flag value and enable RSC if necessary */
+ if (adapter->rx_itr_setting == 1 ||
+ adapter->rx_itr_setting > TXGBE_MIN_RSC_ITR) {
+ if (!(adapter->flags2 & TXGBE_FLAG2_RSC_ENABLED)) {
+ adapter->flags2 |= TXGBE_FLAG2_RSC_ENABLED;
+ e_info(probe, "rx-usecs value high enough "
+ "to re-enable RSC\n");
+ return true;
+ }
+ /* if interrupt rate is too high then disable RSC */
+ } else if (adapter->flags2 & TXGBE_FLAG2_RSC_ENABLED) {
+ adapter->flags2 &= ~TXGBE_FLAG2_RSC_ENABLED;
+ e_info(probe, "rx-usecs set too low, disabling RSC\n");
+ return true;
+ }
+ return false;
+}
+
+static int txgbe_set_coalesce(struct net_device *netdev,
+ struct ethtool_coalesce *ec)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+ struct txgbe_hw *hw = &adapter->hw;
+ struct txgbe_q_vector *q_vector;
+ int i;
+ u16 tx_itr_param, rx_itr_param;
+ u16 tx_itr_prev;
+ bool need_reset = false;
+
+ if (adapter->q_vector[0]->tx.count && adapter->q_vector[0]->rx.count) {
+ /* reject Tx specific changes in case of mixed RxTx vectors */
+ if (ec->tx_coalesce_usecs)
+ return -EINVAL;
+ tx_itr_prev = adapter->rx_itr_setting;
+ } else {
+ tx_itr_prev = adapter->tx_itr_setting;
+ }
+
+ if (ec->tx_max_coalesced_frames_irq)
+ adapter->tx_work_limit = ec->tx_max_coalesced_frames_irq;
+
+ if ((ec->rx_coalesce_usecs > (TXGBE_MAX_EITR >> 2)) ||
+ (ec->tx_coalesce_usecs > (TXGBE_MAX_EITR >> 2)))
+ return -EINVAL;
+
+ if (ec->rx_coalesce_usecs > 1)
+ adapter->rx_itr_setting = ec->rx_coalesce_usecs << 2;
+ else
+ adapter->rx_itr_setting = ec->rx_coalesce_usecs;
+
+ if (adapter->rx_itr_setting == 1)
+ rx_itr_param = TXGBE_20K_ITR;
+ else
+ rx_itr_param = adapter->rx_itr_setting;
+
+ if (ec->tx_coalesce_usecs > 1)
+ adapter->tx_itr_setting = ec->tx_coalesce_usecs << 2;
+ else
+ adapter->tx_itr_setting = ec->tx_coalesce_usecs;
+
+ if (adapter->tx_itr_setting == 1)
+ tx_itr_param = TXGBE_12K_ITR;
+ else
+ tx_itr_param = adapter->tx_itr_setting;
+
+ /* mixed Rx/Tx */
+ if (adapter->q_vector[0]->tx.count && adapter->q_vector[0]->rx.count)
+ adapter->tx_itr_setting = adapter->rx_itr_setting;
+
+ /* detect ITR changes that require update of TXDCTL.WTHRESH */
+ if ((adapter->tx_itr_setting != 1) &&
+ (adapter->tx_itr_setting < TXGBE_100K_ITR)) {
+ if ((tx_itr_prev == 1) ||
+ (tx_itr_prev >= TXGBE_100K_ITR))
+ need_reset = true;
+ } else {
+ if ((tx_itr_prev != 1) &&
+ (tx_itr_prev < TXGBE_100K_ITR))
+ need_reset = true;
+ }
+
+ /* check the old value and enable RSC if necessary */
+ need_reset |= txgbe_update_rsc(adapter);
+
+ if (adapter->hw.mac.dmac_config.watchdog_timer &&
+ (!adapter->rx_itr_setting && !adapter->tx_itr_setting)) {
+ e_info(probe,
+ "Disabling DMA coalescing because interrupt throttling "
+ "is disabled\n");
+ adapter->hw.mac.dmac_config.watchdog_timer = 0;
+ TCALL(hw, mac.ops.dmac_config);
+ }
+
+ for (i = 0; i < adapter->num_q_vectors; i++) {
+ q_vector = adapter->q_vector[i];
+ q_vector->tx.work_limit = adapter->tx_work_limit;
+ q_vector->rx.work_limit = adapter->rx_work_limit;
+ if (q_vector->tx.count && !q_vector->rx.count)
+ /* tx only */
+ q_vector->itr = tx_itr_param;
+ else
+ /* rx only or mixed */
+ q_vector->itr = rx_itr_param;
+ txgbe_write_eitr(q_vector);
+ }
+
+ /*
+ * do reset here at the end to make sure EITR==0 case is handled
+ * correctly w.r.t stopping tx, and changing TXDCTL.WTHRESH settings
+ * also locks in RSC enable/disable which requires reset
+ */
+ if (need_reset)
+ txgbe_do_reset(netdev);
+
+ return 0;
+}
+
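+/*
+ * Flow Director (ntuple) rule handling, reached via "ethtool -n <if>"
+ * and "ethtool -N <if> flow-type ...". Rules live in a software hlist
+ * sorted by sw_idx, so lookups and inserts walk to the first entry at
+ * or beyond the requested location.
+ */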
+static int txgbe_get_ethtool_fdir_entry(struct txgbe_adapter *adapter,
+ struct ethtool_rxnfc *cmd)
+{
+ union txgbe_atr_input *mask = &adapter->fdir_mask;
+ struct ethtool_rx_flow_spec *fsp =
+ (struct ethtool_rx_flow_spec *)&cmd->fs;
+ struct hlist_node *node;
+ struct txgbe_fdir_filter *rule = NULL;
+
+ /* report total rule count */
+ cmd->data = (1024 << adapter->fdir_pballoc) - 2;
+
+ hlist_for_each_entry_safe(rule, node,
+ &adapter->fdir_filter_list, fdir_node) {
+ if (fsp->location <= rule->sw_idx)
+ break;
+ }
+
+ if (!rule || fsp->location != rule->sw_idx)
+ return -EINVAL;
+
+ /* fill out the flow spec entry */
+
+ /* set flow type field */
+ switch (rule->filter.formatted.flow_type) {
+ case TXGBE_ATR_FLOW_TYPE_TCPV4:
+ fsp->flow_type = TCP_V4_FLOW;
+ break;
+ case TXGBE_ATR_FLOW_TYPE_UDPV4:
+ fsp->flow_type = UDP_V4_FLOW;
+ break;
+ case TXGBE_ATR_FLOW_TYPE_SCTPV4:
+ fsp->flow_type = SCTP_V4_FLOW;
+ break;
+ case TXGBE_ATR_FLOW_TYPE_IPV4:
+ fsp->flow_type = IP_USER_FLOW;
+ fsp->h_u.usr_ip4_spec.ip_ver = ETH_RX_NFC_IP4;
+ fsp->h_u.usr_ip4_spec.proto = 0;
+ fsp->m_u.usr_ip4_spec.proto = 0;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ fsp->h_u.tcp_ip4_spec.psrc = rule->filter.formatted.src_port;
+ fsp->m_u.tcp_ip4_spec.psrc = mask->formatted.src_port;
+ fsp->h_u.tcp_ip4_spec.pdst = rule->filter.formatted.dst_port;
+ fsp->m_u.tcp_ip4_spec.pdst = mask->formatted.dst_port;
+ fsp->h_u.tcp_ip4_spec.ip4src = rule->filter.formatted.src_ip[0];
+ fsp->m_u.tcp_ip4_spec.ip4src = mask->formatted.src_ip[0];
+ fsp->h_u.tcp_ip4_spec.ip4dst = rule->filter.formatted.dst_ip[0];
+ fsp->m_u.tcp_ip4_spec.ip4dst = mask->formatted.dst_ip[0];
+ fsp->h_ext.vlan_etype = rule->filter.formatted.flex_bytes;
+ fsp->m_ext.vlan_etype = mask->formatted.flex_bytes;
+ fsp->h_ext.data[1] = htonl(rule->filter.formatted.vm_pool);
+ fsp->m_ext.data[1] = htonl(mask->formatted.vm_pool);
+ fsp->flow_type |= FLOW_EXT;
+
+ /* record action */
+ if (rule->action == TXGBE_RDB_FDIR_DROP_QUEUE)
+ fsp->ring_cookie = RX_CLS_FLOW_DISC;
+ else
+ fsp->ring_cookie = rule->action;
+
+ return 0;
+}
+
+static int txgbe_get_ethtool_fdir_all(struct txgbe_adapter *adapter,
+ struct ethtool_rxnfc *cmd,
+ u32 *rule_locs)
+{
+ struct hlist_node *node;
+ struct txgbe_fdir_filter *rule;
+ int cnt = 0;
+
+ /* report total rule count */
+ cmd->data = (1024 << adapter->fdir_pballoc) - 2;
+
+ hlist_for_each_entry_safe(rule, node,
+ &adapter->fdir_filter_list, fdir_node) {
+ if (cnt == cmd->rule_cnt)
+ return -EMSGSIZE;
+ rule_locs[cnt] = rule->sw_idx;
+ cnt++;
+ }
+
+ cmd->rule_cnt = cnt;
+
+ return 0;
+}
+
+static int txgbe_get_rss_hash_opts(struct txgbe_adapter *adapter,
+ struct ethtool_rxnfc *cmd)
+{
+ cmd->data = 0;
+
+ /* Report default options for RSS on txgbe */
+ switch (cmd->flow_type) {
+ case TCP_V4_FLOW:
+ cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
+ /* fall through */
+ case UDP_V4_FLOW:
+ if (adapter->flags2 & TXGBE_FLAG2_RSS_FIELD_IPV4_UDP)
+ cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
+ /* fall through */
+ case SCTP_V4_FLOW:
+ case AH_ESP_V4_FLOW:
+ case AH_V4_FLOW:
+ case ESP_V4_FLOW:
+ case IPV4_FLOW:
+ cmd->data |= RXH_IP_SRC | RXH_IP_DST;
+ break;
+ case TCP_V6_FLOW:
+ cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
+ /* fall through */
+ case UDP_V6_FLOW:
+ if (adapter->flags2 & TXGBE_FLAG2_RSS_FIELD_IPV6_UDP)
+ cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
+ /* fall through */
+ case SCTP_V6_FLOW:
+ case AH_ESP_V6_FLOW:
+ case AH_V6_FLOW:
+ case ESP_V6_FLOW:
+ case IPV6_FLOW:
+ cmd->data |= RXH_IP_SRC | RXH_IP_DST;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int txgbe_get_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd,
+ u32 *rule_locs)
+{
+ struct txgbe_adapter *adapter = netdev_priv(dev);
+ int ret = -EOPNOTSUPP;
+
+ switch (cmd->cmd) {
+ case ETHTOOL_GRXRINGS:
+ cmd->data = adapter->num_rx_queues;
+ ret = 0;
+ break;
+ case ETHTOOL_GRXCLSRLCNT:
+ cmd->rule_cnt = adapter->fdir_filter_count;
+ ret = 0;
+ break;
+ case ETHTOOL_GRXCLSRULE:
+ ret = txgbe_get_ethtool_fdir_entry(adapter, cmd);
+ break;
+ case ETHTOOL_GRXCLSRLALL:
+ ret = txgbe_get_ethtool_fdir_all(adapter, cmd,
+ (u32 *)rule_locs);
+ break;
+ case ETHTOOL_GRXFH:
+ ret = txgbe_get_rss_hash_opts(adapter, cmd);
+ break;
+ default:
+ break;
+ }
+
+ return ret;
+}
+
+static int txgbe_update_ethtool_fdir_entry(struct txgbe_adapter *adapter,
+ struct txgbe_fdir_filter *input,
+ u16 sw_idx)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ struct hlist_node *node, *parent;
+ struct txgbe_fdir_filter *rule;
+ bool deleted = false;
+ s32 err;
+
+ parent = NULL;
+ rule = NULL;
+
+ hlist_for_each_entry_safe(rule, node,
+ &adapter->fdir_filter_list, fdir_node) {
+ /* hash found, or no matching entry */
+ if (rule->sw_idx >= sw_idx)
+ break;
+ parent = node;
+ }
+
+ /* if there is an old rule occupying our place remove it */
+ if (rule && (rule->sw_idx == sw_idx)) {
+ /* hardware filters are only configured when interface is up,
+ * and we should not issue filter commands while the interface
+ * is down
+ */
+ if (netif_running(adapter->netdev) &&
+ (!input || (rule->filter.formatted.bkt_hash !=
+ input->filter.formatted.bkt_hash))) {
+ err = txgbe_fdir_erase_perfect_filter(hw,
+ &rule->filter,
+ sw_idx);
+ if (err)
+ return -EINVAL;
+ }
+
+ hlist_del(&rule->fdir_node);
+ kfree(rule);
+ adapter->fdir_filter_count--;
+ deleted = true;
+ }
+
+ /* If we weren't given an input, then this was a request to delete a
+ * filter. We should return -EINVAL if the filter wasn't found, but
+ * return 0 if the rule was successfully deleted.
+ */
+ if (!input)
+ return deleted ? 0 : -EINVAL;
+
+ /* initialize node and set software index */
+ INIT_HLIST_NODE(&input->fdir_node);
+
+ /* add filter to the list */
+ if (parent)
+ hlist_add_behind(&input->fdir_node, parent);
+ else
+ hlist_add_head(&input->fdir_node,
+ &adapter->fdir_filter_list);
+
+ /* update counts */
+ adapter->fdir_filter_count++;
+
+ return 0;
+}
+
+static int txgbe_flowspec_to_flow_type(struct ethtool_rx_flow_spec *fsp,
+ u8 *flow_type)
+{
+ switch (fsp->flow_type & ~FLOW_EXT) {
+ case TCP_V4_FLOW:
+ *flow_type = TXGBE_ATR_FLOW_TYPE_TCPV4;
+ break;
+ case UDP_V4_FLOW:
+ *flow_type = TXGBE_ATR_FLOW_TYPE_UDPV4;
+ break;
+ case SCTP_V4_FLOW:
+ *flow_type = TXGBE_ATR_FLOW_TYPE_SCTPV4;
+ break;
+ case IP_USER_FLOW:
+ switch (fsp->h_u.usr_ip4_spec.proto) {
+ case IPPROTO_TCP:
+ *flow_type = TXGBE_ATR_FLOW_TYPE_TCPV4;
+ break;
+ case IPPROTO_UDP:
+ *flow_type = TXGBE_ATR_FLOW_TYPE_UDPV4;
+ break;
+ case IPPROTO_SCTP:
+ *flow_type = TXGBE_ATR_FLOW_TYPE_SCTPV4;
+ break;
+ case 0:
+ if (!fsp->m_u.usr_ip4_spec.proto) {
+ *flow_type = TXGBE_ATR_FLOW_TYPE_IPV4;
+ break;
+ }
+ /* fall through */
+ default:
+ return 0;
+ }
+ break;
+ default:
+ return 0;
+ }
+
+ return 1;
+}
+
+static int txgbe_add_ethtool_fdir_entry(struct txgbe_adapter *adapter,
+ struct ethtool_rxnfc *cmd)
+{
+ struct ethtool_rx_flow_spec *fsp =
+ (struct ethtool_rx_flow_spec *)&cmd->fs;
+ struct txgbe_hw *hw = &adapter->hw;
+ struct txgbe_fdir_filter *input;
+ union txgbe_atr_input mask;
+	int err = 0;
+ u16 ptype = 0;
+
+ if (!(adapter->flags & TXGBE_FLAG_FDIR_PERFECT_CAPABLE))
+ return -EOPNOTSUPP;
+
+ /*
+ * Don't allow programming if the action is a queue greater than
+ * the number of online Rx queues.
+ */
+ if ((fsp->ring_cookie != RX_CLS_FLOW_DISC) &&
+ (fsp->ring_cookie >= adapter->num_rx_queues))
+ return -EINVAL;
+
+ /* Don't allow indexes to exist outside of available space */
+ if (fsp->location >= ((1024 << adapter->fdir_pballoc) - 2)) {
+ e_err(drv, "Location out of range\n");
+ return -EINVAL;
+ }
+
+ input = kzalloc(sizeof(*input), GFP_ATOMIC);
+ if (!input)
+ return -ENOMEM;
+
+ memset(&mask, 0, sizeof(union txgbe_atr_input));
+
+ /* set SW index */
+ input->sw_idx = fsp->location;
+
+ /* record flow type */
+ if (!txgbe_flowspec_to_flow_type(fsp,
+ &input->filter.formatted.flow_type)) {
+ e_err(drv, "Unrecognized flow type\n");
+ goto err_out;
+ }
+
+ mask.formatted.flow_type = TXGBE_ATR_L4TYPE_IPV6_MASK |
+ TXGBE_ATR_L4TYPE_MASK;
+
+ if (input->filter.formatted.flow_type == TXGBE_ATR_FLOW_TYPE_IPV4)
+ mask.formatted.flow_type &= TXGBE_ATR_L4TYPE_IPV6_MASK;
+
+ /* Copy input into formatted structures */
+ input->filter.formatted.src_ip[0] = fsp->h_u.tcp_ip4_spec.ip4src;
+ mask.formatted.src_ip[0] = fsp->m_u.tcp_ip4_spec.ip4src;
+ input->filter.formatted.dst_ip[0] = fsp->h_u.tcp_ip4_spec.ip4dst;
+ mask.formatted.dst_ip[0] = fsp->m_u.tcp_ip4_spec.ip4dst;
+ input->filter.formatted.src_port = fsp->h_u.tcp_ip4_spec.psrc;
+ mask.formatted.src_port = fsp->m_u.tcp_ip4_spec.psrc;
+ input->filter.formatted.dst_port = fsp->h_u.tcp_ip4_spec.pdst;
+ mask.formatted.dst_port = fsp->m_u.tcp_ip4_spec.pdst;
+
+ if (fsp->flow_type & FLOW_EXT) {
+ input->filter.formatted.vm_pool =
+ (unsigned char)ntohl(fsp->h_ext.data[1]);
+ mask.formatted.vm_pool =
+ (unsigned char)ntohl(fsp->m_ext.data[1]);
+ input->filter.formatted.flex_bytes =
+ fsp->h_ext.vlan_etype;
+ mask.formatted.flex_bytes = fsp->m_ext.vlan_etype;
+#if 0
+ /* need fix */
+ input->filter.formatted.tunnel_type =
+ (unsigned char)ntohl(fsp->h_ext.data[0]);
+ mask.formatted.tunnel_type =
+ (unsigned char)ntohl(fsp->m_ext.data[0]);
+#endif
+ }
+
+ switch (input->filter.formatted.flow_type) {
+ case TXGBE_ATR_FLOW_TYPE_TCPV4:
+ ptype = TXGBE_PTYPE_L2_IPV4_TCP;
+ break;
+ case TXGBE_ATR_FLOW_TYPE_UDPV4:
+ ptype = TXGBE_PTYPE_L2_IPV4_UDP;
+ break;
+ case TXGBE_ATR_FLOW_TYPE_SCTPV4:
+ ptype = TXGBE_PTYPE_L2_IPV4_SCTP;
+ break;
+ case TXGBE_ATR_FLOW_TYPE_IPV4:
+ ptype = TXGBE_PTYPE_L2_IPV4;
+ break;
+ case TXGBE_ATR_FLOW_TYPE_TCPV6:
+ ptype = TXGBE_PTYPE_L2_IPV6_TCP;
+ break;
+ case TXGBE_ATR_FLOW_TYPE_UDPV6:
+ ptype = TXGBE_PTYPE_L2_IPV6_UDP;
+ break;
+ case TXGBE_ATR_FLOW_TYPE_SCTPV6:
+ ptype = TXGBE_PTYPE_L2_IPV6_SCTP;
+ break;
+ case TXGBE_ATR_FLOW_TYPE_IPV6:
+ ptype = TXGBE_PTYPE_L2_IPV6;
+ break;
+ default:
+ break;
+ }
+
+ input->filter.formatted.vlan_id = htons(ptype);
+ if (mask.formatted.flow_type & TXGBE_ATR_L4TYPE_MASK)
+ mask.formatted.vlan_id = 0xFFFF;
+ else
+ mask.formatted.vlan_id = htons(0xFFF8);
+
+ /* determine if we need to drop or route the packet */
+ if (fsp->ring_cookie == RX_CLS_FLOW_DISC)
+ input->action = TXGBE_RDB_FDIR_DROP_QUEUE;
+ else
+ input->action = fsp->ring_cookie;
+
+ spin_lock(&adapter->fdir_perfect_lock);
+
+ if (hlist_empty(&adapter->fdir_filter_list)) {
+ /* save mask and program input mask into HW */
+ memcpy(&adapter->fdir_mask, &mask, sizeof(mask));
+ err = txgbe_fdir_set_input_mask(hw, &mask,
+ adapter->cloud_mode);
+ if (err) {
+ e_err(drv, "Error writing mask\n");
+ goto err_out_w_lock;
+ }
+ } else if (memcmp(&adapter->fdir_mask, &mask, sizeof(mask))) {
+ e_err(drv, "Hardware only supports one mask per port. To change"
+ "the mask you must first delete all the rules.\n");
+ goto err_out_w_lock;
+ }
+
+ /* apply mask and compute/store hash */
+ txgbe_atr_compute_perfect_hash(&input->filter, &mask);
+
+ /* only program filters to hardware if the net device is running, as
+ * we store the filters in the Rx buffer which is not allocated when
+ * the device is down
+ */
+ if (netif_running(adapter->netdev)) {
+ err = txgbe_fdir_write_perfect_filter(hw,
+ &input->filter, input->sw_idx,
+ (input->action == TXGBE_RDB_FDIR_DROP_QUEUE) ?
+ TXGBE_RDB_FDIR_DROP_QUEUE :
+ adapter->rx_ring[input->action]->reg_idx,
+ adapter->cloud_mode);
+ if (err)
+ goto err_out_w_lock;
+ }
+
+ txgbe_update_ethtool_fdir_entry(adapter, input, input->sw_idx);
+
+ spin_unlock(&adapter->fdir_perfect_lock);
+
+ return err;
+err_out_w_lock:
+ spin_unlock(&adapter->fdir_perfect_lock);
+err_out:
+ kfree(input);
+ return -EINVAL;
+}
+
+static int txgbe_del_ethtool_fdir_entry(struct txgbe_adapter *adapter,
+ struct ethtool_rxnfc *cmd)
+{
+ struct ethtool_rx_flow_spec *fsp =
+ (struct ethtool_rx_flow_spec *)&cmd->fs;
+ int err;
+
+ spin_lock(&adapter->fdir_perfect_lock);
+ err = txgbe_update_ethtool_fdir_entry(adapter, NULL, fsp->location);
+ spin_unlock(&adapter->fdir_perfect_lock);
+
+ return err;
+}
+
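+/*
+ * RSS hash field configuration, e.g. "ethtool -N <if> rx-flow-hash
+ * udp4 sdfn": src/dst IP hashing is always on for supported flow
+ * types, L4 port hashing is fixed on for TCP and only toggleable for
+ * UDP.
+ */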
+#define UDP_RSS_FLAGS (TXGBE_FLAG2_RSS_FIELD_IPV4_UDP | \
+ TXGBE_FLAG2_RSS_FIELD_IPV6_UDP)
+static int txgbe_set_rss_hash_opt(struct txgbe_adapter *adapter,
+ struct ethtool_rxnfc *nfc)
+{
+ u32 flags2 = adapter->flags2;
+
+ /*
+ * RSS does not support anything other than hashing
+ * to queues on src and dst IPs and ports
+ */
+ if (nfc->data & ~(RXH_IP_SRC | RXH_IP_DST |
+ RXH_L4_B_0_1 | RXH_L4_B_2_3))
+ return -EINVAL;
+
+ switch (nfc->flow_type) {
+ case TCP_V4_FLOW:
+ case TCP_V6_FLOW:
+ if (!(nfc->data & RXH_IP_SRC) ||
+ !(nfc->data & RXH_IP_DST) ||
+ !(nfc->data & RXH_L4_B_0_1) ||
+ !(nfc->data & RXH_L4_B_2_3))
+ return -EINVAL;
+ break;
+ case UDP_V4_FLOW:
+ if (!(nfc->data & RXH_IP_SRC) ||
+ !(nfc->data & RXH_IP_DST))
+ return -EINVAL;
+ switch (nfc->data & (RXH_L4_B_0_1 | RXH_L4_B_2_3)) {
+ case 0:
+ flags2 &= ~TXGBE_FLAG2_RSS_FIELD_IPV4_UDP;
+ break;
+ case (RXH_L4_B_0_1 | RXH_L4_B_2_3):
+ flags2 |= TXGBE_FLAG2_RSS_FIELD_IPV4_UDP;
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ case UDP_V6_FLOW:
+ if (!(nfc->data & RXH_IP_SRC) ||
+ !(nfc->data & RXH_IP_DST))
+ return -EINVAL;
+ switch (nfc->data & (RXH_L4_B_0_1 | RXH_L4_B_2_3)) {
+ case 0:
+ flags2 &= ~TXGBE_FLAG2_RSS_FIELD_IPV6_UDP;
+ break;
+ case (RXH_L4_B_0_1 | RXH_L4_B_2_3):
+ flags2 |= TXGBE_FLAG2_RSS_FIELD_IPV6_UDP;
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ case AH_ESP_V4_FLOW:
+ case AH_V4_FLOW:
+ case ESP_V4_FLOW:
+ case SCTP_V4_FLOW:
+ case AH_ESP_V6_FLOW:
+ case AH_V6_FLOW:
+ case ESP_V6_FLOW:
+ case SCTP_V6_FLOW:
+ if (!(nfc->data & RXH_IP_SRC) ||
+ !(nfc->data & RXH_IP_DST) ||
+ (nfc->data & RXH_L4_B_0_1) ||
+ (nfc->data & RXH_L4_B_2_3))
+ return -EINVAL;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ /* if we changed something we need to update flags */
+ if (flags2 != adapter->flags2) {
+ struct txgbe_hw *hw = &adapter->hw;
+ u32 mrqc;
+
+ mrqc = rd32(hw, TXGBE_RDB_RA_CTL);
+
+ if ((flags2 & UDP_RSS_FLAGS) &&
+ !(adapter->flags2 & UDP_RSS_FLAGS))
+ e_warn(drv, "enabling UDP RSS: fragmented packets"
+ " may arrive out of order to the stack above\n");
+
+ adapter->flags2 = flags2;
+
+ /* Perform hash on these packet types */
+ mrqc |= TXGBE_RDB_RA_CTL_RSS_IPV4
+ | TXGBE_RDB_RA_CTL_RSS_IPV4_TCP
+ | TXGBE_RDB_RA_CTL_RSS_IPV6
+ | TXGBE_RDB_RA_CTL_RSS_IPV6_TCP;
+
+ mrqc &= ~(TXGBE_RDB_RA_CTL_RSS_IPV4_UDP |
+ TXGBE_RDB_RA_CTL_RSS_IPV6_UDP);
+
+ if (flags2 & TXGBE_FLAG2_RSS_FIELD_IPV4_UDP)
+ mrqc |= TXGBE_RDB_RA_CTL_RSS_IPV4_UDP;
+
+ if (flags2 & TXGBE_FLAG2_RSS_FIELD_IPV6_UDP)
+ mrqc |= TXGBE_RDB_RA_CTL_RSS_IPV6_UDP;
+
+ wr32(hw, TXGBE_RDB_RA_CTL, mrqc);
+ }
+
+ return 0;
+}
+
+static int txgbe_set_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd)
+{
+ struct txgbe_adapter *adapter = netdev_priv(dev);
+ int ret = -EOPNOTSUPP;
+
+ switch (cmd->cmd) {
+ case ETHTOOL_SRXCLSRLINS:
+ ret = txgbe_add_ethtool_fdir_entry(adapter, cmd);
+ break;
+ case ETHTOOL_SRXCLSRLDEL:
+ ret = txgbe_del_ethtool_fdir_entry(adapter, cmd);
+ break;
+ case ETHTOOL_SRXFH:
+ ret = txgbe_set_rss_hash_opt(adapter, cmd);
+ break;
+ default:
+ break;
+ }
+
+ return ret;
+}
+
+static int txgbe_rss_indir_tbl_max(struct txgbe_adapter *adapter)
+{
+ return 64;
+}
+
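+/*
+ * RSS key and indirection table accessors backing "ethtool -x/-X <if>".
+ * User-supplied table entries are validated against the number of
+ * active Rx queues before being copied into rss_indir_tbl and pushed
+ * to hardware via txgbe_store_reta().
+ */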
+static u32 txgbe_get_rxfh_key_size(struct net_device *netdev)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+
+ return sizeof(adapter->rss_key);
+}
+
+static u32 txgbe_rss_indir_size(struct net_device *netdev)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+
+ return txgbe_rss_indir_tbl_entries(adapter);
+}
+
+static void txgbe_get_reta(struct txgbe_adapter *adapter, u32 *indir)
+{
+ int i, reta_size = txgbe_rss_indir_tbl_entries(adapter);
+
+ for (i = 0; i < reta_size; i++)
+ indir[i] = adapter->rss_indir_tbl[i];
+}
+
+static int txgbe_get_rxfh(struct net_device *netdev, u32 *indir, u8 *key,
+ u8 *hfunc)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+
+ if (hfunc)
+ *hfunc = ETH_RSS_HASH_TOP;
+
+ if (indir)
+ txgbe_get_reta(adapter, indir);
+
+ if (key)
+ memcpy(key, adapter->rss_key, txgbe_get_rxfh_key_size(netdev));
+
+ return 0;
+}
+
+static int txgbe_set_rxfh(struct net_device *netdev, const u32 *indir,
+ const u8 *key, const u8 hfunc)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+ int i;
+ u32 reta_entries = txgbe_rss_indir_tbl_entries(adapter);
+
+ if (hfunc)
+ return -EINVAL;
+
+ /* Fill out the redirection table */
+ if (indir) {
+ int max_queues = min_t(int, adapter->num_rx_queues,
+ txgbe_rss_indir_tbl_max(adapter));
+
+ /*Allow at least 2 queues w/ SR-IOV.*/
+ if ((adapter->flags & TXGBE_FLAG_SRIOV_ENABLED) &&
+ (max_queues < 2))
+ max_queues = 2;
+
+ /* Verify user input. */
+ for (i = 0; i < reta_entries; i++)
+ if (indir[i] >= max_queues)
+ return -EINVAL;
+
+ for (i = 0; i < reta_entries; i++)
+ adapter->rss_indir_tbl[i] = indir[i];
+ }
+
+ /* Fill out the rss hash key */
+ if (key)
+ memcpy(adapter->rss_key, key, txgbe_get_rxfh_key_size(netdev));
+
+ txgbe_store_reta(adapter);
+
+ return 0;
+}
+
+static int txgbe_get_ts_info(struct net_device *dev,
+ struct ethtool_ts_info *info)
+{
+ struct txgbe_adapter *adapter = netdev_priv(dev);
+
+ /* we always support timestamping disabled */
+ info->rx_filters = 1 << HWTSTAMP_FILTER_NONE;
+
+ info->so_timestamping =
+ SOF_TIMESTAMPING_TX_SOFTWARE |
+ SOF_TIMESTAMPING_RX_SOFTWARE |
+ SOF_TIMESTAMPING_SOFTWARE |
+ SOF_TIMESTAMPING_TX_HARDWARE |
+ SOF_TIMESTAMPING_RX_HARDWARE |
+ SOF_TIMESTAMPING_RAW_HARDWARE;
+
+ if (adapter->ptp_clock)
+ info->phc_index = ptp_clock_index(adapter->ptp_clock);
+ else
+ info->phc_index = -1;
+
+ info->tx_types =
+ (1 << HWTSTAMP_TX_OFF) |
+ (1 << HWTSTAMP_TX_ON);
+
+ info->rx_filters |=
+ (1 << HWTSTAMP_FILTER_PTP_V1_L4_SYNC) |
+ (1 << HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_L2_EVENT) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_L4_EVENT) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_SYNC) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_L2_SYNC) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_L4_SYNC) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_DELAY_REQ) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_EVENT);
+
+ return 0;
+}
+
+static unsigned int txgbe_max_channels(struct txgbe_adapter *adapter)
+{
+ unsigned int max_combined;
+ u8 tcs = netdev_get_num_tc(adapter->netdev);
+
+ if (!(adapter->flags & TXGBE_FLAG_MSIX_ENABLED)) {
+ /* We only support one q_vector without MSI-X */
+ max_combined = 1;
+ } else if (adapter->flags & TXGBE_FLAG_SRIOV_ENABLED) {
+ /* SR-IOV currently only allows one queue on the PF */
+ max_combined = 1;
+ } else if (tcs > 1) {
+ /* For DCB report channels per traffic class */
+ if (tcs > 4) {
+ /* 8 TC w/ 8 queues per TC */
+ max_combined = 8;
+ } else {
+ /* 4 TC w/ 16 queues per TC */
+ max_combined = 16;
+ }
+ } else if (adapter->atr_sample_rate) {
+ /* support up to 64 queues with ATR */
+ max_combined = TXGBE_MAX_FDIR_INDICES;
+ } else {
+ /* support up to max allowed queues with RSS */
+ max_combined = txgbe_max_rss_indices(adapter);
+ }
+
+ return max_combined;
+}
+
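+/*
+ * Channel (queue pair) reporting for "ethtool -l/-L <if>". Only
+ * combined channels are exposed; the ceiling computed above depends on
+ * whether MSI-X, SR-IOV, DCB or ATR is active.
+ */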
+static void txgbe_get_channels(struct net_device *dev,
+ struct ethtool_channels *ch)
+{
+ struct txgbe_adapter *adapter = netdev_priv(dev);
+
+ /* report maximum channels */
+ ch->max_combined = txgbe_max_channels(adapter);
+
+ /* report info for other vector */
+ if (adapter->flags & TXGBE_FLAG_MSIX_ENABLED) {
+ ch->max_other = NON_Q_VECTORS;
+ ch->other_count = NON_Q_VECTORS;
+ }
+
+ /* record RSS queues */
+ ch->combined_count = adapter->ring_feature[RING_F_RSS].indices;
+
+ /* nothing else to report if RSS is disabled */
+ if (ch->combined_count == 1)
+ return;
+
+ /* we do not support ATR queueing if SR-IOV is enabled */
+ if (adapter->flags & TXGBE_FLAG_SRIOV_ENABLED)
+ return;
+
+ /* same thing goes for being DCB enabled */
+ if (netdev_get_num_tc(dev) > 1)
+ return;
+
+ /* if ATR is disabled we can exit */
+ if (!adapter->atr_sample_rate)
+ return;
+
+ /* report flow director queues as maximum channels */
+ ch->combined_count = adapter->ring_feature[RING_F_FDIR].indices;
+}
+
+static int txgbe_set_channels(struct net_device *dev,
+ struct ethtool_channels *ch)
+{
+ struct txgbe_adapter *adapter = netdev_priv(dev);
+ unsigned int count = ch->combined_count;
+ u8 max_rss_indices = txgbe_max_rss_indices(adapter);
+
+ /* verify they are not requesting separate vectors */
+ if (!count || ch->rx_count || ch->tx_count)
+ return -EINVAL;
+
+ /* verify other_count has not changed */
+ if (ch->other_count != NON_Q_VECTORS)
+ return -EINVAL;
+
+ /* verify the number of channels does not exceed hardware limits */
+ if (count > txgbe_max_channels(adapter))
+ return -EINVAL;
+
+ /* update feature limits from largest to smallest supported values */
+ adapter->ring_feature[RING_F_FDIR].limit = count;
+
+ /* cap RSS limit */
+ if (count > max_rss_indices)
+ count = max_rss_indices;
+ adapter->ring_feature[RING_F_RSS].limit = count;
+
+ /* use setup TC to update any traffic class queue mapping */
+ return txgbe_setup_tc(dev, netdev_get_num_tc(dev));
+}
+
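+/*
+ * SFP/SFP+ module EEPROM access ("ethtool -m <if>"): report SFF-8472
+ * when the module advertises it and the A2h page is reachable,
+ * otherwise fall back to the smaller SFF-8079 map.
+ */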
+static int txgbe_get_module_info(struct net_device *dev,
+ struct ethtool_modinfo *modinfo)
+{
+ struct txgbe_adapter *adapter = netdev_priv(dev);
+ struct txgbe_hw *hw = &adapter->hw;
+ u32 status;
+ u8 sff8472_rev, addr_mode;
+ bool page_swap = false;
+
+ /* Check whether we support SFF-8472 or not */
+ status = TCALL(hw, phy.ops.read_i2c_eeprom,
+ TXGBE_SFF_SFF_8472_COMP,
+ &sff8472_rev);
+ if (status != 0)
+ return -EIO;
+
+ /* addressing mode is not supported */
+ status = TCALL(hw, phy.ops.read_i2c_eeprom,
+ TXGBE_SFF_SFF_8472_SWAP,
+ &addr_mode);
+ if (status != 0)
+ return -EIO;
+
+ if (addr_mode & TXGBE_SFF_ADDRESSING_MODE) {
+ e_err(drv, "Address change required to access page 0xA2, "
+ "but not supported. Please report the module type to the "
+ "driver maintainers.\n");
+ page_swap = true;
+ }
+
+ if (sff8472_rev == TXGBE_SFF_SFF_8472_UNSUP || page_swap) {
+ /* We have a SFP, but it does not support SFF-8472 */
+ modinfo->type = ETH_MODULE_SFF_8079;
+ modinfo->eeprom_len = ETH_MODULE_SFF_8079_LEN;
+ } else {
+ /* We have a SFP which supports a revision of SFF-8472. */
+ modinfo->type = ETH_MODULE_SFF_8472;
+ modinfo->eeprom_len = ETH_MODULE_SFF_8472_LEN;
+ }
+
+ return 0;
+}
+
+static int txgbe_get_module_eeprom(struct net_device *dev,
+ struct ethtool_eeprom *ee,
+ u8 *data)
+{
+ struct txgbe_adapter *adapter = netdev_priv(dev);
+ struct txgbe_hw *hw = &adapter->hw;
+ u32 status = TXGBE_ERR_PHY_ADDR_INVALID;
+ u8 databyte = 0xFF;
+ int i = 0;
+
+ if (ee->len == 0)
+ return -EINVAL;
+
+ for (i = ee->offset; i < ee->offset + ee->len; i++) {
+		/* I2C reads can take a long time */
+ if (test_bit(__TXGBE_IN_SFP_INIT, &adapter->state))
+ return -EBUSY;
+
+ if (i < ETH_MODULE_SFF_8079_LEN)
+ status = TCALL(hw, phy.ops.read_i2c_eeprom, i,
+ &databyte);
+ else
+ status = TCALL(hw, phy.ops.read_i2c_sff8472, i,
+ &databyte);
+
+ if (status != 0)
+ return -EIO;
+
+ data[i - ee->offset] = databyte;
+ }
+
+ return 0;
+}
+
+static int txgbe_get_eee(struct net_device *netdev, struct ethtool_eee *edata)
+{
+ return 0;
+}
+
+static int txgbe_set_eee(struct net_device *netdev, struct ethtool_eee *edata)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+ struct txgbe_hw *hw = &adapter->hw;
+ struct ethtool_eee eee_data;
+ s32 ret_val;
+
+ if (!(hw->mac.ops.setup_eee &&
+ (adapter->flags2 & TXGBE_FLAG2_EEE_CAPABLE)))
+ return -EOPNOTSUPP;
+
+ memset(&eee_data, 0, sizeof(struct ethtool_eee));
+
+ ret_val = txgbe_get_eee(netdev, &eee_data);
+ if (ret_val)
+ return ret_val;
+
+ if (eee_data.eee_enabled && !edata->eee_enabled) {
+ if (eee_data.tx_lpi_enabled != edata->tx_lpi_enabled) {
+ e_dev_err("Setting EEE tx-lpi is not supported\n");
+ return -EINVAL;
+ }
+
+ if (eee_data.tx_lpi_timer != edata->tx_lpi_timer) {
+ e_dev_err("Setting EEE Tx LPI timer is not "
+ "supported\n");
+ return -EINVAL;
+ }
+
+ if (eee_data.advertised != edata->advertised) {
+ e_dev_err("Setting EEE advertised speeds is not "
+ "supported\n");
+ return -EINVAL;
+ }
+	}
+
+	if (eee_data.eee_enabled != edata->eee_enabled) {
+ if (edata->eee_enabled)
+ adapter->flags2 |= TXGBE_FLAG2_EEE_ENABLED;
+ else
+ adapter->flags2 &= ~TXGBE_FLAG2_EEE_ENABLED;
+
+ /* reset link */
+ if (netif_running(netdev))
+ txgbe_reinit_locked(adapter);
+ else
+ txgbe_reset(adapter);
+ }
+
+ return 0;
+}
+
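+/*
+ * Firmware flashing via "ethtool -f <if> <image> [region]". The image
+ * is fetched with request_firmware() and handed to the management
+ * firmware, so this only works when the manageability block is
+ * present.
+ */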
+static int txgbe_set_flash(struct net_device *netdev, struct ethtool_flash *ef)
+{
+ int ret;
+ const struct firmware *fw;
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+
+ ret = request_firmware(&fw, ef->data, &netdev->dev);
+ if (ret < 0)
+ return ret;
+
+ if (txgbe_mng_present(&adapter->hw)) {
+ ret = txgbe_upgrade_flash_hostif(&adapter->hw, ef->region,
+ fw->data, fw->size);
+	} else {
+		ret = -EOPNOTSUPP;
+	}
+
+ release_firmware(fw);
+ if (!ret)
+ dev_info(&netdev->dev,
+ "loaded firmware %s, reload txgbe driver\n", ef->data);
+ return ret;
+}
+
+static const struct ethtool_ops txgbe_ethtool_ops = {
+ .get_link_ksettings = txgbe_get_link_ksettings,
+ .set_link_ksettings = txgbe_set_link_ksettings,
+ .get_drvinfo = txgbe_get_drvinfo,
+ .get_regs_len = txgbe_get_regs_len,
+ .get_regs = txgbe_get_regs,
+ .get_wol = txgbe_get_wol,
+ .set_wol = txgbe_set_wol,
+ .nway_reset = txgbe_nway_reset,
+ .get_link = ethtool_op_get_link,
+ .get_eeprom_len = txgbe_get_eeprom_len,
+ .get_eeprom = txgbe_get_eeprom,
+ .set_eeprom = txgbe_set_eeprom,
+ .get_ringparam = txgbe_get_ringparam,
+ .set_ringparam = txgbe_set_ringparam,
+ .get_pauseparam = txgbe_get_pauseparam,
+ .set_pauseparam = txgbe_set_pauseparam,
+ .get_msglevel = txgbe_get_msglevel,
+ .set_msglevel = txgbe_set_msglevel,
+ .self_test = txgbe_diag_test,
+ .get_strings = txgbe_get_strings,
+ .set_phys_id = txgbe_set_phys_id,
+ .get_sset_count = txgbe_get_sset_count,
+ .get_ethtool_stats = txgbe_get_ethtool_stats,
+ .get_coalesce = txgbe_get_coalesce,
+ .set_coalesce = txgbe_set_coalesce,
+ .get_rxnfc = txgbe_get_rxnfc,
+ .set_rxnfc = txgbe_set_rxnfc,
+ .get_eee = txgbe_get_eee,
+ .set_eee = txgbe_set_eee,
+ .get_channels = txgbe_get_channels,
+ .set_channels = txgbe_set_channels,
+ .get_module_info = txgbe_get_module_info,
+ .get_module_eeprom = txgbe_get_module_eeprom,
+ .get_ts_info = txgbe_get_ts_info,
+ .get_rxfh_indir_size = txgbe_rss_indir_size,
+ .get_rxfh_key_size = txgbe_get_rxfh_key_size,
+ .get_rxfh = txgbe_get_rxfh,
+ .set_rxfh = txgbe_set_rxfh,
+ .flash_device = txgbe_set_flash,
+};
+
+void txgbe_set_ethtool_ops(struct net_device *netdev)
+{
+ netdev->ethtool_ops = &txgbe_ethtool_ops;
+}
diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe_hw.c b/drivers/net/ethernet/netswift/txgbe/txgbe_hw.c
new file mode 100644
index 0000000000000..17e366ebd6fe4
--- /dev/null
+++ b/drivers/net/ethernet/netswift/txgbe/txgbe_hw.c
@@ -0,0 +1,7072 @@
+/*
+ * WangXun 10 Gigabit PCI Express Linux driver
+ * Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * based on ixgbe_82599.c, Copyright(c) 1999 - 2017 Intel Corporation.
+ * Contact Information:
+ * Linux NICS <linux.nics(a)intel.com>
+ * e1000-devel Mailing List <e1000-devel(a)lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+
+#include "txgbe_type.h"
+#include "txgbe_hw.h"
+#include "txgbe_phy.h"
+#include "txgbe.h"
+
+
+#define TXGBE_SP_MAX_TX_QUEUES 128
+#define TXGBE_SP_MAX_RX_QUEUES 128
+#define TXGBE_SP_RAR_ENTRIES 128
+#define TXGBE_SP_MC_TBL_SIZE 128
+#define TXGBE_SP_VFT_TBL_SIZE 128
+#define TXGBE_SP_RX_PB_SIZE 512
+
+STATIC s32 txgbe_get_eeprom_semaphore(struct txgbe_hw *hw);
+STATIC void txgbe_release_eeprom_semaphore(struct txgbe_hw *hw);
+STATIC s32 txgbe_mta_vector(struct txgbe_hw *hw, u8 *mc_addr);
+STATIC s32 txgbe_get_san_mac_addr_offset(struct txgbe_hw *hw,
+ u16 *san_mac_offset);
+
+STATIC s32 txgbe_setup_copper_link(struct txgbe_hw *hw,
+ u32 speed,
+ bool autoneg_wait_to_complete);
+s32 txgbe_check_mac_link(struct txgbe_hw *hw, u32 *speed,
+ bool *link_up, bool link_up_wait_to_complete);
+
+
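+/*
+ * EPHY/XPCS registers are reached indirectly: write the target register
+ * offset to the IDA_ADDR window, then read or write IDA_DATA.
+ */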
+u32 rd32_ephy(struct txgbe_hw *hw, u32 addr)
+{
+ unsigned int portRegOffset;
+ u32 data;
+
+ /* Set the LAN port indicator to portRegOffset[1] */
+ /* 1st, write the regOffset to IDA_ADDR register */
+ portRegOffset = TXGBE_ETHPHY_IDA_ADDR;
+ wr32(hw, portRegOffset, addr);
+
+ /* 2nd, read the data from IDA_DATA register */
+ portRegOffset = TXGBE_ETHPHY_IDA_DATA;
+ data = rd32(hw, portRegOffset);
+ return data;
+}
+
+
+u32 txgbe_rd32_epcs(struct txgbe_hw *hw, u32 addr)
+{
+ unsigned int portRegOffset;
+ u32 data;
+ /* Set the LAN port indicator to portRegOffset[1] */
+ /* 1st, write the regOffset to IDA_ADDR register */
+ portRegOffset = TXGBE_XPCS_IDA_ADDR;
+ wr32(hw, portRegOffset, addr);
+
+ /* 2nd, read the data from IDA_DATA register */
+ portRegOffset = TXGBE_XPCS_IDA_DATA;
+ data = rd32(hw, portRegOffset);
+
+ return data;
+}
+
+
+void txgbe_wr32_ephy(struct txgbe_hw *hw, u32 addr, u32 data)
+{
+ unsigned int portRegOffset;
+
+ /* Set the LAN port indicator to portRegOffset[1] */
+ /* 1st, write the regOffset to IDA_ADDR register */
+ portRegOffset = TXGBE_ETHPHY_IDA_ADDR;
+ wr32(hw, portRegOffset, addr);
+
+ /* 2nd, write the data to the IDA_DATA register */
+ portRegOffset = TXGBE_ETHPHY_IDA_DATA;
+ wr32(hw, portRegOffset, data);
+}
+
+void txgbe_wr32_epcs(struct txgbe_hw *hw, u32 addr, u32 data)
+{
+ unsigned int portRegOffset;
+
+ /* Set the LAN port indicator to portRegOffset[1] */
+ /* 1st, write the regOffset to IDA_ADDR register */
+ portRegOffset = TXGBE_XPCS_IDA_ADDR;
+ wr32(hw, portRegOffset, addr);
+
+ /* 2nd, write the data to the IDA_DATA register */
+ portRegOffset = TXGBE_XPCS_IDA_DATA;
+ wr32(hw, portRegOffset, data);
+}
+
+/**
+ * txgbe_get_pcie_msix_count - Gets MSI-X vector count
+ * @hw: pointer to hardware structure
+ *
+ * Read PCIe configuration space, and get the MSI-X vector count from
+ * the capabilities table.
+ **/
+u16 txgbe_get_pcie_msix_count(struct txgbe_hw *hw)
+{
+ u16 msix_count = 1;
+ u16 max_msix_count;
+ u32 pos;
+
+ DEBUGFUNC("\n");
+
+ max_msix_count = TXGBE_MAX_MSIX_VECTORS_SAPPHIRE;
+ pos = pci_find_capability(((struct txgbe_adapter *)hw->back)->pdev, PCI_CAP_ID_MSIX);
+ if (!pos)
+ return msix_count;
+ pci_read_config_word(((struct txgbe_adapter *)hw->back)->pdev,
+ pos + PCI_MSIX_FLAGS, &msix_count);
+
+ if (TXGBE_REMOVED(hw->hw_addr))
+ msix_count = 0;
+ msix_count &= TXGBE_PCIE_MSIX_TBL_SZ_MASK;
+
+ /* MSI-X count is zero-based in HW */
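+ /* e.g. a masked register value of 63 reports 64 usable vectors */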
+ msix_count++;
+
+ if (msix_count > max_msix_count)
+ msix_count = max_msix_count;
+
+ return msix_count;
+}
+
+/**
+ * txgbe_init_hw - Generic hardware initialization
+ * @hw: pointer to hardware structure
+ *
+ * Initialize the hardware by resetting it, filling the bus info structure
+ * and media type, clearing all on-chip counters, initializing the receive
+ * address registers, multicast table and VLAN filter table, calling the
+ * routine to set up link and flow control settings, and leaving the
+ * transmit and receive units disabled and uninitialized
+ **/
+s32 txgbe_init_hw(struct txgbe_hw *hw)
+{
+ s32 status;
+
+ DEBUGFUNC("\n");
+
+ /* Reset the hardware */
+ status = TCALL(hw, mac.ops.reset_hw);
+
+ if (status == 0) {
+ /* Start the HW */
+ status = TCALL(hw, mac.ops.start_hw);
+ }
+
+ return status;
+}
+
+
+/**
+ * txgbe_clear_hw_cntrs - Generic clear hardware counters
+ * @hw: pointer to hardware structure
+ *
+ * Clears all hardware statistics counters by reading them from the hardware.
+ * Statistics counters are clear on read.
+ **/
+s32 txgbe_clear_hw_cntrs(struct txgbe_hw *hw)
+{
+ u16 i = 0;
+
+ DEBUGFUNC("\n");
+
+ rd32(hw, TXGBE_RX_CRC_ERROR_FRAMES_LOW);
+ for (i = 0; i < 8; i++)
+ rd32(hw, TXGBE_RDB_MPCNT(i));
+
+ rd32(hw, TXGBE_RX_LEN_ERROR_FRAMES_LOW);
+ rd32(hw, TXGBE_RDB_LXONTXC);
+ rd32(hw, TXGBE_RDB_LXOFFTXC);
+ rd32(hw, TXGBE_MAC_LXONRXC);
+ rd32(hw, TXGBE_MAC_LXOFFRXC);
+
+ for (i = 0; i < 8; i++) {
+ rd32(hw, TXGBE_RDB_PXONTXC(i));
+ rd32(hw, TXGBE_RDB_PXOFFTXC(i));
+ rd32(hw, TXGBE_MAC_PXONRXC(i));
+ wr32m(hw, TXGBE_MMC_CONTROL, TXGBE_MMC_CONTROL_UP, i<<16);
+ rd32(hw, TXGBE_MAC_PXOFFRXC);
+ }
+ for (i = 0; i < 8; i++)
+ rd32(hw, TXGBE_RDB_PXON2OFFCNT(i));
+ for (i = 0; i < 128; i++) {
+ wr32(hw, TXGBE_PX_MPRC(i), 0);
+ }
+
+ rd32(hw, TXGBE_PX_GPRC);
+ rd32(hw, TXGBE_PX_GPTC);
+ rd32(hw, TXGBE_PX_GORC_MSB);
+ rd32(hw, TXGBE_PX_GOTC_MSB);
+
+ rd32(hw, TXGBE_RX_BC_FRAMES_GOOD_LOW);
+ rd32(hw, TXGBE_RX_UNDERSIZE_FRAMES_GOOD);
+ rd32(hw, TXGBE_RX_OVERSIZE_FRAMES_GOOD);
+ rd32(hw, TXGBE_RX_FRAME_CNT_GOOD_BAD_LOW);
+ rd32(hw, TXGBE_TX_FRAME_CNT_GOOD_BAD_LOW);
+ rd32(hw, TXGBE_TX_MC_FRAMES_GOOD_LOW);
+ rd32(hw, TXGBE_TX_BC_FRAMES_GOOD_LOW);
+ rd32(hw, TXGBE_RDM_DRP_PKT);
+ return 0;
+}
+
+/**
+ * txgbe_device_supports_autoneg_fc - Check if device supports autonegotiation
+ * of flow control
+ * @hw: pointer to hardware structure
+ *
+ * This function returns true if the device supports flow control
+ * autonegotiation, and false if it does not.
+ *
+ **/
+bool txgbe_device_supports_autoneg_fc(struct txgbe_hw *hw)
+{
+ bool supported = false;
+ u32 speed;
+ bool link_up;
+ u8 device_type = hw->subsystem_id & 0xF0;
+
+ DEBUGFUNC("\n");
+
+ switch (hw->phy.media_type) {
+ case txgbe_media_type_fiber:
+ TCALL(hw, mac.ops.check_link, &speed, &link_up, false);
+ /* if link is down, assume supported */
+ if (link_up)
+ supported = (speed == TXGBE_LINK_SPEED_1GB_FULL);
+ else
+ supported = true;
+ break;
+ case txgbe_media_type_backplane:
+ supported = (device_type != TXGBE_ID_MAC_XAUI &&
+ device_type != TXGBE_ID_MAC_SGMII);
+ break;
+ case txgbe_media_type_copper:
+ /* only some copper devices support flow control autoneg */
+ supported = true;
+ break;
+ default:
+ break;
+ }
+
+ if (!supported)
+ ERROR_REPORT2(TXGBE_ERROR_UNSUPPORTED,
+ "Device %x does not support flow control autoneg",
+ hw->device_id);
+ return supported;
+}
+
+/**
+ * txgbe_setup_fc - Set up flow control
+ * @hw: pointer to hardware structure
+ *
+ * Called at init time to set up flow control.
+ **/
+s32 txgbe_setup_fc(struct txgbe_hw *hw)
+{
+ s32 ret_val = 0;
+ u32 pcap = 0;
+ u32 value = 0;
+ u32 pcap_backplane = 0;
+
+ DEBUGFUNC("\n");
+
+ /* Validate the requested mode */
+ if (hw->fc.strict_ieee && hw->fc.requested_mode == txgbe_fc_rx_pause) {
+ ERROR_REPORT1(TXGBE_ERROR_UNSUPPORTED,
+ "txgbe_fc_rx_pause not valid in strict IEEE mode\n");
+ ret_val = TXGBE_ERR_INVALID_LINK_SETTINGS;
+ goto out;
+ }
+
+ /*
+ * 10gig parts do not have a word in the EEPROM to determine the
+ * default flow control setting, so we explicitly set it to full.
+ */
+ if (hw->fc.requested_mode == txgbe_fc_default)
+ hw->fc.requested_mode = txgbe_fc_full;
+
+ /*
+ * Set up the 1G and 10G flow control advertisement registers so the
+ * HW will be able to do fc autoneg once the cable is plugged in. If
+ * we link at 10G, the 1G advertisement is harmless and vice versa.
+ */
+
+ /*
+ * The possible values of fc.requested_mode are:
+ * 0: Flow control is completely disabled
+ * 1: Rx flow control is enabled (we can receive pause frames,
+ * but not send pause frames).
+ * 2: Tx flow control is enabled (we can send pause frames but
+ * we do not support receiving pause frames).
+ * 3: Both Rx and Tx flow control (symmetric) are enabled.
+ * other: Invalid.
+ */
+ switch (hw->fc.requested_mode) {
+ case txgbe_fc_none:
+ /* Flow control completely disabled by software override. */
+ break;
+ case txgbe_fc_tx_pause:
+ /*
+ * Tx Flow control is enabled, and Rx Flow control is
+ * disabled by software override.
+ */
+ pcap |= TXGBE_SR_MII_MMD_AN_ADV_PAUSE_ASM;
+ pcap_backplane |= TXGBE_SR_AN_MMD_ADV_REG1_PAUSE_ASM;
+ break;
+ case txgbe_fc_rx_pause:
+ /*
+ * Rx Flow control is enabled and Tx Flow control is
+ * disabled by software override. Since there really
+ * isn't a way to advertise that we are capable of RX
+ * Pause ONLY, we will advertise that we support both
+ * symmetric and asymmetric Rx PAUSE, as such we fall
+ * through to the fc_full statement. Later, we will
+ * disable the adapter's ability to send PAUSE frames.
+ */
+ case txgbe_fc_full:
+ /* Flow control (both Rx and Tx) is enabled by SW override. */
+ pcap |= TXGBE_SR_MII_MMD_AN_ADV_PAUSE_SYM |
+ TXGBE_SR_MII_MMD_AN_ADV_PAUSE_ASM;
+ pcap_backplane |= TXGBE_SR_AN_MMD_ADV_REG1_PAUSE_SYM |
+ TXGBE_SR_AN_MMD_ADV_REG1_PAUSE_ASM;
+ break;
+ default:
+ ERROR_REPORT1(TXGBE_ERROR_ARGUMENT,
+ "Flow control param set incorrectly\n");
+ ret_val = TXGBE_ERR_CONFIG;
+ goto out;
+ }
+
+ /*
+ * Enable auto-negotiation between the MAC & PHY;
+ * the MAC will advertise clause 37 flow control.
+ */
+ value = txgbe_rd32_epcs(hw, TXGBE_SR_MII_MMD_AN_ADV);
+ value = (value & ~(TXGBE_SR_MII_MMD_AN_ADV_PAUSE_ASM |
+ TXGBE_SR_MII_MMD_AN_ADV_PAUSE_SYM)) | pcap;
+ txgbe_wr32_epcs(hw, TXGBE_SR_MII_MMD_AN_ADV, value);
+
+ /*
+ * AUTOC restart handles negotiation of 1G and 10G on backplane
+ * and copper.
+ */
+ if (hw->phy.media_type == txgbe_media_type_backplane) {
+ value = txgbe_rd32_epcs(hw, TXGBE_SR_AN_MMD_ADV_REG1);
+ value = (value & ~(TXGBE_SR_AN_MMD_ADV_REG1_PAUSE_ASM |
+ TXGBE_SR_AN_MMD_ADV_REG1_PAUSE_SYM)) |
+ pcap_backplane;
+ txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_ADV_REG1, value);
+
+ } else if ((hw->phy.media_type == txgbe_media_type_copper) &&
+ (txgbe_device_supports_autoneg_fc(hw))) {
+ ret_val = txgbe_set_phy_pause_advertisement(hw, pcap_backplane);
+ }
+out:
+ return ret_val;
+}
+
+/**
+ * txgbe_read_pba_string - Reads part number string from EEPROM
+ * @hw: pointer to hardware structure
+ * @pba_num: stores the part number string from the EEPROM
+ * @pba_num_size: part number string buffer length
+ *
+ * Reads the part number string from the EEPROM.
+ **/
+s32 txgbe_read_pba_string(struct txgbe_hw *hw, u8 *pba_num,
+ u32 pba_num_size)
+{
+ s32 ret_val;
+ u16 data;
+ u16 pba_ptr;
+ u16 offset;
+ u16 length;
+
+ DEBUGFUNC("\n");
+
+ if (pba_num == NULL) {
+ DEBUGOUT("PBA string buffer was null\n");
+ return TXGBE_ERR_INVALID_ARGUMENT;
+ }
+
+ ret_val = TCALL(hw, eeprom.ops.read,
+ hw->eeprom.sw_region_offset + TXGBE_PBANUM0_PTR,
+ &data);
+ if (ret_val) {
+ DEBUGOUT("NVM Read Error\n");
+ return ret_val;
+ }
+
+ ret_val = TCALL(hw, eeprom.ops.read,
+ hw->eeprom.sw_region_offset + TXGBE_PBANUM1_PTR,
+ &pba_ptr);
+ if (ret_val) {
+ DEBUGOUT("NVM Read Error\n");
+ return ret_val;
+ }
+
+ /*
+ * if data is not the ptr guard, the PBA is stored in legacy format;
+ * pba_ptr is then actually the second data word of the PBA number
+ * and we can decode it into an ASCII string
+ */
+ if (data != TXGBE_PBANUM_PTR_GUARD) {
+ DEBUGOUT("NVM PBA number is not stored as string\n");
+
+ /* we will need 11 characters to store the PBA */
+ if (pba_num_size < 11) {
+ DEBUGOUT("PBA string buffer too small\n");
+ return TXGBE_ERR_NO_SPACE;
+ }
+
+ /* extract hex string from data and pba_ptr */
+ pba_num[0] = (data >> 12) & 0xF;
+ pba_num[1] = (data >> 8) & 0xF;
+ pba_num[2] = (data >> 4) & 0xF;
+ pba_num[3] = data & 0xF;
+ pba_num[4] = (pba_ptr >> 12) & 0xF;
+ pba_num[5] = (pba_ptr >> 8) & 0xF;
+ pba_num[6] = '-';
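+ /* index 7 is a 0 nibble; the hex-to-char loop below renders it as the literal '0' of the xxxxxx-0xx PBA form */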
+ pba_num[7] = 0;
+ pba_num[8] = (pba_ptr >> 4) & 0xF;
+ pba_num[9] = pba_ptr & 0xF;
+
+ /* put a null character on the end of our string */
+ pba_num[10] = '\0';
+
+ /* switch all the data but the '-' to hex char */
+ for (offset = 0; offset < 10; offset++) {
+ if (pba_num[offset] < 0xA)
+ pba_num[offset] += '0';
+ else if (pba_num[offset] < 0x10)
+ pba_num[offset] += 'A' - 0xA;
+ }
+
+ return 0;
+ }
+
+ ret_val = TCALL(hw, eeprom.ops.read, pba_ptr, &length);
+ if (ret_val) {
+ DEBUGOUT("NVM Read Error\n");
+ return ret_val;
+ }
+
+ if (length == 0xFFFF || length == 0) {
+ DEBUGOUT("NVM PBA number section invalid length\n");
+ return TXGBE_ERR_PBA_SECTION;
+ }
+
+ /* check if pba_num buffer is big enough */
+ if (pba_num_size < (((u32)length * 2) - 1)) {
+ DEBUGOUT("PBA string buffer too small\n");
+ return TXGBE_ERR_NO_SPACE;
+ }
+
+ /* trim pba length from start of string */
+ pba_ptr++;
+ length--;
+
+ for (offset = 0; offset < length; offset++) {
+ ret_val = TCALL(hw, eeprom.ops.read, pba_ptr + offset, &data);
+ if (ret_val) {
+ DEBUGOUT("NVM Read Error\n");
+ return ret_val;
+ }
+ pba_num[offset * 2] = (u8)(data >> 8);
+ pba_num[(offset * 2) + 1] = (u8)(data & 0xFF);
+ }
+ pba_num[offset * 2] = '\0';
+
+ return 0;
+}
+
+/**
+ * txgbe_get_mac_addr - Generic get MAC address
+ * @hw: pointer to hardware structure
+ * @mac_addr: Adapter MAC address
+ *
+ * Reads the adapter's MAC address from first Receive Address Register (RAR0)
+ * A reset of the adapter must be performed prior to calling this function
+ * in order for the MAC address to have been loaded from the EEPROM into RAR0
+ **/
+s32 txgbe_get_mac_addr(struct txgbe_hw *hw, u8 *mac_addr)
+{
+ u32 rar_high;
+ u32 rar_low;
+ u16 i;
+
+ DEBUGFUNC("\n");
+
+ wr32(hw, TXGBE_PSR_MAC_SWC_IDX, 0);
+ rar_high = rd32(hw, TXGBE_PSR_MAC_SWC_AD_H);
+ rar_low = rd32(hw, TXGBE_PSR_MAC_SWC_AD_L);
+
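+ /* AD_H bits 15:8 and 7:0 hold addr bytes 0 and 1; AD_L holds bytes 2-5, most significant byte first */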
+ for (i = 0; i < 2; i++)
+ mac_addr[i] = (u8)(rar_high >> (1 - i) * 8);
+
+ for (i = 0; i < 4; i++)
+ mac_addr[i + 2] = (u8)(rar_low >> (3 - i) * 8);
+
+ return 0;
+}
+
+/**
+ * txgbe_set_pci_config_data - Generic store PCI bus info
+ * @hw: pointer to hardware structure
+ * @link_status: the link status returned by the PCI config space
+ *
+ * Stores the PCI bus info (speed, width, type) within the txgbe_hw structure
+ **/
+void txgbe_set_pci_config_data(struct txgbe_hw *hw, u16 link_status)
+{
+ if (hw->bus.type == txgbe_bus_type_unknown)
+ hw->bus.type = txgbe_bus_type_pci_express;
+
+ switch (link_status & TXGBE_PCI_LINK_WIDTH) {
+ case TXGBE_PCI_LINK_WIDTH_1:
+ hw->bus.width = txgbe_bus_width_pcie_x1;
+ break;
+ case TXGBE_PCI_LINK_WIDTH_2:
+ hw->bus.width = txgbe_bus_width_pcie_x2;
+ break;
+ case TXGBE_PCI_LINK_WIDTH_4:
+ hw->bus.width = txgbe_bus_width_pcie_x4;
+ break;
+ case TXGBE_PCI_LINK_WIDTH_8:
+ hw->bus.width = txgbe_bus_width_pcie_x8;
+ break;
+ default:
+ hw->bus.width = txgbe_bus_width_unknown;
+ break;
+ }
+
+ switch (link_status & TXGBE_PCI_LINK_SPEED) {
+ case TXGBE_PCI_LINK_SPEED_2500:
+ hw->bus.speed = txgbe_bus_speed_2500;
+ break;
+ case TXGBE_PCI_LINK_SPEED_5000:
+ hw->bus.speed = txgbe_bus_speed_5000;
+ break;
+ case TXGBE_PCI_LINK_SPEED_8000:
+ hw->bus.speed = txgbe_bus_speed_8000;
+ break;
+ default:
+ hw->bus.speed = txgbe_bus_speed_unknown;
+ break;
+ }
+
+}
+
+/**
+ * txgbe_get_bus_info - Generic set PCI bus info
+ * @hw: pointer to hardware structure
+ *
+ * Gets the PCI bus info (speed, width, type) then calls helper function to
+ * store this data within the txgbe_hw structure.
+ **/
+s32 txgbe_get_bus_info(struct txgbe_hw *hw)
+{
+ u16 link_status;
+
+ DEBUGFUNC("\n");
+
+ /* Get the negotiated link width and speed from PCI config space */
+ link_status = txgbe_read_pci_cfg_word(hw, TXGBE_PCI_LINK_STATUS);
+
+ txgbe_set_pci_config_data(hw, link_status);
+
+ return 0;
+}
+
+/**
+ * txgbe_set_lan_id_multi_port_pcie - Set LAN id for PCIe multiple port devices
+ * @hw: pointer to the HW structure
+ *
+ * Determines the LAN function id by reading memory-mapped registers
+ * and swaps the port value if requested.
+ **/
+void txgbe_set_lan_id_multi_port_pcie(struct txgbe_hw *hw)
+{
+ struct txgbe_bus_info *bus = &hw->bus;
+ u32 reg;
+
+ DEBUGFUNC("\n");
+
+ reg = rd32(hw, TXGBE_CFG_PORT_ST);
+ bus->lan_id = TXGBE_CFG_PORT_ST_LAN_ID(reg);
+
+ /* check for a port swap */
+ reg = rd32(hw, TXGBE_MIS_PWR);
+ if (TXGBE_MIS_PWR_LAN_ID_1 == TXGBE_MIS_PWR_LAN_ID(reg))
+ bus->func = 0;
+ else
+ bus->func = bus->lan_id;
+}
+
+/**
+ * txgbe_stop_adapter - Generic stop Tx/Rx units
+ * @hw: pointer to hardware structure
+ *
+ * Sets the adapter_stopped flag within txgbe_hw struct. Clears interrupts,
+ * disables transmit and receive units. The adapter_stopped flag is used by
+ * the shared code and drivers to determine if the adapter is in a stopped
+ * state and should not touch the hardware.
+ **/
+s32 txgbe_stop_adapter(struct txgbe_hw *hw)
+{
+ u16 i;
+
+ DEBUGFUNC("\n");
+
+ /*
+ * Set the adapter_stopped flag so other driver functions stop touching
+ * the hardware
+ */
+ hw->adapter_stopped = true;
+
+ /* Disable the receive unit */
+ TCALL(hw, mac.ops.disable_rx);
+
+ /* Set interrupt mask to stop interrupts from being generated */
+ txgbe_intr_disable(hw, TXGBE_INTR_ALL);
+
+ /* Clear any pending interrupts, flush previous writes */
+ wr32(hw, TXGBE_PX_MISC_IC, 0xffffffff);
+ wr32(hw, TXGBE_BME_CTL, 0x3);
+
+ /* Disable the transmit unit. Each queue must be disabled. */
+ for (i = 0; i < hw->mac.max_tx_queues; i++) {
+ wr32m(hw, TXGBE_PX_TR_CFG(i),
+ TXGBE_PX_TR_CFG_SWFLSH | TXGBE_PX_TR_CFG_ENABLE,
+ TXGBE_PX_TR_CFG_SWFLSH);
+ }
+
+ /* Disable the receive unit by stopping each queue */
+ for (i = 0; i < hw->mac.max_rx_queues; i++) {
+ wr32m(hw, TXGBE_PX_RR_CFG(i),
+ TXGBE_PX_RR_CFG_RR_EN, 0);
+ }
+
+ /* flush all queues disables */
+ TXGBE_WRITE_FLUSH(hw);
+
+ /*
+ * Prevent the PCI-E bus from hanging by disabling PCI-E master
+ * access and verify no pending requests
+ */
+ return txgbe_disable_pcie_master(hw);
+}
+
+/**
+ * txgbe_led_on - Turns on the software controllable LEDs.
+ * @hw: pointer to hardware structure
+ * @index: led number to turn on
+ **/
+s32 txgbe_led_on(struct txgbe_hw *hw, u32 index)
+{
+ u32 led_reg = rd32(hw, TXGBE_CFG_LED_CTL);
+ u16 value = 0;
+ DEBUGFUNC("\n");
+
+ if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_XAUI) {
+ txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF021, &value);
+ txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF021, value | 0x3);
+ txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF022, &value);
+ txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF022, value | 0x3);
+ txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF023, &value);
+ txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF023, value | 0x3);
+ }
+ /* To turn on the LED, set mode to ON. */
+ led_reg |= index | (index << TXGBE_CFG_LED_CTL_LINK_OD_SHIFT);
+ wr32(hw, TXGBE_CFG_LED_CTL, led_reg);
+ TXGBE_WRITE_FLUSH(hw);
+
+ return 0;
+}
+
+/**
+ * txgbe_led_off - Turns off the software controllable LEDs.
+ * @hw: pointer to hardware structure
+ * @index: led number to turn off
+ **/
+s32 txgbe_led_off(struct txgbe_hw *hw, u32 index)
+{
+ u32 led_reg = rd32(hw, TXGBE_CFG_LED_CTL);
+ u16 value = 0;
+ DEBUGFUNC("\n");
+
+ if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_XAUI) {
+ txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF021, &value);
+ txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF021, value & 0xFFFC);
+ txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF022, &value);
+ txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF022, value & 0xFFFC);
+ txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF023, &value);
+ txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xF023, value & 0xFFFC);
+ }
+
+ /* To turn off the LED, set mode to OFF. */
+ led_reg &= ~(index << TXGBE_CFG_LED_CTL_LINK_OD_SHIFT);
+ led_reg |= index;
+ wr32(hw, TXGBE_CFG_LED_CTL, led_reg);
+ TXGBE_WRITE_FLUSH(hw);
+ return 0;
+}
+
+/**
+ * txgbe_get_eeprom_semaphore - Get hardware semaphore
+ * @hw: pointer to hardware structure
+ *
+ * Sets the hardware semaphores so EEPROM access can occur for bit-bang method
+ **/
+STATIC s32 txgbe_get_eeprom_semaphore(struct txgbe_hw *hw)
+{
+ s32 status = TXGBE_ERR_EEPROM;
+ u32 timeout = 2000;
+ u32 i;
+ u32 swsm;
+
+ /* Get SMBI software semaphore between device drivers first */
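+ /* worst case wait: 2000 polls x 50us = ~100ms */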
+ for (i = 0; i < timeout; i++) {
+ /*
+ * If the SMBI bit is 0 when we read it, then the bit will be
+ * set and we have the semaphore
+ */
+ swsm = rd32(hw, TXGBE_MIS_SWSM);
+ if (!(swsm & TXGBE_MIS_SWSM_SMBI)) {
+ status = 0;
+ break;
+ }
+ usec_delay(50);
+ }
+
+ if (i == timeout) {
+ DEBUGOUT("Driver can't access the Eeprom - SMBI Semaphore "
+ "not granted.\n");
+ /*
+ * this release is particularly important because our attempts
+ * above to get the semaphore may have succeeded, and if there
+ * was a timeout, we should unconditionally clear the semaphore
+ * bits to free the driver to make progress
+ */
+ txgbe_release_eeprom_semaphore(hw);
+
+ usec_delay(50);
+ /*
+ * one last try
+ * If the SMBI bit is 0 when we read it, then the bit will be
+ * set and we have the semaphore
+ */
+ swsm = rd32(hw, TXGBE_MIS_SWSM);
+ if (!(swsm & TXGBE_MIS_SWSM_SMBI))
+ status = 0;
+ }
+
+ /* Now get the semaphore between SW/FW through the SWESMBI bit */
+ if (status == 0) {
+ for (i = 0; i < timeout; i++) {
+ if (txgbe_check_mng_access(hw)) {
+ /* Set the SW EEPROM semaphore bit to request access */
+ wr32m(hw, TXGBE_MNG_SW_SM,
+ TXGBE_MNG_SW_SM_SM, TXGBE_MNG_SW_SM_SM);
+
+ /*
+ * If we set the bit successfully then we got
+ * semaphore.
+ */
+ swsm = rd32(hw, TXGBE_MNG_SW_SM);
+ if (swsm & TXGBE_MNG_SW_SM_SM)
+ break;
+ }
+ usec_delay(50);
+ }
+
+ /*
+ * Release semaphores and return error if SW EEPROM semaphore
+ * was not granted because we don't have access to the EEPROM
+ */
+ if (i >= timeout) {
+ ERROR_REPORT1(TXGBE_ERROR_POLLING,
+ "SWESMBI Software EEPROM semaphore not granted.\n");
+ txgbe_release_eeprom_semaphore(hw);
+ status = TXGBE_ERR_EEPROM;
+ }
+ } else {
+ ERROR_REPORT1(TXGBE_ERROR_POLLING,
+ "Software semaphore SMBI between device drivers "
+ "not granted.\n");
+ }
+
+ return status;
+}
+
+/**
+ * txgbe_release_eeprom_semaphore - Release hardware semaphore
+ * @hw: pointer to hardware structure
+ *
+ * This function clears hardware semaphore bits.
+ **/
+STATIC void txgbe_release_eeprom_semaphore(struct txgbe_hw *hw)
+{
+ if (txgbe_check_mng_access(hw)) {
+ wr32m(hw, TXGBE_MNG_SW_SM,
+ TXGBE_MNG_SW_SM_SM, 0);
+ wr32m(hw, TXGBE_MIS_SWSM,
+ TXGBE_MIS_SWSM_SMBI, 0);
+ TXGBE_WRITE_FLUSH(hw);
+ }
+}
+
+/**
+ * txgbe_validate_mac_addr - Validate MAC address
+ * @mac_addr: pointer to MAC address.
+ *
+ * Tests a MAC address to ensure it is a valid Individual Address
+ **/
+s32 txgbe_validate_mac_addr(u8 *mac_addr)
+{
+ s32 status = 0;
+
+ DEBUGFUNC("\n");
+
+ /* Make sure it is not a multicast address */
+ if (TXGBE_IS_MULTICAST(mac_addr)) {
+ DEBUGOUT("MAC address is multicast\n");
+ status = TXGBE_ERR_INVALID_MAC_ADDR;
+ /* Not a broadcast address */
+ } else if (TXGBE_IS_BROADCAST(mac_addr)) {
+ DEBUGOUT("MAC address is broadcast\n");
+ status = TXGBE_ERR_INVALID_MAC_ADDR;
+ /* Reject the zero address */
+ } else if (mac_addr[0] == 0 && mac_addr[1] == 0 && mac_addr[2] == 0 &&
+ mac_addr[3] == 0 && mac_addr[4] == 0 && mac_addr[5] == 0) {
+ DEBUGOUT("MAC address is all zeros\n");
+ status = TXGBE_ERR_INVALID_MAC_ADDR;
+ }
+ return status;
+}
+
+/**
+ * txgbe_set_rar - Set Rx address register
+ * @hw: pointer to hardware structure
+ * @index: Receive address register to write
+ * @addr: Address to put into receive address register
+ * @vmdq: VMDq "set" or "pool" index
+ * @enable_addr: set flag that address is active
+ *
+ * Puts an ethernet address into a receive address register.
+ **/
+s32 txgbe_set_rar(struct txgbe_hw *hw, u32 index, u8 *addr, u64 pools,
+ u32 enable_addr)
+{
+ u32 rar_low, rar_high;
+ u32 rar_entries = hw->mac.num_rar_entries;
+
+ DEBUGFUNC("\n");
+
+ /* Make sure we are using a valid rar index range */
+ if (index >= rar_entries) {
+ ERROR_REPORT2(TXGBE_ERROR_ARGUMENT,
+ "RAR index %d is out of range.\n", index);
+ return TXGBE_ERR_INVALID_ARGUMENT;
+ }
+
+ /* select the MAC address */
+ wr32(hw, TXGBE_PSR_MAC_SWC_IDX, index);
+
+ /* setup VMDq pool mapping */
+ wr32(hw, TXGBE_PSR_MAC_SWC_VM_L, pools & 0xFFFFFFFF);
+ wr32(hw, TXGBE_PSR_MAC_SWC_VM_H, pools >> 32);
+
+ /*
+ * HW expects these in little endian so we reverse the byte
+ * order from network order (big endian) to little endian
+ *
+ * Some parts put the VMDq setting in the extra RAH bits,
+ * so save everything except the lower 16 bits that hold part
+ * of the address and the address valid bit.
+ */
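+ /* e.g. addr 00:11:22:33:44:55 -> rar_low = 0x22334455, rar_high[15:0] = 0x0011 */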
+ rar_low = ((u32)addr[5] |
+ ((u32)addr[4] << 8) |
+ ((u32)addr[3] << 16) |
+ ((u32)addr[2] << 24));
+ rar_high = ((u32)addr[1] |
+ ((u32)addr[0] << 8));
+ if (enable_addr != 0)
+ rar_high |= TXGBE_PSR_MAC_SWC_AD_H_AV;
+
+ wr32(hw, TXGBE_PSR_MAC_SWC_AD_L, rar_low);
+ wr32m(hw, TXGBE_PSR_MAC_SWC_AD_H,
+ (TXGBE_PSR_MAC_SWC_AD_H_AD(~0) |
+ TXGBE_PSR_MAC_SWC_AD_H_ADTYPE(~0) |
+ TXGBE_PSR_MAC_SWC_AD_H_AV),
+ rar_high);
+
+ return 0;
+}
+
+/**
+ * txgbe_clear_rar - Remove Rx address register
+ * @hw: pointer to hardware structure
+ * @index: Receive address register to write
+ *
+ * Clears an ethernet address from a receive address register.
+ **/
+s32 txgbe_clear_rar(struct txgbe_hw *hw, u32 index)
+{
+ u32 rar_entries = hw->mac.num_rar_entries;
+
+ DEBUGFUNC("\n");
+
+ /* Make sure we are using a valid rar index range */
+ if (index >= rar_entries) {
+ ERROR_REPORT2(TXGBE_ERROR_ARGUMENT,
+ "RAR index %d is out of range.\n", index);
+ return TXGBE_ERR_INVALID_ARGUMENT;
+ }
+
+ /*
+ * Some parts put the VMDq setting in the extra RAH bits,
+ * so save everything except the lower 16 bits that hold part
+ * of the address and the address valid bit.
+ */
+ wr32(hw, TXGBE_PSR_MAC_SWC_IDX, index);
+
+ wr32(hw, TXGBE_PSR_MAC_SWC_VM_L, 0);
+ wr32(hw, TXGBE_PSR_MAC_SWC_VM_H, 0);
+
+ wr32(hw, TXGBE_PSR_MAC_SWC_AD_L, 0);
+ wr32m(hw, TXGBE_PSR_MAC_SWC_AD_H,
+ (TXGBE_PSR_MAC_SWC_AD_H_AD(~0) |
+ TXGBE_PSR_MAC_SWC_AD_H_ADTYPE(~0) |
+ TXGBE_PSR_MAC_SWC_AD_H_AV),
+ 0);
+
+ return 0;
+}
+
+/**
+ * txgbe_init_rx_addrs - Initializes receive address filters.
+ * @hw: pointer to hardware structure
+ *
+ * Places the MAC address in receive address register 0 and clears the rest
+ * of the receive address registers. Clears the multicast table. Assumes
+ * the receiver is in reset when the routine is called.
+ **/
+s32 txgbe_init_rx_addrs(struct txgbe_hw *hw)
+{
+ u32 i;
+ u32 rar_entries = hw->mac.num_rar_entries;
+ u32 psrctl;
+
+ DEBUGFUNC("\n");
+
+ /*
+ * If the current mac address is valid, assume it is a software override
+ * to the permanent address.
+ * Otherwise, use the permanent address from the eeprom.
+ */
+ if (txgbe_validate_mac_addr(hw->mac.addr) ==
+ TXGBE_ERR_INVALID_MAC_ADDR) {
+ /* Get the MAC address from the RAR0 for later reference */
+ TCALL(hw, mac.ops.get_mac_addr, hw->mac.addr);
+
+ DEBUGOUT3(" Keeping Current RAR0 Addr =%.2X %.2X %.2X %.2X %.2X %.2X\n",
+ hw->mac.addr[0], hw->mac.addr[1],
+ hw->mac.addr[2], hw->mac.addr[3],
+ hw->mac.addr[4], hw->mac.addr[5]);
+ } else {
+ /* Setup the receive address. */
+ DEBUGOUT("Overriding MAC Address in RAR[0]\n");
+ DEBUGOUT3(" New MAC Addr =%.2X %.2X %.2X %.2X %.2X %.2X\n",
+ hw->mac.addr[0], hw->mac.addr[1],
+ hw->mac.addr[2], hw->mac.addr[3],
+ hw->mac.addr[4], hw->mac.addr[5]);
+
+ TCALL(hw, mac.ops.set_rar, 0, hw->mac.addr, 0,
+ TXGBE_PSR_MAC_SWC_AD_H_AV);
+
+ /* clear VMDq pool/queue selection for RAR 0 */
+ TCALL(hw, mac.ops.clear_vmdq, 0, TXGBE_CLEAR_VMDQ_ALL);
+ }
+ hw->addr_ctrl.overflow_promisc = 0;
+
+ hw->addr_ctrl.rar_used_count = 1;
+
+ /* Zero out the other receive addresses. */
+ DEBUGOUT1("Clearing RAR[1-%d]\n", rar_entries - 1);
+ for (i = 1; i < rar_entries; i++) {
+ wr32(hw, TXGBE_PSR_MAC_SWC_IDX, i);
+ wr32(hw, TXGBE_PSR_MAC_SWC_AD_L, 0);
+ wr32(hw, TXGBE_PSR_MAC_SWC_AD_H, 0);
+ }
+
+ /* Clear the MTA */
+ hw->addr_ctrl.mta_in_use = 0;
+ psrctl = rd32(hw, TXGBE_PSR_CTL);
+ psrctl &= ~(TXGBE_PSR_CTL_MO | TXGBE_PSR_CTL_MFE);
+ psrctl |= hw->mac.mc_filter_type << TXGBE_PSR_CTL_MO_SHIFT;
+ wr32(hw, TXGBE_PSR_CTL, psrctl);
+ DEBUGOUT(" Clearing MTA\n");
+ for (i = 0; i < hw->mac.mcft_size; i++)
+ wr32(hw, TXGBE_PSR_MC_TBL(i), 0);
+
+ TCALL(hw, mac.ops.init_uta_tables);
+
+ return 0;
+}
+
+/**
+ * txgbe_add_uc_addr - Adds a secondary unicast address.
+ * @hw: pointer to hardware structure
+ * @addr: new address
+ * @vmdq: VMDq pool to associate with the address
+ *
+ * Adds it to unused receive address register or goes into promiscuous mode.
+ **/
+void txgbe_add_uc_addr(struct txgbe_hw *hw, u8 *addr, u32 vmdq)
+{
+ u32 rar_entries = hw->mac.num_rar_entries;
+ u32 rar;
+
+ DEBUGFUNC("\n");
+
+ DEBUGOUT6(" UC Addr = %.2X %.2X %.2X %.2X %.2X %.2X\n",
+ addr[0], addr[1], addr[2], addr[3], addr[4], addr[5]);
+
+ /*
+ * Place this address in the RAR if there is room,
+ * else put the controller into promiscuous mode
+ */
+ if (hw->addr_ctrl.rar_used_count < rar_entries) {
+ rar = hw->addr_ctrl.rar_used_count;
+ TCALL(hw, mac.ops.set_rar, rar, addr, vmdq,
+ TXGBE_PSR_MAC_SWC_AD_H_AV);
+ DEBUGOUT1("Added a secondary address to RAR[%d]\n", rar);
+ hw->addr_ctrl.rar_used_count++;
+ } else {
+ hw->addr_ctrl.overflow_promisc++;
+ }
+
+ DEBUGOUT("txgbe_add_uc_addr Complete\n");
+}
+
+/**
+ * txgbe_update_uc_addr_list - Updates MAC list of secondary addresses
+ * @hw: pointer to hardware structure
+ * @addr_list: the list of new addresses
+ * @addr_count: number of addresses
+ * @next: iterator function to walk the address list
+ *
+ * The given list replaces any existing list. Clears the secondary addrs from
+ * receive address registers. Uses unused receive address registers for the
+ * first secondary addresses, and falls back to promiscuous mode as needed.
+ *
+ * Drivers using secondary unicast addresses must set user_set_promisc when
+ * manually putting the device into promiscuous mode.
+ **/
+s32 txgbe_update_uc_addr_list(struct txgbe_hw *hw, u8 *addr_list,
+ u32 addr_count, txgbe_mc_addr_itr next)
+{
+ u8 *addr;
+ u32 i;
+ u32 old_promisc_setting = hw->addr_ctrl.overflow_promisc;
+ u32 uc_addr_in_use;
+ u32 vmdq;
+
+ DEBUGFUNC("\n");
+
+ /*
+ * Clear accounting of old secondary address list,
+ * don't count RAR[0]
+ */
+ uc_addr_in_use = hw->addr_ctrl.rar_used_count - 1;
+ hw->addr_ctrl.rar_used_count -= uc_addr_in_use;
+ hw->addr_ctrl.overflow_promisc = 0;
+
+ /* Zero out the other receive addresses */
+ DEBUGOUT1("Clearing RAR[1-%d]\n", uc_addr_in_use+1);
+ for (i = 0; i < uc_addr_in_use; i++) {
+ wr32(hw, TXGBE_PSR_MAC_SWC_IDX, 1+i);
+ wr32(hw, TXGBE_PSR_MAC_SWC_AD_L, 0);
+ wr32(hw, TXGBE_PSR_MAC_SWC_AD_H, 0);
+ }
+
+ /* Add the new addresses */
+ for (i = 0; i < addr_count; i++) {
+ DEBUGOUT(" Adding the secondary addresses:\n");
+ addr = next(hw, &addr_list, &vmdq);
+ txgbe_add_uc_addr(hw, addr, vmdq);
+ }
+
+ if (hw->addr_ctrl.overflow_promisc) {
+ /* enable promisc if not already in overflow or set by user */
+ if (!old_promisc_setting && !hw->addr_ctrl.user_set_promisc) {
+ DEBUGOUT(" Entering address overflow promisc mode\n");
+ wr32m(hw, TXGBE_PSR_CTL,
+ TXGBE_PSR_CTL_UPE, TXGBE_PSR_CTL_UPE);
+ }
+ } else {
+ /* only disable if set by overflow, not by user */
+ if (old_promisc_setting && !hw->addr_ctrl.user_set_promisc) {
+ DEBUGOUT(" Leaving address overflow promisc mode\n");
+ wr32m(hw, TXGBE_PSR_CTL,
+ TXGBE_PSR_CTL_UPE, 0);
+ }
+ }
+
+ DEBUGOUT("txgbe_update_uc_addr_list Complete\n");
+ return 0;
+}
+
+/**
+ * txgbe_mta_vector - Determines bit-vector in multicast table to set
+ * @hw: pointer to hardware structure
+ * @mc_addr: the multicast address
+ *
+ * Extracts the 12 bits from a multicast address that determine which
+ * bit-vector to set in the multicast table. The hardware uses 12 bits of
+ * each incoming rx multicast address to determine the bit-vector to check
+ * in the MTA. Which of the 4 combinations of 12 bits the hardware uses is
+ * set by the MO field of the MCSTCTRL. The MO field is set during
+ * initialization to mc_filter_type.
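+ * For example, with mc_filter_type 0 the address 01:00:5e:00:00:01 yields
+ * vector (0x00 >> 4) | (0x01 << 4) = 0x010.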
+ **/
+STATIC s32 txgbe_mta_vector(struct txgbe_hw *hw, u8 *mc_addr)
+{
+ u32 vector = 0;
+
+ DEBUGFUNC("\n");
+
+ switch (hw->mac.mc_filter_type) {
+ case 0: /* use bits [47:36] of the address */
+ vector = ((mc_addr[4] >> 4) | (((u16)mc_addr[5]) << 4));
+ break;
+ case 1: /* use bits [46:35] of the address */
+ vector = ((mc_addr[4] >> 3) | (((u16)mc_addr[5]) << 5));
+ break;
+ case 2: /* use bits [45:34] of the address */
+ vector = ((mc_addr[4] >> 2) | (((u16)mc_addr[5]) << 6));
+ break;
+ case 3: /* use bits [43:32] of the address */
+ vector = ((mc_addr[4]) | (((u16)mc_addr[5]) << 8));
+ break;
+ default: /* Invalid mc_filter_type */
+ DEBUGOUT("MC filter type param set incorrectly\n");
+ ASSERT(0);
+ break;
+ }
+
+ /* vector can only be 12-bits or boundary will be exceeded */
+ vector &= 0xFFF;
+ return vector;
+}
+
+/**
+ * txgbe_set_mta - Set bit-vector in multicast table
+ * @hw: pointer to hardware structure
+ * @mc_addr: multicast address to hash into the table
+ *
+ * Sets the bit-vector in the multicast table.
+ **/
+void txgbe_set_mta(struct txgbe_hw *hw, u8 *mc_addr)
+{
+ u32 vector;
+ u32 vector_bit;
+ u32 vector_reg;
+
+ DEBUGFUNC("\n");
+
+ hw->addr_ctrl.mta_in_use++;
+
+ vector = txgbe_mta_vector(hw, mc_addr);
+ DEBUGOUT1(" bit-vector = 0x%03X\n", vector);
+
+ /*
+ * The MTA is a register array of 128 32-bit registers. It is treated
+ * like an array of 4096 bits. We want to set bit
+ * BitArray[vector_value]. So we figure out what register the bit is
+ * in, read it, OR in the new bit, then write back the new value. The
+ * register is determined by the upper 7 bits of the vector value and
+ * the bit within that register are determined by the lower 5 bits of
+ * the value.
+ */
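+ /* e.g. vector 0x8A3: register (0x8A3 >> 5) & 0x7F = 0x45, bit 0x8A3 & 0x1F = 3 */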
+ vector_reg = (vector >> 5) & 0x7F;
+ vector_bit = vector & 0x1F;
+ hw->mac.mta_shadow[vector_reg] |= (1 << vector_bit);
+}
+
+/**
+ * txgbe_update_mc_addr_list - Updates MAC list of multicast addresses
+ * @hw: pointer to hardware structure
+ * @mc_addr_list: the list of new multicast addresses
+ * @mc_addr_count: number of addresses
+ * @next: iterator function to walk the multicast address list
+ * @clear: flag, when set clears the table beforehand
+ *
+ * When the clear flag is set, the given list replaces any existing list.
+ * Hashes the given addresses into the multicast table.
+ **/
+s32 txgbe_update_mc_addr_list(struct txgbe_hw *hw, u8 *mc_addr_list,
+ u32 mc_addr_count, txgbe_mc_addr_itr next,
+ bool clear)
+{
+ u32 i;
+ u32 vmdq;
+ u32 psrctl;
+
+ DEBUGFUNC("\n");
+
+ /*
+ * Set the new number of MC addresses that we are being requested to
+ * use.
+ */
+ hw->addr_ctrl.num_mc_addrs = mc_addr_count;
+ hw->addr_ctrl.mta_in_use = 0;
+
+ /* Clear mta_shadow */
+ if (clear) {
+ DEBUGOUT(" Clearing MTA\n");
+ memset(&hw->mac.mta_shadow, 0, sizeof(hw->mac.mta_shadow));
+ }
+
+ /* Update mta_shadow */
+ for (i = 0; i < mc_addr_count; i++) {
+ DEBUGOUT(" Adding the multicast addresses:\n");
+ txgbe_set_mta(hw, next(hw, &mc_addr_list, &vmdq));
+ }
+
+ /* Enable mta */
+ for (i = 0; i < hw->mac.mcft_size; i++)
+ wr32a(hw, TXGBE_PSR_MC_TBL(0), i,
+ hw->mac.mta_shadow[i]);
+
+ if (hw->addr_ctrl.mta_in_use > 0) {
+ psrctl = rd32(hw, TXGBE_PSR_CTL);
+ psrctl &= ~(TXGBE_PSR_CTL_MO | TXGBE_PSR_CTL_MFE);
+ psrctl |= TXGBE_PSR_CTL_MFE |
+ (hw->mac.mc_filter_type << TXGBE_PSR_CTL_MO_SHIFT);
+ wr32(hw, TXGBE_PSR_CTL, psrctl);
+ }
+
+ DEBUGOUT("txgbe_update_mc_addr_list Complete\n");
+ return 0;
+}
+
+/**
+ * txgbe_enable_mc - Enable multicast address in RAR
+ * @hw: pointer to hardware structure
+ *
+ * Enables multicast address in RAR and the use of the multicast hash table.
+ **/
+s32 txgbe_enable_mc(struct txgbe_hw *hw)
+{
+ struct txgbe_addr_filter_info *a = &hw->addr_ctrl;
+ u32 psrctl;
+
+ DEBUGFUNC("\n");
+
+ if (a->mta_in_use > 0) {
+ psrctl = rd32(hw, TXGBE_PSR_CTL);
+ psrctl &= ~(TXGBE_PSR_CTL_MO | TXGBE_PSR_CTL_MFE);
+ psrctl |= TXGBE_PSR_CTL_MFE |
+ (hw->mac.mc_filter_type << TXGBE_PSR_CTL_MO_SHIFT);
+ wr32(hw, TXGBE_PSR_CTL, psrctl);
+ }
+
+ return 0;
+}
+
+/**
+ * txgbe_disable_mc - Disable multicast address in RAR
+ * @hw: pointer to hardware structure
+ *
+ * Disables multicast address in RAR and the use of the multicast hash table.
+ **/
+s32 txgbe_disable_mc(struct txgbe_hw *hw)
+{
+ struct txgbe_addr_filter_info *a = &hw->addr_ctrl;
+ u32 psrctl;
+ DEBUGFUNC("\n");
+
+ if (a->mta_in_use > 0) {
+ psrctl = rd32(hw, TXGBE_PSR_CTL);
+ psrctl &= ~(TXGBE_PSR_CTL_MO | TXGBE_PSR_CTL_MFE);
+ psrctl |= hw->mac.mc_filter_type << TXGBE_PSR_CTL_MO_SHIFT;
+ wr32(hw, TXGBE_PSR_CTL, psrctl);
+ }
+
+ return 0;
+}
+
+/**
+ * txgbe_fc_enable - Enable flow control
+ * @hw: pointer to hardware structure
+ *
+ * Enable flow control according to the current settings.
+ **/
+s32 txgbe_fc_enable(struct txgbe_hw *hw)
+{
+ s32 ret_val = 0;
+ u32 mflcn_reg, fccfg_reg;
+ u32 reg;
+ u32 fcrtl, fcrth;
+ int i;
+
+ DEBUGFUNC("\n");
+
+ /* Validate the water mark configuration */
+ if (!hw->fc.pause_time) {
+ ret_val = TXGBE_ERR_INVALID_LINK_SETTINGS;
+ goto out;
+ }
+
+ /* Low water mark of zero causes XOFF floods */
+ for (i = 0; i < TXGBE_DCB_MAX_TRAFFIC_CLASS; i++) {
+ if ((hw->fc.current_mode & txgbe_fc_tx_pause) &&
+ hw->fc.high_water[i]) {
+ if (!hw->fc.low_water[i] ||
+ hw->fc.low_water[i] >= hw->fc.high_water[i]) {
+ DEBUGOUT("Invalid water mark configuration\n");
+ ret_val = TXGBE_ERR_INVALID_LINK_SETTINGS;
+ goto out;
+ }
+ }
+ }
+
+ /* Negotiate the fc mode to use */
+ txgbe_fc_autoneg(hw);
+
+ /* Disable any previous flow control settings */
+ mflcn_reg = rd32(hw, TXGBE_MAC_RX_FLOW_CTRL);
+ mflcn_reg &= ~(TXGBE_MAC_RX_FLOW_CTRL_PFCE |
+ TXGBE_MAC_RX_FLOW_CTRL_RFE);
+
+ fccfg_reg = rd32(hw, TXGBE_RDB_RFCC);
+ fccfg_reg &= ~(TXGBE_RDB_RFCC_RFCE_802_3X |
+ TXGBE_RDB_RFCC_RFCE_PRIORITY);
+
+ /*
+ * The possible values of fc.current_mode are:
+ * 0: Flow control is completely disabled
+ * 1: Rx flow control is enabled (we can receive pause frames,
+ * but not send pause frames).
+ * 2: Tx flow control is enabled (we can send pause frames but
+ * we do not support receiving pause frames).
+ * 3: Both Rx and Tx flow control (symmetric) are enabled.
+ * other: Invalid.
+ */
+ switch (hw->fc.current_mode) {
+ case txgbe_fc_none:
+ /*
+ * Flow control is disabled by software override or autoneg.
+ * The code below will actually disable it in the HW.
+ */
+ break;
+ case txgbe_fc_rx_pause:
+ /*
+ * Rx Flow control is enabled and Tx Flow control is
+ * disabled by software override. Since there really
+ * isn't a way to advertise that we are capable of RX
+ * Pause ONLY, we will advertise that we support both
+ * symmetric and asymmetric Rx PAUSE. Later, we will
+ * disable the adapter's ability to send PAUSE frames.
+ */
+ mflcn_reg |= TXGBE_MAC_RX_FLOW_CTRL_RFE;
+ break;
+ case txgbe_fc_tx_pause:
+ /*
+ * Tx Flow control is enabled, and Rx Flow control is
+ * disabled by software override.
+ */
+ fccfg_reg |= TXGBE_RDB_RFCC_RFCE_802_3X;
+ break;
+ case txgbe_fc_full:
+ /* Flow control (both Rx and Tx) is enabled by SW override. */
+ mflcn_reg |= TXGBE_MAC_RX_FLOW_CTRL_RFE;
+ fccfg_reg |= TXGBE_RDB_RFCC_RFCE_802_3X;
+ break;
+ default:
+ ERROR_REPORT1(TXGBE_ERROR_ARGUMENT,
+ "Flow control param set incorrectly\n");
+ ret_val = TXGBE_ERR_CONFIG;
+ goto out;
+ }
+
+ /* Set 802.3x based flow control settings. */
+ wr32(hw, TXGBE_MAC_RX_FLOW_CTRL, mflcn_reg);
+ wr32(hw, TXGBE_RDB_RFCC, fccfg_reg);
+
+ /* Set up and enable Rx high/low water mark thresholds, enable XON. */
+ for (i = 0; i < TXGBE_DCB_MAX_TRAFFIC_CLASS; i++) {
+ if ((hw->fc.current_mode & txgbe_fc_tx_pause) &&
+ hw->fc.high_water[i]) {
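+ /* water marks appear to be kept in KB; << 10 scales them to bytes (an assumption inferred from the 24KB fallback below) */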
+ fcrtl = (hw->fc.low_water[i] << 10) |
+ TXGBE_RDB_RFCL_XONE;
+ wr32(hw, TXGBE_RDB_RFCL(i), fcrtl);
+ fcrth = (hw->fc.high_water[i] << 10) |
+ TXGBE_RDB_RFCH_XOFFE;
+ } else {
+ wr32(hw, TXGBE_RDB_RFCL(i), 0);
+ /*
+ * In order to prevent Tx hangs when the internal Tx
+ * switch is enabled we must set the high water mark
+ * to the Rx packet buffer size - 24KB. This allows
+ * the Tx switch to function even under heavy Rx
+ * workloads.
+ */
+ fcrth = rd32(hw, TXGBE_RDB_PB_SZ(i)) - 24576;
+ }
+
+ wr32(hw, TXGBE_RDB_RFCH(i), fcrth);
+ }
+
+ /* Configure pause time (2 TCs per register) */
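+ /* e.g. pause_time 0x0680 becomes 0x06800680, one 16-bit copy per TC */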
+ reg = hw->fc.pause_time * 0x00010001;
+ for (i = 0; i < (TXGBE_DCB_MAX_TRAFFIC_CLASS / 2); i++)
+ wr32(hw, TXGBE_RDB_RFCV(i), reg);
+
+ /* Configure flow control refresh threshold value */
+ wr32(hw, TXGBE_RDB_RFCRT, hw->fc.pause_time / 2);
+
+out:
+ return ret_val;
+}
+
+/**
+ * txgbe_negotiate_fc - Negotiate flow control
+ * @hw: pointer to hardware structure
+ * @adv_reg: flow control advertised settings
+ * @lp_reg: link partner's flow control settings
+ * @adv_sym: symmetric pause bit in advertisement
+ * @adv_asm: asymmetric pause bit in advertisement
+ * @lp_sym: symmetric pause bit in link partner advertisement
+ * @lp_asm: asymmetric pause bit in link partner advertisement
+ *
+ * Find the intersection between advertised settings and link partner's
+ * advertised settings
+ **/
+STATIC s32 txgbe_negotiate_fc(struct txgbe_hw *hw, u32 adv_reg, u32 lp_reg,
+ u32 adv_sym, u32 adv_asm, u32 lp_sym, u32 lp_asm)
+{
+ if (!adv_reg || !lp_reg) {
+ ERROR_REPORT3(TXGBE_ERROR_UNSUPPORTED,
+ "Local or link partner's advertised flow control settings are NULL. Local: %x, link partner: %x\n",
+ adv_reg, lp_reg);
+ return TXGBE_ERR_FC_NOT_NEGOTIATED;
+ }
+
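+ /*
+ * Resolution below follows the standard pause table (IEEE 802.3
+ * Annex 28B):
+ * local sym with lp sym -> full (or rx-only if that was requested)
+ * local asm only with lp sym and asm -> tx pause only
+ * local sym and asm with lp asm only -> rx pause only
+ * anything else -> none
+ */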
+ if ((adv_reg & adv_sym) && (lp_reg & lp_sym)) {
+ /*
+ * Now we need to check if the user selected Rx-only
+ * pause frames. In this case, we had to advertise
+ * FULL flow control because we could not advertise RX
+ * ONLY. Hence, we must now check to see if we need to
+ * turn OFF the TRANSMISSION of PAUSE frames.
+ */
+ if (hw->fc.requested_mode == txgbe_fc_full) {
+ hw->fc.current_mode = txgbe_fc_full;
+ DEBUGOUT("Flow Control = FULL.\n");
+ } else {
+ hw->fc.current_mode = txgbe_fc_rx_pause;
+ DEBUGOUT("Flow Control=RX PAUSE frames only\n");
+ }
+ } else if (!(adv_reg & adv_sym) && (adv_reg & adv_asm) &&
+ (lp_reg & lp_sym) && (lp_reg & lp_asm)) {
+ hw->fc.current_mode = txgbe_fc_tx_pause;
+ DEBUGOUT("Flow Control = TX PAUSE frames only.\n");
+ } else if ((adv_reg & adv_sym) && (adv_reg & adv_asm) &&
+ !(lp_reg & lp_sym) && (lp_reg & lp_asm)) {
+ hw->fc.current_mode = txgbe_fc_rx_pause;
+ DEBUGOUT("Flow Control = RX PAUSE frames only.\n");
+ } else {
+ hw->fc.current_mode = txgbe_fc_none;
+ DEBUGOUT("Flow Control = NONE.\n");
+ }
+ return 0;
+}
+
+/**
+ * txgbe_fc_autoneg_fiber - Enable flow control on 1 gig fiber
+ * @hw: pointer to hardware structure
+ *
+ * Enable flow control on 1 gig fiber according to the autonegotiation result.
+ **/
+STATIC s32 txgbe_fc_autoneg_fiber(struct txgbe_hw *hw)
+{
+ u32 pcs_anadv_reg, pcs_lpab_reg;
+ s32 ret_val = TXGBE_ERR_FC_NOT_NEGOTIATED;
+
+ pcs_anadv_reg = txgbe_rd32_epcs(hw, TXGBE_SR_MII_MMD_AN_ADV);
+ pcs_lpab_reg = txgbe_rd32_epcs(hw, TXGBE_SR_MII_MMD_LP_BABL);
+
+ ret_val = txgbe_negotiate_fc(hw, pcs_anadv_reg,
+ pcs_lpab_reg,
+ TXGBE_SR_MII_MMD_AN_ADV_PAUSE_SYM,
+ TXGBE_SR_MII_MMD_AN_ADV_PAUSE_ASM,
+ TXGBE_SR_MII_MMD_AN_ADV_PAUSE_SYM,
+ TXGBE_SR_MII_MMD_AN_ADV_PAUSE_ASM);
+
+ return ret_val;
+}
+
+/**
+ * txgbe_fc_autoneg_backplane - Enable flow control per IEEE clause 37
+ * @hw: pointer to hardware structure
+ *
+ * Enable flow control according to IEEE clause 37.
+ **/
+STATIC s32 txgbe_fc_autoneg_backplane(struct txgbe_hw *hw)
+{
+ u32 anlp1_reg, autoc_reg;
+ s32 ret_val = TXGBE_ERR_FC_NOT_NEGOTIATED;
+
+ /*
+ * Read the 10g AN autoc and LP ability registers and resolve
+ * local flow control settings accordingly
+ */
+ autoc_reg = txgbe_rd32_epcs(hw, TXGBE_SR_AN_MMD_ADV_REG1);
+ anlp1_reg = txgbe_rd32_epcs(hw, TXGBE_SR_AN_MMD_LP_ABL1);
+
+ ret_val = txgbe_negotiate_fc(hw, autoc_reg,
+ anlp1_reg, TXGBE_SR_AN_MMD_ADV_REG1_PAUSE_SYM,
+ TXGBE_SR_AN_MMD_ADV_REG1_PAUSE_ASM,
+ TXGBE_SR_AN_MMD_ADV_REG1_PAUSE_SYM,
+ TXGBE_SR_AN_MMD_ADV_REG1_PAUSE_ASM);
+
+ return ret_val;
+}
+
+/**
+ * txgbe_fc_autoneg_copper - Enable flow control per IEEE clause 37
+ * @hw: pointer to hardware structure
+ *
+ * Enable flow control according to IEEE clause 37.
+ **/
+STATIC s32 txgbe_fc_autoneg_copper(struct txgbe_hw *hw)
+{
+ u8 technology_ability_reg = 0;
+ u8 lp_technology_ability_reg = 0;
+
+ txgbe_get_phy_advertised_pause(hw, &technology_ability_reg);
+ txgbe_get_lp_advertised_pause(hw, &lp_technology_ability_reg);
+
+ return txgbe_negotiate_fc(hw, (u32)technology_ability_reg,
+ (u32)lp_technology_ability_reg,
+ TXGBE_TAF_SYM_PAUSE, TXGBE_TAF_ASM_PAUSE,
+ TXGBE_TAF_SYM_PAUSE, TXGBE_TAF_ASM_PAUSE);
+}
+
+/**
+ * txgbe_fc_autoneg - Configure flow control
+ * @hw: pointer to hardware structure
+ *
+ * Compares our advertised flow control capabilities to those advertised by
+ * our link partner, and determines the proper flow control mode to use.
+ **/
+void txgbe_fc_autoneg(struct txgbe_hw *hw)
+{
+ s32 ret_val = TXGBE_ERR_FC_NOT_NEGOTIATED;
+ u32 speed;
+ bool link_up;
+
+ DEBUGFUNC("\n");
+
+ /*
+ * AN should have completed when the cable was plugged in.
+ * Look for reasons to bail out. Bail out if:
+ * - FC autoneg is disabled, or if
+ * - link is not up.
+ */
+ if (hw->fc.disable_fc_autoneg) {
+ ERROR_REPORT1(TXGBE_ERROR_UNSUPPORTED,
+ "Flow control autoneg is disabled");
+ goto out;
+ }
+
+ TCALL(hw, mac.ops.check_link, &speed, &link_up, false);
+ if (!link_up) {
+ ERROR_REPORT1(TXGBE_ERROR_SOFTWARE, "The link is down");
+ goto out;
+ }
+
+ switch (hw->phy.media_type) {
+ /* Autoneg flow control on fiber adapters */
+ case txgbe_media_type_fiber:
+ if (speed == TXGBE_LINK_SPEED_1GB_FULL)
+ ret_val = txgbe_fc_autoneg_fiber(hw);
+ break;
+
+ /* Autoneg flow control on backplane adapters */
+ case txgbe_media_type_backplane:
+ ret_val = txgbe_fc_autoneg_backplane(hw);
+ break;
+
+ /* Autoneg flow control on copper adapters */
+ case txgbe_media_type_copper:
+ if (txgbe_device_supports_autoneg_fc(hw))
+ ret_val = txgbe_fc_autoneg_copper(hw);
+ break;
+
+ default:
+ break;
+ }
+
+out:
+ if (ret_val == 0) {
+ hw->fc.fc_was_autonegged = true;
+ } else {
+ hw->fc.fc_was_autonegged = false;
+ hw->fc.current_mode = hw->fc.requested_mode;
+ }
+}
+
+/**
+ * txgbe_disable_pcie_master - Disable PCI-express master access
+ * @hw: pointer to hardware structure
+ *
+ * Disables PCI-Express master access and verifies there are no pending
+ * requests. Returns TXGBE_ERR_MASTER_REQUESTS_PENDING if the master
+ * disable bit did not cause master requests to be disabled, or 0 once
+ * master requests are disabled.
+ **/
+s32 txgbe_disable_pcie_master(struct txgbe_hw *hw)
+{
+ s32 status = 0;
+ u32 i;
+ struct txgbe_adapter *adapter = hw->back;
+ unsigned int num_vfs = adapter->num_vfs;
+ u16 dev_ctl;
+ u32 vf_bme_clear = 0;
+
+ DEBUGFUNC("\n");
+
+ /* Always set this bit to ensure any future transactions are blocked */
+ pci_clear_master(((struct txgbe_adapter *)hw->back)->pdev);
+
+ /* Exit if master requests are blocked */
+ if (!(rd32(hw, TXGBE_PX_TRANSACTION_PENDING)) ||
+ TXGBE_REMOVED(hw->hw_addr))
+ goto out;
+
+ /* BME disable handshake will not be finished if any VF BME is 0 */
+ for (i = 0; i < num_vfs; i++) {
+ struct pci_dev *vfdev = adapter->vfinfo[i].vfdev;
+ if (!vfdev)
+ continue;
+ pci_read_config_word(vfdev, 0x4, &dev_ctl);
+ if ((dev_ctl & 0x4) == 0) {
+ vf_bme_clear = 1;
+ break;
+ }
+ }
+
+ /* Poll for master request bit to clear */
+ for (i = 0; i < TXGBE_PCI_MASTER_DISABLE_TIMEOUT; i++) {
+ usec_delay(100);
+ if (!(rd32(hw, TXGBE_PX_TRANSACTION_PENDING)))
+ goto out;
+ }
+
+ if (!vf_bme_clear) {
+ ERROR_REPORT1(TXGBE_ERROR_POLLING,
+ "PCIe transaction pending bit did not clear.\n");
+ status = TXGBE_ERR_MASTER_REQUESTS_PENDING;
+ }
+
+out:
+ return status;
+}
+
+
+/**
+ * txgbe_acquire_swfw_sync - Acquire SWFW semaphore
+ * @hw: pointer to hardware structure
+ * @mask: Mask to specify which semaphore to acquire
+ *
+ * Acquires the SWFW semaphore through the GSSR register for the specified
+ * function (CSR, PHY0, PHY1, EEPROM, Flash)
+ **/
+s32 txgbe_acquire_swfw_sync(struct txgbe_hw *hw, u32 mask)
+{
+ u32 gssr = 0;
+ u32 swmask = mask;
+ u32 fwmask = mask << 16;
+ u32 timeout = 200;
+ u32 i;
+
+ for (i = 0; i < timeout; i++) {
+ /*
+ * SW NVM semaphore bit is used for access to all
+ * SW_FW_SYNC bits (not just NVM)
+ */
+ if (txgbe_get_eeprom_semaphore(hw))
+ return TXGBE_ERR_SWFW_SYNC;
+
+ if (txgbe_check_mng_access(hw)) {
+ gssr = rd32(hw, TXGBE_MNG_SWFW_SYNC);
+ if (!(gssr & (fwmask | swmask))) {
+ gssr |= swmask;
+ wr32(hw, TXGBE_MNG_SWFW_SYNC, gssr);
+ txgbe_release_eeprom_semaphore(hw);
+ return 0;
+ } else {
+ /* Resource is currently in use by FW or SW */
+ txgbe_release_eeprom_semaphore(hw);
+ msec_delay(5);
+ }
+ }
+ }
+
+ /* If time expired clear the bits holding the lock and retry */
+ if (gssr & (fwmask | swmask))
+ txgbe_release_swfw_sync(hw, gssr & (fwmask | swmask));
+
+ msec_delay(5);
+ return TXGBE_ERR_SWFW_SYNC;
+}
+
+/**
+ * txgbe_release_swfw_sync - Release SWFW semaphore
+ * @hw: pointer to hardware structure
+ * @mask: Mask to specify which semaphore to release
+ *
+ * Releases the SWFW semaphore through the GSSR register for the specified
+ * function (CSR, PHY0, PHY1, EEPROM, Flash)
+ **/
+void txgbe_release_swfw_sync(struct txgbe_hw *hw, u32 mask)
+{
+ txgbe_get_eeprom_semaphore(hw);
+ if (txgbe_check_mng_access(hw))
+ wr32m(hw, TXGBE_MNG_SWFW_SYNC, mask, 0);
+
+ txgbe_release_eeprom_semaphore(hw);
+}
+
+/**
+ * txgbe_disable_sec_rx_path - Stops the receive data path
+ * @hw: pointer to hardware structure
+ *
+ * Stops the receive data path and waits for the HW to internally empty
+ * the Rx security block
+ **/
+s32 txgbe_disable_sec_rx_path(struct txgbe_hw *hw)
+{
+#define TXGBE_MAX_SECRX_POLL 40
+
+ int i;
+ int secrxreg;
+
+ DEBUGFUNC("\n");
+
+ wr32m(hw, TXGBE_RSC_CTL,
+ TXGBE_RSC_CTL_RX_DIS, TXGBE_RSC_CTL_RX_DIS);
+ for (i = 0; i < TXGBE_MAX_SECRX_POLL; i++) {
+ secrxreg = rd32(hw, TXGBE_RSC_ST);
+ if (secrxreg & TXGBE_RSC_ST_RSEC_RDY)
+ break;
+ else
+ /* Use interrupt-safe sleep just in case */
+ usec_delay(1000);
+ }
+
+ /* For informational purposes only */
+ if (i >= TXGBE_MAX_SECRX_POLL)
+ DEBUGOUT("Rx unit being enabled before security "
+ "path fully disabled. Continuing with init.\n");
+
+ return 0;
+}
+
+/**
+ * txgbe_enable_sec_rx_path - Enables the receive data path
+ * @hw: pointer to hardware structure
+ *
+ * Enables the receive data path.
+ **/
+s32 txgbe_enable_sec_rx_path(struct txgbe_hw *hw)
+{
+ DEBUGFUNC("\n");
+
+ wr32m(hw, TXGBE_RSC_CTL,
+ TXGBE_RSC_CTL_RX_DIS, 0);
+ TXGBE_WRITE_FLUSH(hw);
+
+ return 0;
+}
+
+/**
+ * txgbe_get_san_mac_addr_offset - Get SAN MAC address offset from the EEPROM
+ * @hw: pointer to hardware structure
+ * @san_mac_offset: SAN MAC address offset
+ *
+ * Reads the EEPROM location of the SAN MAC address pointer and returns
+ * the value at that location. This is used in both
+ * get and set mac_addr routines.
+ **/
+STATIC s32 txgbe_get_san_mac_addr_offset(struct txgbe_hw *hw,
+ u16 *san_mac_offset)
+{
+ s32 ret_val;
+
+ DEBUGFUNC("\n");
+
+ /*
+ * First read the EEPROM pointer to see if the MAC addresses are
+ * available.
+ */
+ ret_val = TCALL(hw, eeprom.ops.read,
+ hw->eeprom.sw_region_offset + TXGBE_SAN_MAC_ADDR_PTR,
+ san_mac_offset);
+ if (ret_val) {
+ ERROR_REPORT2(TXGBE_ERROR_INVALID_STATE,
+ "eeprom at offset %d failed",
+ TXGBE_SAN_MAC_ADDR_PTR);
+ }
+
+ return ret_val;
+}
+
+/**
+ * txgbe_get_san_mac_addr - SAN MAC address retrieval from the EEPROM
+ * @hw: pointer to hardware structure
+ * @san_mac_addr: SAN MAC address
+ *
+ * Reads the SAN MAC address from the EEPROM, if it's available. This is
+ * per-port, so set_lan_id() must be called before reading the addresses.
+ * set_lan_id() is called by identify_sfp(), but this cannot be relied
+ * upon for non-SFP connections, so we must call it here.
+ **/
+s32 txgbe_get_san_mac_addr(struct txgbe_hw *hw, u8 *san_mac_addr)
+{
+ u16 san_mac_data, san_mac_offset;
+ u8 i;
+ s32 ret_val;
+
+ DEBUGFUNC("\n");
+
+ /*
+ * First read the EEPROM pointer to see if the MAC addresses are
+ * available. If they're not, no point in calling set_lan_id() here.
+ */
+ ret_val = txgbe_get_san_mac_addr_offset(hw, &san_mac_offset);
+ if (ret_val || san_mac_offset == 0 || san_mac_offset == 0xFFFF)
+ goto san_mac_addr_out;
+
+ /* apply the port offset to the address offset */
+ if (hw->bus.func)
+ san_mac_offset += TXGBE_SAN_MAC_ADDR_PORT1_OFFSET;
+ else
+ san_mac_offset += TXGBE_SAN_MAC_ADDR_PORT0_OFFSET;
+ for (i = 0; i < 3; i++) {
+ ret_val = TCALL(hw, eeprom.ops.read, san_mac_offset,
+ &san_mac_data);
+ if (ret_val) {
+ ERROR_REPORT2(TXGBE_ERROR_INVALID_STATE,
+ "eeprom read at offset %d failed",
+ san_mac_offset);
+ goto san_mac_addr_out;
+ }
+ san_mac_addr[i * 2] = (u8)(san_mac_data);
+ san_mac_addr[i * 2 + 1] = (u8)(san_mac_data >> 8);
+ san_mac_offset++;
+ }
+ return 0;
+
+san_mac_addr_out:
+ /*
+ * No addresses available in this EEPROM. It's not an
+ * error though, so just wipe the local address and return.
+ */
+ for (i = 0; i < 6; i++)
+ san_mac_addr[i] = 0xFF;
+ return 0;
+}
+
+/**
+ * txgbe_set_san_mac_addr - Write the SAN MAC address to the EEPROM
+ * @hw: pointer to hardware structure
+ * @san_mac_addr: SAN MAC address
+ *
+ * Write a SAN MAC address to the EEPROM.
+ **/
+s32 txgbe_set_san_mac_addr(struct txgbe_hw *hw, u8 *san_mac_addr)
+{
+ s32 ret_val;
+ u16 san_mac_data, san_mac_offset;
+ u8 i;
+
+ DEBUGFUNC("\n");
+
+ /* Look for SAN mac address pointer. If not defined, return */
+ ret_val = txgbe_get_san_mac_addr_offset(hw, &san_mac_offset);
+ if (ret_val || san_mac_offset == 0 || san_mac_offset == 0xFFFF)
+ return TXGBE_ERR_NO_SAN_ADDR_PTR;
+
+ /* Apply the port offset to the address offset */
+ if (hw->bus.func)
+ san_mac_offset += TXGBE_SAN_MAC_ADDR_PORT1_OFFSET;
+ else
+ san_mac_offset += TXGBE_SAN_MAC_ADDR_PORT0_OFFSET;
+
+ for (i = 0; i < 3; i++) {
+ san_mac_data = (u16)((u16)(san_mac_addr[i * 2 + 1]) << 8);
+ san_mac_data |= (u16)(san_mac_addr[i * 2]);
+ TCALL(hw, eeprom.ops.write, san_mac_offset, san_mac_data);
+ san_mac_offset++;
+ }
+
+ return 0;
+}
+
+/**
+ * txgbe_insert_mac_addr - Find a RAR for this mac address
+ * @hw: pointer to hardware structure
+ * @addr: Address to put into receive address register
+ * @vmdq: VMDq pool to assign
+ *
+ * Puts an ethernet address into a receive address register, or
+ * finds the RAR it is already in; adds to the pool list
+ **/
+s32 txgbe_insert_mac_addr(struct txgbe_hw *hw, u8 *addr, u32 vmdq)
+{
+ static const u32 NO_EMPTY_RAR_FOUND = 0xFFFFFFFF;
+ u32 first_empty_rar = NO_EMPTY_RAR_FOUND;
+ u32 rar;
+ u32 rar_low, rar_high;
+ u32 addr_low, addr_high;
+
+ DEBUGFUNC("\n");
+
+ /* swap bytes for HW little endian */
+ addr_low = addr[5] | (addr[4] << 8)
+ | (addr[3] << 16)
+ | (addr[2] << 24);
+ addr_high = addr[1] | (addr[0] << 8);
+
+ /*
+ * Either find the mac_id in rar or find the first empty space.
+ * rar_highwater points to just after the highest currently used
+ * rar in order to shorten the search. It grows when we add a new
+ * rar to the top.
+ */
+ for (rar = 0; rar < hw->mac.rar_highwater; rar++) {
+ wr32(hw, TXGBE_PSR_MAC_SWC_IDX, rar);
+ rar_high = rd32(hw, TXGBE_PSR_MAC_SWC_AD_H);
+
+ if (((TXGBE_PSR_MAC_SWC_AD_H_AV & rar_high) == 0)
+ && first_empty_rar == NO_EMPTY_RAR_FOUND) {
+ first_empty_rar = rar;
+ } else if ((rar_high & 0xFFFF) == addr_high) {
+ rar_low = rd32(hw, TXGBE_PSR_MAC_SWC_AD_L);
+ if (rar_low == addr_low)
+ break; /* found it already in the rars */
+ }
+ }
+
+ if (rar < hw->mac.rar_highwater) {
+ /* already there so just add to the pool bits */
+ TCALL(hw, mac.ops.set_vmdq, rar, vmdq);
+ } else if (first_empty_rar != NO_EMPTY_RAR_FOUND) {
+ /* stick it into first empty RAR slot we found */
+ rar = first_empty_rar;
+ TCALL(hw, mac.ops.set_rar, rar, addr, vmdq,
+ TXGBE_PSR_MAC_SWC_AD_H_AV);
+ } else if (rar == hw->mac.rar_highwater) {
+ /* add it to the top of the list and inc the highwater mark */
+ TCALL(hw, mac.ops.set_rar, rar, addr, vmdq,
+ TXGBE_PSR_MAC_SWC_AD_H_AV);
+ hw->mac.rar_highwater++;
+ } else if (rar >= hw->mac.num_rar_entries) {
+ return TXGBE_ERR_INVALID_MAC_ADDR;
+ }
+
+ /*
+ * If we found rar[0], make sure the default pool bit (we use pool 0)
+ * remains cleared to be sure default pool packets will get delivered
+ */
+ if (rar == 0)
+ TCALL(hw, mac.ops.clear_vmdq, rar, 0);
+
+ return rar;
+}
+
+/**
+ * txgbe_clear_vmdq - Disassociate a VMDq pool index from a rx address
+ * @hw: pointer to hardware struct
+ * @rar: receive address register index to disassociate
+ * @vmdq: VMDq pool index to remove from the rar
+ **/
+s32 txgbe_clear_vmdq(struct txgbe_hw *hw, u32 rar, u32 vmdq)
+{
+ u32 mpsar_lo, mpsar_hi;
+ u32 rar_entries = hw->mac.num_rar_entries;
+
+ DEBUGFUNC("\n");
+ UNREFERENCED_PARAMETER(vmdq);
+
+ /* Make sure we are using a valid rar index range */
+ if (rar >= rar_entries) {
+ ERROR_REPORT2(TXGBE_ERROR_ARGUMENT,
+ "RAR index %d is out of range.\n", rar);
+ return TXGBE_ERR_INVALID_ARGUMENT;
+ }
+
+ wr32(hw, TXGBE_PSR_MAC_SWC_IDX, rar);
+ mpsar_lo = rd32(hw, TXGBE_PSR_MAC_SWC_VM_L);
+ mpsar_hi = rd32(hw, TXGBE_PSR_MAC_SWC_VM_H);
+
+ if (TXGBE_REMOVED(hw->hw_addr))
+ goto done;
+
+ if (!mpsar_lo && !mpsar_hi)
+ goto done;
+
+ /* was that the last pool using this rar? */
+ if (mpsar_lo == 0 && mpsar_hi == 0 && rar != 0)
+ TCALL(hw, mac.ops.clear_rar, rar);
+done:
+ return 0;
+}
+
+/**
+ * txgbe_set_vmdq - Associate a VMDq pool index with a rx address
+ * @hw: pointer to hardware struct
+ * @rar: receive address register index to associate with a VMDq index
+ * @pool: VMDq pool index
+ **/
+s32 txgbe_set_vmdq(struct txgbe_hw *hw, u32 rar, u32 pool)
+{
+ u32 rar_entries = hw->mac.num_rar_entries;
+
+ DEBUGFUNC("\n");
+ UNREFERENCED_PARAMETER(pool);
+
+ /* Make sure we are using a valid rar index range */
+ if (rar >= rar_entries) {
+ ERROR_REPORT2(TXGBE_ERROR_ARGUMENT,
+ "RAR index %d is out of range.\n", rar);
+ return TXGBE_ERR_INVALID_ARGUMENT;
+ }
+
+ return 0;
+}
+
+/**
+ * txgbe_set_vmdq_san_mac - Associate default VMDq pool index with a rx address
+ * @hw: pointer to hardware struct
+ * @vmdq: VMDq pool index
+ *
+ * This function should only be used in IOV mode. In IOV mode the default
+ * pool is the first pool after the number of VFs advertised, not pool 0.
+ * The MPSAR table needs to be updated for the SAN MAC RAR
+ * [hw->mac.san_mac_rar_index].
+ **/
+s32 txgbe_set_vmdq_san_mac(struct txgbe_hw *hw, u32 vmdq)
+{
+ u32 rar = hw->mac.san_mac_rar_index;
+
+ DEBUGFUNC("\n");
+
+ wr32(hw, TXGBE_PSR_MAC_SWC_IDX, rar);
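+	/* pools 0-31 map to one bit each in VM_L, pools 32-63 in VM_H */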
+ if (vmdq < 32) {
+ wr32(hw, TXGBE_PSR_MAC_SWC_VM_L, 1 << vmdq);
+ wr32(hw, TXGBE_PSR_MAC_SWC_VM_H, 0);
+ } else {
+ wr32(hw, TXGBE_PSR_MAC_SWC_VM_L, 0);
+ wr32(hw, TXGBE_PSR_MAC_SWC_VM_H, 1 << (vmdq - 32));
+ }
+
+ return 0;
+}
+
+/**
+ * txgbe_init_uta_tables - Initialize the Unicast Table Array
+ * @hw: pointer to hardware structure
+ **/
+s32 txgbe_init_uta_tables(struct txgbe_hw *hw)
+{
+ int i;
+
+ DEBUGFUNC("\n");
+ DEBUGOUT(" Clearing UTA\n");
+
+ for (i = 0; i < 128; i++)
+ wr32(hw, TXGBE_PSR_UC_TBL(i), 0);
+
+ return 0;
+}
+
+/**
+ * txgbe_find_vlvf_slot - find the vlanid or the first empty slot
+ * @hw: pointer to hardware structure
+ * @vlan: VLAN id to write to VLAN filter
+ *
+ * Returns the VLVF index where this VLAN id should be placed, or
+ * TXGBE_ERR_NO_SPACE if no free VLVF entry is available.
+ **/
+s32 txgbe_find_vlvf_slot(struct txgbe_hw *hw, u32 vlan)
+{
+ u32 bits = 0;
+ u32 first_empty_slot = 0;
+ s32 regindex;
+
+ /* short cut the special case */
+ if (vlan == 0)
+ return 0;
+
+ /*
+ * Search for the vlan id in the VLVF entries. Save off the first empty
+ * slot found along the way
+ */
+ for (regindex = 1; regindex < TXGBE_PSR_VLAN_SWC_ENTRIES; regindex++) {
+ wr32(hw, TXGBE_PSR_VLAN_SWC_IDX, regindex);
+ bits = rd32(hw, TXGBE_PSR_VLAN_SWC);
+ if (!bits && !(first_empty_slot))
+ first_empty_slot = regindex;
+ else if ((bits & 0x0FFF) == vlan)
+ break;
+ }
+
+ /*
+ * If regindex is less than TXGBE_VLVF_ENTRIES, then we found the vlan
+ * in the VLVF. Else use the first empty VLVF register for this
+ * vlan id.
+ */
+ if (regindex >= TXGBE_PSR_VLAN_SWC_ENTRIES) {
+ if (first_empty_slot)
+ regindex = first_empty_slot;
+ else {
+ ERROR_REPORT1(TXGBE_ERROR_SOFTWARE,
+ "No space in VLVF.\n");
+ regindex = TXGBE_ERR_NO_SPACE;
+ }
+ }
+
+ return regindex;
+}
+
+/**
+ * txgbe_set_vfta - Set VLAN filter table
+ * @hw: pointer to hardware structure
+ * @vlan: VLAN id to write to VLAN filter
+ * @vind: VMDq output index that maps queue to VLAN id in VLVFB
+ * @vlan_on: boolean flag to turn on/off VLAN in VLVF
+ *
+ * Turn on/off specified VLAN in the VLAN filter table.
+ **/
+s32 txgbe_set_vfta(struct txgbe_hw *hw, u32 vlan, u32 vind,
+ bool vlan_on)
+{
+ s32 regindex;
+ u32 bitindex;
+ u32 vfta;
+ u32 targetbit;
+ s32 ret_val = 0;
+ bool vfta_changed = false;
+
+ DEBUGFUNC("\n");
+
+ if (vlan > 4095)
+ return TXGBE_ERR_PARAM;
+
+ /*
+ * this is a 2 part operation - first the VFTA, then the
+ * VLVF and VLVFB if VT Mode is set
+ * We don't write the VFTA until we know the VLVF part succeeded.
+ */
+
+ /* Part 1
+ * The VFTA is a bitstring made up of 128 32-bit registers
+ * that enable the particular VLAN id, much like the MTA:
+ * bits[11-5]: which register
+ * bits[4-0]: which bit in the register
+ */
+ regindex = (vlan >> 5) & 0x7F;
+ bitindex = vlan & 0x1F;
+ targetbit = (1 << bitindex);
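+	/* e.g. vlan 100: regindex = 3, bitindex = 4, targetbit = (1 << 4) */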
+ /* errata 5 */
+ vfta = hw->mac.vft_shadow[regindex];
+ if (vlan_on) {
+ if (!(vfta & targetbit)) {
+ vfta |= targetbit;
+ vfta_changed = true;
+ }
+ } else {
+ if ((vfta & targetbit)) {
+ vfta &= ~targetbit;
+ vfta_changed = true;
+ }
+ }
+
+ /* Part 2
+ * Call txgbe_set_vlvf to set VLVFB and VLVF
+ */
+ ret_val = txgbe_set_vlvf(hw, vlan, vind, vlan_on,
+ &vfta_changed);
+ if (ret_val != 0)
+ return ret_val;
+
+ if (vfta_changed)
+ wr32(hw, TXGBE_PSR_VLAN_TBL(regindex), vfta);
+ /* errata 5 */
+ hw->mac.vft_shadow[regindex] = vfta;
+ return 0;
+}
+
+/**
+ * txgbe_set_vlvf - Set VLAN Pool Filter
+ * @hw: pointer to hardware structure
+ * @vlan: VLAN id to write to VLAN filter
+ * @vind: VMDq output index that maps queue to VLAN id in VLVFB
+ * @vlan_on: boolean flag to turn on/off VLAN in VLVF
+ * @vfta_changed: pointer to boolean flag which indicates whether VFTA
+ * should be changed
+ *
+ * Turn on/off specified bit in VLVF table.
+ **/
+s32 txgbe_set_vlvf(struct txgbe_hw *hw, u32 vlan, u32 vind,
+ bool vlan_on, bool *vfta_changed)
+{
+ u32 vt;
+
+ DEBUGFUNC("\n");
+
+ if (vlan > 4095)
+ return TXGBE_ERR_PARAM;
+
+ /* If VT Mode is set
+ * Either vlan_on
+ * make sure the vlan is in VLVF
+ * set the vind bit in the matching VLVFB
+ * Or !vlan_on
+ * clear the pool bit and possibly the vind
+ */
+ vt = rd32(hw, TXGBE_CFG_PORT_CTL);
+ if (vt & TXGBE_CFG_PORT_CTL_NUM_VT_MASK) {
+ s32 vlvf_index;
+ u32 bits;
+
+ vlvf_index = txgbe_find_vlvf_slot(hw, vlan);
+ if (vlvf_index < 0)
+ return vlvf_index;
+
+ wr32(hw, TXGBE_PSR_VLAN_SWC_IDX, vlvf_index);
+ if (vlan_on) {
+ /* set the pool bit */
+ if (vind < 32) {
+ bits = rd32(hw,
+ TXGBE_PSR_VLAN_SWC_VM_L);
+ bits |= (1 << vind);
+ wr32(hw,
+ TXGBE_PSR_VLAN_SWC_VM_L,
+ bits);
+ } else {
+ bits = rd32(hw,
+ TXGBE_PSR_VLAN_SWC_VM_H);
+ bits |= (1 << (vind - 32));
+ wr32(hw,
+ TXGBE_PSR_VLAN_SWC_VM_H,
+ bits);
+ }
+ } else {
+ /* clear the pool bit */
+ if (vind < 32) {
+ bits = rd32(hw,
+ TXGBE_PSR_VLAN_SWC_VM_L);
+ bits &= ~(1 << vind);
+ wr32(hw,
+ TXGBE_PSR_VLAN_SWC_VM_L,
+ bits);
+ bits |= rd32(hw,
+ TXGBE_PSR_VLAN_SWC_VM_H);
+ } else {
+ bits = rd32(hw,
+ TXGBE_PSR_VLAN_SWC_VM_H);
+ bits &= ~(1 << (vind - 32));
+ wr32(hw,
+ TXGBE_PSR_VLAN_SWC_VM_H,
+ bits);
+ bits |= rd32(hw,
+ TXGBE_PSR_VLAN_SWC_VM_L);
+ }
+ }
+
+		/*
+		 * If any pool bits remain set in the VLVFB registers for
+		 * this VLAN ID, keep the VLVF entry enabled. When the
+		 * caller asked us to clear the VFTA entry bit but other
+		 * pools/VFs are still using this VLAN ID, ignore the
+		 * request: the VFTA bit may only be cleared once every
+		 * pool/VF using the VLAN ID has released it, which is
+		 * indicated by "bits" reading back zero.
+		 */
+ if (bits) {
+ wr32(hw, TXGBE_PSR_VLAN_SWC,
+ (TXGBE_PSR_VLAN_SWC_VIEN | vlan));
+ if ((!vlan_on) && (vfta_changed != NULL)) {
+ /* someone wants to clear the vfta entry
+ * but some pools/VFs are still using it.
+ * Ignore it. */
+ *vfta_changed = false;
+ }
+		} else {
+			wr32(hw, TXGBE_PSR_VLAN_SWC, 0);
+		}
+ }
+
+ return 0;
+}
+
+/**
+ * txgbe_clear_vfta - Clear VLAN filter table
+ * @hw: pointer to hardware structure
+ *
+ * Clears the VLAN filter table, and the VMDq index associated with the filter
+ **/
+s32 txgbe_clear_vfta(struct txgbe_hw *hw)
+{
+ u32 offset;
+
+ DEBUGFUNC("\n");
+
+ for (offset = 0; offset < hw->mac.vft_size; offset++) {
+ wr32(hw, TXGBE_PSR_VLAN_TBL(offset), 0);
+ /* errata 5 */
+ hw->mac.vft_shadow[offset] = 0;
+ }
+
+ for (offset = 0; offset < TXGBE_PSR_VLAN_SWC_ENTRIES; offset++) {
+ wr32(hw, TXGBE_PSR_VLAN_SWC_IDX, offset);
+ wr32(hw, TXGBE_PSR_VLAN_SWC, 0);
+ wr32(hw, TXGBE_PSR_VLAN_SWC_VM_L, 0);
+ wr32(hw, TXGBE_PSR_VLAN_SWC_VM_H, 0);
+ }
+
+ return 0;
+}
+
+/**
+ * txgbe_get_wwn_prefix - Get alternative WWNN/WWPN prefix from
+ * the EEPROM
+ * @hw: pointer to hardware structure
+ * @wwnn_prefix: the alternative WWNN prefix
+ * @wwpn_prefix: the alternative WWPN prefix
+ *
+ * This function will read the alternative SAN MAC address block in the
+ * EEPROM to check for alternative WWNN/WWPN prefix support.
+ **/
+s32 txgbe_get_wwn_prefix(struct txgbe_hw *hw, u16 *wwnn_prefix,
+ u16 *wwpn_prefix)
+{
+ u16 offset, caps;
+ u16 alt_san_mac_blk_offset;
+
+ DEBUGFUNC("\n");
+
+ /* clear output first */
+ *wwnn_prefix = 0xFFFF;
+ *wwpn_prefix = 0xFFFF;
+
+ /* check if alternative SAN MAC is supported */
+ offset = hw->eeprom.sw_region_offset + TXGBE_ALT_SAN_MAC_ADDR_BLK_PTR;
+ if (TCALL(hw, eeprom.ops.read, offset, &alt_san_mac_blk_offset))
+ goto wwn_prefix_err;
+
+ if ((alt_san_mac_blk_offset == 0) ||
+ (alt_san_mac_blk_offset == 0xFFFF))
+ goto wwn_prefix_out;
+
+ /* check capability in alternative san mac address block */
+ offset = alt_san_mac_blk_offset + TXGBE_ALT_SAN_MAC_ADDR_CAPS_OFFSET;
+ if (TCALL(hw, eeprom.ops.read, offset, &caps))
+ goto wwn_prefix_err;
+ if (!(caps & TXGBE_ALT_SAN_MAC_ADDR_CAPS_ALTWWN))
+ goto wwn_prefix_out;
+
+ /* get the corresponding prefix for WWNN/WWPN */
+ offset = alt_san_mac_blk_offset + TXGBE_ALT_SAN_MAC_ADDR_WWNN_OFFSET;
+ if (TCALL(hw, eeprom.ops.read, offset, wwnn_prefix)) {
+ ERROR_REPORT2(TXGBE_ERROR_INVALID_STATE,
+ "eeprom read at offset %d failed", offset);
+ }
+
+ offset = alt_san_mac_blk_offset + TXGBE_ALT_SAN_MAC_ADDR_WWPN_OFFSET;
+ if (TCALL(hw, eeprom.ops.read, offset, wwpn_prefix))
+ goto wwn_prefix_err;
+
+wwn_prefix_out:
+ return 0;
+
+wwn_prefix_err:
+ ERROR_REPORT2(TXGBE_ERROR_INVALID_STATE,
+ "eeprom read at offset %d failed", offset);
+ return 0;
+}
+
+
+/**
+ * txgbe_set_mac_anti_spoofing - Enable/Disable MAC anti-spoofing
+ * @hw: pointer to hardware structure
+ * @enable: enable or disable switch for anti-spoofing
+ * @pf: Physical Function pool - do not enable anti-spoofing for the PF
+ *
+ **/
+void txgbe_set_mac_anti_spoofing(struct txgbe_hw *hw, bool enable, int pf)
+{
+ u64 pfvfspoof = 0;
+
+ DEBUGFUNC("\n");
+
+ if (enable) {
+		/*
+		 * The PF should be allowed to spoof so that it can support
+		 * emulation mode NICs. Do not set the bits assigned to the
+		 * PF: the remaining pools belong to the PF, so they do not
+		 * need anti-spoofing enabled.
+		 */
+		pfvfspoof = (1ULL << pf) - 1;
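+		/* e.g. pf = 4: enable anti-spoofing on pools 0-3 only */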
+ wr32(hw, TXGBE_TDM_MAC_AS_L,
+ pfvfspoof & 0xffffffff);
+ wr32(hw, TXGBE_TDM_MAC_AS_H, pfvfspoof >> 32);
+ } else {
+ wr32(hw, TXGBE_TDM_MAC_AS_L, 0);
+ wr32(hw, TXGBE_TDM_MAC_AS_H, 0);
+ }
+}
+
+/**
+ * txgbe_set_vlan_anti_spoofing - Enable/Disable VLAN anti-spoofing
+ * @hw: pointer to hardware structure
+ * @enable: enable or disable switch for VLAN anti-spoofing
+ * @vf: Virtual Function pool - VF Pool to set for VLAN anti-spoofing
+ *
+ **/
+void txgbe_set_vlan_anti_spoofing(struct txgbe_hw *hw, bool enable, int vf)
+{
+ u32 pfvfspoof;
+
+ DEBUGFUNC("\n");
+
+ if (vf < 32) {
+ pfvfspoof = rd32(hw, TXGBE_TDM_VLAN_AS_L);
+ if (enable)
+ pfvfspoof |= (1 << vf);
+ else
+ pfvfspoof &= ~(1 << vf);
+ wr32(hw, TXGBE_TDM_VLAN_AS_L, pfvfspoof);
+ } else {
+ pfvfspoof = rd32(hw, TXGBE_TDM_VLAN_AS_H);
+ if (enable)
+ pfvfspoof |= (1 << (vf - 32));
+ else
+ pfvfspoof &= ~(1 << (vf - 32));
+ wr32(hw, TXGBE_TDM_VLAN_AS_H, pfvfspoof);
+ }
+}
+
+/**
+ * txgbe_set_ethertype_anti_spoofing - Enable/Disable Ethertype anti-spoofing
+ * @hw: pointer to hardware structure
+ * @enable: enable or disable switch for Ethertype anti-spoofing
+ * @vf: Virtual Function pool - VF Pool to set for Ethertype anti-spoofing
+ *
+ **/
+void txgbe_set_ethertype_anti_spoofing(struct txgbe_hw *hw,
+ bool enable, int vf)
+{
+ u32 pfvfspoof;
+
+ DEBUGFUNC("\n");
+
+ if (vf < 32) {
+ pfvfspoof = rd32(hw, TXGBE_TDM_ETYPE_AS_L);
+ if (enable)
+ pfvfspoof |= (1 << vf);
+ else
+ pfvfspoof &= ~(1 << vf);
+ wr32(hw, TXGBE_TDM_ETYPE_AS_L, pfvfspoof);
+ } else {
+ pfvfspoof = rd32(hw, TXGBE_TDM_ETYPE_AS_H);
+ if (enable)
+ pfvfspoof |= (1 << (vf - 32));
+ else
+ pfvfspoof &= ~(1 << (vf - 32));
+ wr32(hw, TXGBE_TDM_ETYPE_AS_H, pfvfspoof);
+ }
+}
+
+/**
+ * txgbe_get_device_caps - Get additional device capabilities
+ * @hw: pointer to hardware structure
+ * @device_caps: the EEPROM word with the extra device capabilities
+ *
+ * This function will read the EEPROM location for the device capabilities,
+ * and return the word through device_caps.
+ **/
+s32 txgbe_get_device_caps(struct txgbe_hw *hw, u16 *device_caps)
+{
+ DEBUGFUNC("\n");
+
+ TCALL(hw, eeprom.ops.read,
+ hw->eeprom.sw_region_offset + TXGBE_DEVICE_CAPS, device_caps);
+
+ return 0;
+}
+
+/**
+ * txgbe_calculate_checksum - Calculate checksum for buffer
+ * @buffer: pointer to EEPROM
+ * @length: size of EEPROM to calculate a checksum for
+ *
+ * Calculates the checksum of a buffer over the specified length. The
+ * checksum calculated is returned.
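+ * The result is the two's complement of the byte sum, so adding the
+ * returned value to the byte sum of the buffer yields zero (mod 256).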
+ **/
+u8 txgbe_calculate_checksum(u8 *buffer, u32 length)
+{
+ u32 i;
+ u8 sum = 0;
+
+ DEBUGFUNC("\n");
+
+ if (!buffer)
+ return 0;
+
+ for (i = 0; i < length; i++)
+ sum += buffer[i];
+
+ return (u8) (0 - sum);
+}
+
+/**
+ * txgbe_host_interface_command - Issue command to manageability block
+ * @hw: pointer to the HW structure
+ * @buffer: contains the command to write and where the return status will
+ * be placed
+ * @length: length of buffer, must be multiple of 4 bytes
+ * @timeout: time in ms to wait for command completion
+ * @return_data: read and return data from the buffer (true) or not (false)
+ *	Needed because FW structures are big endian and decoding of
+ *	these fields can be 8 bit or 16 bit based on command. Rather
+ *	than keeping a table of commands, decoding is left to the
+ *	caller in those cases.
+ *
+ * Communicates with the manageability block. On success return 0
+ * else return TXGBE_ERR_HOST_INTERFACE_COMMAND.
+ **/
+s32 txgbe_host_interface_command(struct txgbe_hw *hw, u32 *buffer,
+ u32 length, u32 timeout, bool return_data)
+{
+ u32 hicr, i, bi;
+ u32 hdr_size = sizeof(struct txgbe_hic_hdr);
+ u16 buf_len;
+ u32 dword_len;
+ s32 status = 0;
+ u32 buf[64] = {};
+
+ DEBUGFUNC("\n");
+
+ if (length == 0 || length > TXGBE_HI_MAX_BLOCK_BYTE_LENGTH) {
+ DEBUGOUT1("Buffer length failure buffersize=%d.\n", length);
+ return TXGBE_ERR_HOST_INTERFACE_COMMAND;
+ }
+
+ if (TCALL(hw, mac.ops.acquire_swfw_sync, TXGBE_MNG_SWFW_SYNC_SW_MB)
+ != 0) {
+ return TXGBE_ERR_SWFW_SYNC;
+ }
+
+
+ /* Calculate length in DWORDs. We must be DWORD aligned */
+ if ((length % (sizeof(u32))) != 0) {
+ DEBUGOUT("Buffer length failure, not aligned to dword");
+ status = TXGBE_ERR_INVALID_ARGUMENT;
+ goto rel_out;
+ }
+
+ dword_len = length >> 2;
+
+ /* The device driver writes the relevant command block
+ * into the ram area.
+ */
+ for (i = 0; i < dword_len; i++) {
+ if (txgbe_check_mng_access(hw)) {
+ wr32a(hw, TXGBE_MNG_MBOX,
+ i, TXGBE_CPU_TO_LE32(buffer[i]));
+ /* write flush */
+ buf[i] = rd32a(hw, TXGBE_MNG_MBOX, i);
+ } else {
+ status = TXGBE_ERR_MNG_ACCESS_FAILED;
+ goto rel_out;
+ }
+ }
+ /* Setting this bit tells the ARC that a new command is pending. */
+ if (txgbe_check_mng_access(hw))
+ wr32m(hw, TXGBE_MNG_MBOX_CTL,
+ TXGBE_MNG_MBOX_CTL_SWRDY, TXGBE_MNG_MBOX_CTL_SWRDY);
+ else {
+ status = TXGBE_ERR_MNG_ACCESS_FAILED;
+ goto rel_out;
+ }
+
+ for (i = 0; i < timeout; i++) {
+ if (txgbe_check_mng_access(hw)) {
+ hicr = rd32(hw, TXGBE_MNG_MBOX_CTL);
+ if ((hicr & TXGBE_MNG_MBOX_CTL_FWRDY))
+ break;
+ }
+ msec_delay(1);
+ }
+
+ /* Check command completion */
+ if (timeout != 0 && i == timeout) {
+ ERROR_REPORT1(TXGBE_ERROR_CAUTION,
+ "Command has failed with no status valid.\n");
+
+ ERROR_REPORT1(TXGBE_ERROR_CAUTION, "write value:\n");
+ for (i = 0; i < dword_len; i++) {
+ ERROR_REPORT1(TXGBE_ERROR_CAUTION, "%x ", buffer[i]);
+ }
+ ERROR_REPORT1(TXGBE_ERROR_CAUTION, "read value:\n");
+ for (i = 0; i < dword_len; i++) {
+ ERROR_REPORT1(TXGBE_ERROR_CAUTION, "%x ", buf[i]);
+ }
+
+ status = TXGBE_ERR_HOST_INTERFACE_COMMAND;
+ goto rel_out;
+ }
+
+ if (!return_data)
+ goto rel_out;
+
+ /* Calculate length in DWORDs */
+ dword_len = hdr_size >> 2;
+
+ /* first pull in the header so we know the buffer length */
+ for (bi = 0; bi < dword_len; bi++) {
+ if (txgbe_check_mng_access(hw)) {
+ buffer[bi] = rd32a(hw, TXGBE_MNG_MBOX,
+ bi);
+ TXGBE_LE32_TO_CPUS(&buffer[bi]);
+ } else {
+ status = TXGBE_ERR_MNG_ACCESS_FAILED;
+ goto rel_out;
+ }
+ }
+
+	/* If there is anything in the data position, pull it in */
+ buf_len = ((struct txgbe_hic_hdr *)buffer)->buf_len;
+ if (buf_len == 0)
+ goto rel_out;
+
+ if (length < buf_len + hdr_size) {
+ DEBUGOUT("Buffer not large enough for reply message.\n");
+ status = TXGBE_ERR_HOST_INTERFACE_COMMAND;
+ goto rel_out;
+ }
+
+ /* Calculate length in DWORDs, add 3 for odd lengths */
+ dword_len = (buf_len + 3) >> 2;
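+	/* e.g. buf_len 5 -> dword_len 2 */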
+
+ /* Pull in the rest of the buffer (bi is where we left off) */
+ for (; bi <= dword_len; bi++) {
+ if (txgbe_check_mng_access(hw)) {
+ buffer[bi] = rd32a(hw, TXGBE_MNG_MBOX,
+ bi);
+ TXGBE_LE32_TO_CPUS(&buffer[bi]);
+ } else {
+ status = TXGBE_ERR_MNG_ACCESS_FAILED;
+ goto rel_out;
+ }
+ }
+
+rel_out:
+ TCALL(hw, mac.ops.release_swfw_sync, TXGBE_MNG_SWFW_SYNC_SW_MB);
+ return status;
+}
+
+/**
+ * txgbe_set_fw_drv_ver - Sends driver version to firmware
+ * @hw: pointer to the HW structure
+ * @maj: driver version major number
+ * @min: driver version minor number
+ * @build: driver version build number
+ * @sub: driver version sub build number
+ *
+ * Sends driver version number to firmware through the manageability
+ * block. On success return 0
+ * else returns TXGBE_ERR_SWFW_SYNC when encountering an error acquiring
+ * semaphore or TXGBE_ERR_HOST_INTERFACE_COMMAND when command fails.
+ **/
+s32 txgbe_set_fw_drv_ver(struct txgbe_hw *hw, u8 maj, u8 min,
+ u8 build, u8 sub)
+{
+ struct txgbe_hic_drv_info fw_cmd;
+ int i;
+ s32 ret_val = 0;
+
+ DEBUGFUNC("\n");
+
+ fw_cmd.hdr.cmd = FW_CEM_CMD_DRIVER_INFO;
+ fw_cmd.hdr.buf_len = FW_CEM_CMD_DRIVER_INFO_LEN;
+ fw_cmd.hdr.cmd_or_resp.cmd_resv = FW_CEM_CMD_RESERVED;
+ fw_cmd.port_num = (u8)hw->bus.func;
+ fw_cmd.ver_maj = maj;
+ fw_cmd.ver_min = min;
+ fw_cmd.ver_build = build;
+ fw_cmd.ver_sub = sub;
+ fw_cmd.hdr.checksum = 0;
+ fw_cmd.hdr.checksum = txgbe_calculate_checksum((u8 *)&fw_cmd,
+ (FW_CEM_HDR_LEN + fw_cmd.hdr.buf_len));
+ fw_cmd.pad = 0;
+ fw_cmd.pad2 = 0;
+
+ for (i = 0; i <= FW_CEM_MAX_RETRIES; i++) {
+ ret_val = txgbe_host_interface_command(hw, (u32 *)&fw_cmd,
+ sizeof(fw_cmd),
+ TXGBE_HI_COMMAND_TIMEOUT,
+ true);
+ if (ret_val != 0)
+ continue;
+
+ if (fw_cmd.hdr.cmd_or_resp.ret_status ==
+ FW_CEM_RESP_STATUS_SUCCESS)
+ ret_val = 0;
+ else
+ ret_val = TXGBE_ERR_HOST_INTERFACE_COMMAND;
+
+ break;
+ }
+
+ return ret_val;
+}
+
+/**
+ * txgbe_reset_hostif - send reset cmd to fw
+ * @hw: pointer to hardware structure
+ *
+ * Sends reset cmd to firmware through the manageability
+ * block. On success return 0
+ * else returns TXGBE_ERR_SWFW_SYNC when encountering an error acquiring
+ * semaphore or TXGBE_ERR_HOST_INTERFACE_COMMAND when command fails.
+ **/
+s32 txgbe_reset_hostif(struct txgbe_hw *hw)
+{
+ struct txgbe_hic_reset reset_cmd;
+ int i;
+ s32 status = 0;
+
+ DEBUGFUNC("\n");
+
+ reset_cmd.hdr.cmd = FW_RESET_CMD;
+ reset_cmd.hdr.buf_len = FW_RESET_LEN;
+ reset_cmd.hdr.cmd_or_resp.cmd_resv = FW_CEM_CMD_RESERVED;
+ reset_cmd.lan_id = hw->bus.lan_id;
+ reset_cmd.reset_type = (u16)hw->reset_type;
+ reset_cmd.hdr.checksum = 0;
+ reset_cmd.hdr.checksum = txgbe_calculate_checksum((u8 *)&reset_cmd,
+ (FW_CEM_HDR_LEN + reset_cmd.hdr.buf_len));
+
+ for (i = 0; i <= FW_CEM_MAX_RETRIES; i++) {
+ status = txgbe_host_interface_command(hw, (u32 *)&reset_cmd,
+ sizeof(reset_cmd),
+ TXGBE_HI_COMMAND_TIMEOUT,
+ true);
+ if (status != 0)
+ continue;
+
+ if (reset_cmd.hdr.cmd_or_resp.ret_status ==
+ FW_CEM_RESP_STATUS_SUCCESS) {
+ status = 0;
+ hw->link_status = TXGBE_LINK_STATUS_NONE;
+ } else
+ status = TXGBE_ERR_HOST_INTERFACE_COMMAND;
+
+ break;
+ }
+
+ return status;
+}
+
+s32 txgbe_setup_mac_link_hostif(struct txgbe_hw *hw, u32 speed)
+{
+ struct txgbe_hic_phy_cfg cmd;
+ int i;
+ s32 status = 0;
+
+ DEBUGFUNC("\n");
+
+ cmd.hdr.cmd = FW_SETUP_MAC_LINK_CMD;
+ cmd.hdr.buf_len = FW_SETUP_MAC_LINK_LEN;
+ cmd.hdr.cmd_or_resp.cmd_resv = FW_CEM_CMD_RESERVED;
+ cmd.lan_id = hw->bus.lan_id;
+ cmd.phy_mode = 0;
+ cmd.phy_speed = (u16)speed;
+ cmd.hdr.checksum = 0;
+ cmd.hdr.checksum = txgbe_calculate_checksum((u8 *)&cmd,
+ (FW_CEM_HDR_LEN + cmd.hdr.buf_len));
+
+ for (i = 0; i <= FW_CEM_MAX_RETRIES; i++) {
+ status = txgbe_host_interface_command(hw, (u32 *)&cmd,
+ sizeof(cmd),
+ TXGBE_HI_COMMAND_TIMEOUT,
+ true);
+ if (status != 0)
+ continue;
+
+ if (cmd.hdr.cmd_or_resp.ret_status ==
+ FW_CEM_RESP_STATUS_SUCCESS)
+ status = 0;
+ else
+ status = TXGBE_ERR_HOST_INTERFACE_COMMAND;
+
+ break;
+ }
+
+ return status;
+
+}
+
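+/* Bitwise CRC-16/CCITT over @size bytes of @buf (polynomial 0x1021,
+ * initial value 0); used below to protect each flash write block.
+ */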
+u16 txgbe_crc16_ccitt(const u8 *buf, int size)
+{
+ u16 crc = 0;
+	int i;
+
+ while (--size >= 0) {
+ crc ^= (u16)*buf++ << 8;
+ for (i = 0; i < 8; i++) {
+ if (crc & 0x8000)
+ crc = crc << 1 ^ 0x1021;
+ else
+ crc <<= 1;
+ }
+ }
+ return crc;
+}
+
+s32 txgbe_upgrade_flash_hostif(struct txgbe_hw *hw, u32 region,
+ const u8 *data, u32 size)
+{
+ struct txgbe_hic_upg_start start_cmd;
+ struct txgbe_hic_upg_write write_cmd;
+ struct txgbe_hic_upg_verify verify_cmd;
+ u32 offset;
+ s32 status = 0;
+
+ DEBUGFUNC("\n");
+
+ start_cmd.hdr.cmd = FW_FLASH_UPGRADE_START_CMD;
+ start_cmd.hdr.buf_len = FW_FLASH_UPGRADE_START_LEN;
+ start_cmd.hdr.cmd_or_resp.cmd_resv = FW_CEM_CMD_RESERVED;
+ start_cmd.module_id = (u8)region;
+ start_cmd.hdr.checksum = 0;
+ start_cmd.hdr.checksum = txgbe_calculate_checksum((u8 *)&start_cmd,
+ (FW_CEM_HDR_LEN + start_cmd.hdr.buf_len));
+ start_cmd.pad2 = 0;
+ start_cmd.pad3 = 0;
+
+ status = txgbe_host_interface_command(hw, (u32 *)&start_cmd,
+ sizeof(start_cmd),
+ TXGBE_HI_FLASH_ERASE_TIMEOUT,
+ true);
+
+ if (start_cmd.hdr.cmd_or_resp.ret_status == FW_CEM_RESP_STATUS_SUCCESS)
+ status = 0;
+ else {
+ status = TXGBE_ERR_HOST_INTERFACE_COMMAND;
+ return status;
+ }
+
+ for (offset = 0; offset < size;) {
+ write_cmd.hdr.cmd = FW_FLASH_UPGRADE_WRITE_CMD;
+ if (size - offset > 248) {
+ write_cmd.data_len = 248 / 4;
+ write_cmd.eof_flag = 0;
+ } else {
+ write_cmd.data_len = (u8)((size - offset) / 4);
+ write_cmd.eof_flag = 1;
+ }
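+		/* the image is streamed in chunks of at most 248 bytes
+		 * (62 dwords); eof_flag marks the final chunk
+		 */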
+ memcpy((u8 *)write_cmd.data, &data[offset], write_cmd.data_len * 4);
+ write_cmd.hdr.buf_len = (write_cmd.data_len + 1) * 4;
+ write_cmd.hdr.cmd_or_resp.cmd_resv = FW_CEM_CMD_RESERVED;
+ write_cmd.check_sum = txgbe_crc16_ccitt((u8 *)write_cmd.data,
+ write_cmd.data_len * 4);
+
+ status = txgbe_host_interface_command(hw, (u32 *)&write_cmd,
+ sizeof(write_cmd),
+ TXGBE_HI_FLASH_UPDATE_TIMEOUT,
+ true);
+		if (write_cmd.hdr.cmd_or_resp.ret_status ==
+		    FW_CEM_RESP_STATUS_SUCCESS)
+ status = 0;
+ else {
+ status = TXGBE_ERR_HOST_INTERFACE_COMMAND;
+ return status;
+ }
+ offset += write_cmd.data_len * 4;
+ }
+
+ verify_cmd.hdr.cmd = FW_FLASH_UPGRADE_VERIFY_CMD;
+ verify_cmd.hdr.buf_len = FW_FLASH_UPGRADE_VERIFY_LEN;
+ verify_cmd.hdr.cmd_or_resp.cmd_resv = FW_CEM_CMD_RESERVED;
+ switch (region) {
+ case TXGBE_MODULE_EEPROM:
+ verify_cmd.action_flag = TXGBE_RELOAD_EEPROM;
+ break;
+ case TXGBE_MODULE_FIRMWARE:
+ verify_cmd.action_flag = TXGBE_RESET_FIRMWARE;
+ break;
+ case TXGBE_MODULE_HARDWARE:
+ verify_cmd.action_flag = TXGBE_RESET_LAN;
+ break;
+ default:
+ return status;
+ }
+
+ verify_cmd.hdr.checksum = txgbe_calculate_checksum((u8 *)&verify_cmd,
+ (FW_CEM_HDR_LEN + verify_cmd.hdr.buf_len));
+
+ status = txgbe_host_interface_command(hw, (u32 *)&verify_cmd,
+ sizeof(verify_cmd),
+ TXGBE_HI_FLASH_VERIFY_TIMEOUT,
+ true);
+
+ if (verify_cmd.hdr.cmd_or_resp.ret_status == FW_CEM_RESP_STATUS_SUCCESS)
+ status = 0;
+	else
+		status = TXGBE_ERR_HOST_INTERFACE_COMMAND;
+ return status;
+}
+
+/**
+ * txgbe_set_rxpba - Initialize Rx packet buffer
+ * @hw: pointer to hardware structure
+ * @num_pb: number of packet buffers to allocate
+ * @headroom: reserve n KB of headroom
+ * @strategy: packet buffer allocation strategy
+ **/
+void txgbe_set_rxpba(struct txgbe_hw *hw, int num_pb, u32 headroom,
+ int strategy)
+{
+ u32 pbsize = hw->mac.rx_pb_size;
+ int i = 0;
+ u32 rxpktsize, txpktsize, txpbthresh;
+
+ DEBUGFUNC("\n");
+
+ /* Reserve headroom */
+ pbsize -= headroom;
+
+ if (!num_pb)
+ num_pb = 1;
+
+ /* Divide remaining packet buffer space amongst the number of packet
+ * buffers requested using supplied strategy.
+ */
+ switch (strategy) {
+ case PBA_STRATEGY_WEIGHTED:
+		/* txgbe_dcb_pba_80_48 strategy: weight the first half of
+		 * the packet buffers with 5/8 of the packet buffer space.
+		 */
+ rxpktsize = (pbsize * 5) / (num_pb * 4);
+ pbsize -= rxpktsize * (num_pb / 2);
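+		/* e.g. num_pb = 8: buffers 0-3 each take 5/32 of pbsize,
+		 * 5/8 in total; the remainder is split equally below
+		 */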
+ rxpktsize <<= TXGBE_RDB_PB_SZ_SHIFT;
+ for (; i < (num_pb / 2); i++)
+ wr32(hw, TXGBE_RDB_PB_SZ(i), rxpktsize);
+		/* fall through - configure remaining packet buffers */
+ case PBA_STRATEGY_EQUAL:
+ rxpktsize = (pbsize / (num_pb - i)) << TXGBE_RDB_PB_SZ_SHIFT;
+ for (; i < num_pb; i++)
+ wr32(hw, TXGBE_RDB_PB_SZ(i), rxpktsize);
+ break;
+ default:
+ break;
+ }
+
+ /* Only support an equally distributed Tx packet buffer strategy. */
+ txpktsize = TXGBE_TDB_PB_SZ_MAX / num_pb;
+ txpbthresh = (txpktsize / 1024) - TXGBE_TXPKT_SIZE_MAX;
+ for (i = 0; i < num_pb; i++) {
+ wr32(hw, TXGBE_TDB_PB_SZ(i), txpktsize);
+ wr32(hw, TXGBE_TDM_PB_THRE(i), txpbthresh);
+ }
+
+	/* Clear unused TCs, if any, to zero buffer size */
+ for (; i < TXGBE_MAX_PB; i++) {
+ wr32(hw, TXGBE_RDB_PB_SZ(i), 0);
+ wr32(hw, TXGBE_TDB_PB_SZ(i), 0);
+ wr32(hw, TXGBE_TDM_PB_THRE(i), 0);
+ }
+}
+
+STATIC const u8 txgbe_emc_temp_data[4] = {
+ TXGBE_EMC_INTERNAL_DATA,
+ TXGBE_EMC_DIODE1_DATA,
+ TXGBE_EMC_DIODE2_DATA,
+ TXGBE_EMC_DIODE3_DATA
+};
+STATIC const u8 txgbe_emc_therm_limit[4] = {
+ TXGBE_EMC_INTERNAL_THERM_LIMIT,
+ TXGBE_EMC_DIODE1_THERM_LIMIT,
+ TXGBE_EMC_DIODE2_THERM_LIMIT,
+ TXGBE_EMC_DIODE3_THERM_LIMIT
+};
+
+/**
+ * txgbe_get_thermal_sensor_data - Gathers thermal sensor data
+ * @hw: pointer to hardware structure
+ *
+ * Converts the raw sensor reading N into a temperature using:
+ *   T = (-4.8380E+01)N^0 + (3.1020E-01)N^1 + (-1.8201E-04)N^2 +
+ *       (8.1542E-08)N^3 + (-1.6743E-11)N^4
+ * A simplified form of the polynomial with about 5% more deviation is
+ * easier to implement:
+ *   T = (-50)N^0 + (0.31)N^1 + (-0.0002)N^2 + (0.0000001)N^3
+ *
+ * Stores the result in hw->mac.thermal_sensor_data.
+ **/
+s32 txgbe_get_thermal_sensor_data(struct txgbe_hw *hw)
+{
+ s64 tsv;
+ int i = 0;
+ struct txgbe_thermal_sensor_data *data = &hw->mac.thermal_sensor_data;
+
+ DEBUGFUNC("\n");
+
+ /* Only support thermal sensors attached to physical port 0 */
+ if (hw->bus.lan_id)
+ return TXGBE_NOT_IMPLEMENTED;
+
+ tsv = (s64)(rd32(hw, TXGBE_TS_ST) &
+ TXGBE_TS_ST_DATA_OUT_MASK);
+
+ tsv = tsv < 1200 ? tsv : 1200;
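+	/* evaluate the conversion polynomial in 8.8 fixed point; the << 8
+	 * scaling is removed by the final >> 8
+	 */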
+ tsv = -(48380 << 8) / 1000
+ + tsv * (31020 << 8) / 100000
+ - tsv * tsv * (18201 << 8) / 100000000
+ + tsv * tsv * tsv * (81542 << 8) / 1000000000000
+ - tsv * tsv * tsv * tsv * (16743 << 8) / 1000000000000000;
+ tsv >>= 8;
+
+ data->sensor.temp = (s16)tsv;
+
+	for (i = 0; i < 100; i++) {
+ tsv = (s64)rd32(hw, TXGBE_TS_ST);
+ if (tsv >> 16 == 0x1) {
+ tsv = tsv & TXGBE_TS_ST_DATA_OUT_MASK;
+ tsv = tsv < 1200 ? tsv : 1200;
+ tsv = -(48380 << 8) / 1000
+ + tsv * (31020 << 8) / 100000
+ - tsv * tsv * (18201 << 8) / 100000000
+ + tsv * tsv * tsv * (81542 << 8) / 1000000000000
+ - tsv * tsv * tsv * tsv * (16743 << 8) / 1000000000000000;
+ tsv >>= 8;
+
+ data->sensor.temp = (s16)tsv;
+ break;
+		}
+		msleep(1);
+ }
+
+ return 0;
+}
+
+/**
+ * txgbe_init_thermal_sensor_thresh - Inits thermal sensor thresholds
+ * @hw: pointer to hardware structure
+ *
+ * Inits the thermal sensor thresholds according to the NVM map
+ * and save off the threshold and location values into mac.thermal_sensor_data
+ **/
+s32 txgbe_init_thermal_sensor_thresh(struct txgbe_hw *hw)
+{
+ s32 status = 0;
+
+ struct txgbe_thermal_sensor_data *data = &hw->mac.thermal_sensor_data;
+
+ DEBUGFUNC("\n");
+
+ memset(data, 0, sizeof(struct txgbe_thermal_sensor_data));
+
+ /* Only support thermal sensors attached to SP physical port 0 */
+ if (hw->bus.lan_id)
+ return TXGBE_NOT_IMPLEMENTED;
+
+ wr32(hw, TXGBE_TS_CTL, TXGBE_TS_CTL_EVAL_MD);
+ wr32(hw, TXGBE_TS_INT_EN,
+ TXGBE_TS_INT_EN_ALARM_INT_EN | TXGBE_TS_INT_EN_DALARM_INT_EN);
+ wr32(hw, TXGBE_TS_EN, TXGBE_TS_EN_ENA);
+
+
+ data->sensor.alarm_thresh = 100;
+ wr32(hw, TXGBE_TS_ALARM_THRE, 677);
+ data->sensor.dalarm_thresh = 90;
+ wr32(hw, TXGBE_TS_DALARM_THRE, 614);
+
+ return status;
+}
+
+void txgbe_disable_rx(struct txgbe_hw *hw)
+{
+ u32 pfdtxgswc;
+ u32 rxctrl;
+
+ DEBUGFUNC("\n");
+
+ rxctrl = rd32(hw, TXGBE_RDB_PB_CTL);
+ if (rxctrl & TXGBE_RDB_PB_CTL_RXEN) {
+ pfdtxgswc = rd32(hw, TXGBE_PSR_CTL);
+ if (pfdtxgswc & TXGBE_PSR_CTL_SW_EN) {
+ pfdtxgswc &= ~TXGBE_PSR_CTL_SW_EN;
+ wr32(hw, TXGBE_PSR_CTL, pfdtxgswc);
+ hw->mac.set_lben = true;
+ } else {
+ hw->mac.set_lben = false;
+ }
+ rxctrl &= ~TXGBE_RDB_PB_CTL_RXEN;
+ wr32(hw, TXGBE_RDB_PB_CTL, rxctrl);
+ /* errata 14 */
+ if (hw->revision_id == TXGBE_SP_MPW) {
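+			/* poll until the rx packet buffer reports disabled,
+			 * then check that the buffer free count reads back
+			 * 0x143; if it does not, toggle RXEN and poll again
+			 */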
+ do {
+ do {
+ if (rd32m(hw,
+ TXGBE_RDB_PB_CTL,
+ TXGBE_RDB_PB_CTL_DISABLED) == 1)
+ break;
+ msleep(10);
+ } while (1);
+ if (rd32m(hw, TXGBE_RDB_TXSWERR,
+ TXGBE_RDB_TXSWERR_TB_FREE) == 0x143)
+ break;
+ else {
+ wr32m(hw,
+ TXGBE_RDB_PB_CTL,
+ TXGBE_RDB_PB_CTL_RXEN,
+ TXGBE_RDB_PB_CTL_RXEN);
+ wr32m(hw,
+ TXGBE_RDB_PB_CTL,
+ TXGBE_RDB_PB_CTL_RXEN,
+ ~TXGBE_RDB_PB_CTL_RXEN);
+
+ }
+ } while (1);
+ }
+
+ if (!(((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP) ||
+ ((hw->subsystem_device_id & TXGBE_WOL_MASK) == TXGBE_WOL_SUP))) {
+ /* disable mac receiver */
+ wr32m(hw, TXGBE_MAC_RX_CFG,
+ TXGBE_MAC_RX_CFG_RE, 0);
+ }
+ }
+}
+
+void txgbe_enable_rx(struct txgbe_hw *hw)
+{
+ u32 pfdtxgswc;
+
+ DEBUGFUNC("\n");
+
+ /* enable mac receiver */
+ wr32m(hw, TXGBE_MAC_RX_CFG,
+ TXGBE_MAC_RX_CFG_RE, TXGBE_MAC_RX_CFG_RE);
+
+ wr32m(hw, TXGBE_RDB_PB_CTL,
+ TXGBE_RDB_PB_CTL_RXEN, TXGBE_RDB_PB_CTL_RXEN);
+
+ if (hw->mac.set_lben) {
+ pfdtxgswc = rd32(hw, TXGBE_PSR_CTL);
+ pfdtxgswc |= TXGBE_PSR_CTL_SW_EN;
+ wr32(hw, TXGBE_PSR_CTL, pfdtxgswc);
+ hw->mac.set_lben = false;
+ }
+}
+
+/**
+ * txgbe_mng_present - returns true when management capability is present
+ * @hw: pointer to hardware structure
+ */
+bool txgbe_mng_present(struct txgbe_hw *hw)
+{
+ u32 fwsm;
+
+ fwsm = rd32(hw, TXGBE_MIS_ST);
+ return fwsm & TXGBE_MIS_ST_MNG_INIT_DN;
+}
+
+bool txgbe_check_mng_access(struct txgbe_hw *hw)
+{
+ bool ret = false;
+ u32 rst_delay;
+ u32 i;
+
+	struct txgbe_adapter *adapter = hw->back;
+
+ if (!txgbe_mng_present(hw))
+ return false;
+ if (adapter->hw.revision_id != TXGBE_SP_MPW)
+ return true;
+ if (!(adapter->flags2 & TXGBE_FLAG2_MNG_REG_ACCESS_DISABLED))
+ return true;
+
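+	/* wait up to (rst_delay + 2) * 100 ms for management register
+	 * access to be re-enabled after a reset
+	 */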
+ rst_delay = (rd32(&adapter->hw, TXGBE_MIS_RST_ST) &
+ TXGBE_MIS_RST_ST_RST_INIT) >>
+ TXGBE_MIS_RST_ST_RST_INI_SHIFT;
+ for (i = 0; i < rst_delay + 2; i++) {
+ if (!(adapter->flags2 & TXGBE_FLAG2_MNG_REG_ACCESS_DISABLED)) {
+ ret = true;
+ break;
+ }
+ msleep(100);
+ }
+ return ret;
+}
+
+/**
+ * txgbe_setup_mac_link_multispeed_fiber - Set MAC link speed
+ * @hw: pointer to hardware structure
+ * @speed: new link speed
+ * @autoneg_wait_to_complete: true when waiting for completion is needed
+ *
+ * Set the link speed in the MAC and/or PHY register and restarts link.
+ **/
+s32 txgbe_setup_mac_link_multispeed_fiber(struct txgbe_hw *hw,
+ u32 speed,
+ bool autoneg_wait_to_complete)
+{
+ u32 link_speed = TXGBE_LINK_SPEED_UNKNOWN;
+ u32 highest_link_speed = TXGBE_LINK_SPEED_UNKNOWN;
+ s32 status = 0;
+ u32 speedcnt = 0;
+ u32 i = 0;
+ bool autoneg, link_up = false;
+
+ DEBUGFUNC("\n");
+
+ /* Mask off requested but non-supported speeds */
+ status = TCALL(hw, mac.ops.get_link_capabilities,
+ &link_speed, &autoneg);
+ if (status != 0)
+ return status;
+
+ speed &= link_speed;
+
+ /* Try each speed one by one, highest priority first. We do this in
+ * software because 10Gb fiber doesn't support speed autonegotiation.
+ */
+ if (speed & TXGBE_LINK_SPEED_10GB_FULL) {
+ speedcnt++;
+ highest_link_speed = TXGBE_LINK_SPEED_10GB_FULL;
+
+ /* If we already have link at this speed, just jump out */
+ status = TCALL(hw, mac.ops.check_link,
+ &link_speed, &link_up, false);
+ if (status != 0)
+ return status;
+
+ if ((link_speed == TXGBE_LINK_SPEED_10GB_FULL) && link_up)
+ goto out;
+
+ /* Allow module to change analog characteristics (1G->10G) */
+ msec_delay(40);
+
+ status = TCALL(hw, mac.ops.setup_mac_link,
+ TXGBE_LINK_SPEED_10GB_FULL,
+ autoneg_wait_to_complete);
+ if (status != 0)
+ return status;
+
+ /* Flap the Tx laser if it has not already been done */
+ TCALL(hw, mac.ops.flap_tx_laser);
+
+ /* Wait for the controller to acquire link. Per IEEE 802.3ap,
+ * Section 73.10.2, we may have to wait up to 500ms if KR is
+ * attempted. sapphire uses the same timing for 10g SFI.
+ */
+ for (i = 0; i < 5; i++) {
+ /* Wait for the link partner to also set speed */
+ msec_delay(100);
+
+ /* If we have link, just jump out */
+ status = TCALL(hw, mac.ops.check_link,
+ &link_speed, &link_up, false);
+ if (status != 0)
+ return status;
+
+ if (link_up)
+ goto out;
+ }
+ }
+
+ if (speed & TXGBE_LINK_SPEED_1GB_FULL) {
+ speedcnt++;
+ if (highest_link_speed == TXGBE_LINK_SPEED_UNKNOWN)
+ highest_link_speed = TXGBE_LINK_SPEED_1GB_FULL;
+
+ /* If we already have link at this speed, just jump out */
+ status = TCALL(hw, mac.ops.check_link,
+ &link_speed, &link_up, false);
+ if (status != 0)
+ return status;
+
+ if ((link_speed == TXGBE_LINK_SPEED_1GB_FULL) && link_up)
+ goto out;
+
+ /* Allow module to change analog characteristics (10G->1G) */
+ msec_delay(40);
+
+ status = TCALL(hw, mac.ops.setup_mac_link,
+ TXGBE_LINK_SPEED_1GB_FULL,
+ autoneg_wait_to_complete);
+ if (status != 0)
+ return status;
+
+ /* Flap the Tx laser if it has not already been done */
+ TCALL(hw, mac.ops.flap_tx_laser);
+
+ /* Wait for the link partner to also set speed */
+ msec_delay(100);
+
+ /* If we have link, just jump out */
+ status = TCALL(hw, mac.ops.check_link,
+ &link_speed, &link_up, false);
+ if (status != 0)
+ return status;
+
+ if (link_up)
+ goto out;
+ }
+
+ /* We didn't get link. Configure back to the highest speed we tried,
+ * (if there was more than one). We call ourselves back with just the
+ * single highest speed that the user requested.
+ */
+ if (speedcnt > 1)
+ status = txgbe_setup_mac_link_multispeed_fiber(hw,
+ highest_link_speed,
+ autoneg_wait_to_complete);
+
+out:
+ /* Set autoneg_advertised value based on input link speed */
+ hw->phy.autoneg_advertised = 0;
+
+ if (speed & TXGBE_LINK_SPEED_10GB_FULL)
+ hw->phy.autoneg_advertised |= TXGBE_LINK_SPEED_10GB_FULL;
+
+ if (speed & TXGBE_LINK_SPEED_1GB_FULL)
+ hw->phy.autoneg_advertised |= TXGBE_LINK_SPEED_1GB_FULL;
+
+ return status;
+}
+
+int txgbe_check_flash_load(struct txgbe_hw *hw, u32 check_bit)
+{
+ u32 i = 0;
+ u32 reg = 0;
+ int err = 0;
+	/* if a flash part is present */
+ if (!(rd32(hw, TXGBE_SPI_STATUS) &
+ TXGBE_SPI_STATUS_FLASH_BYPASS)) {
+		/* wait for hw to finish loading from flash */
+ for (i = 0; i < TXGBE_MAX_FLASH_LOAD_POLL_TIME; i++) {
+ reg = rd32(hw, TXGBE_SPI_ILDR_STATUS);
+ if (!(reg & check_bit)) {
+ /* done */
+ break;
+ }
+ msleep(200);
+ }
+ if (i == TXGBE_MAX_FLASH_LOAD_POLL_TIME) {
+ err = TXGBE_ERR_FLASH_LOADING_FAILED;
+ }
+ }
+ return err;
+}
+
+/* The txgbe_ptype_lookup is used to convert from the 8-bit ptype in the
+ * hardware to a bit-field that can be used by SW to more easily determine the
+ * packet type.
+ *
+ * Macros are used to shorten the table lines and make this table human
+ * readable.
+ *
+ * We store the PTYPE in the top byte of the bit field - this is just so that
+ * we can check that the table doesn't have a row missing, as the index into
+ * the table should be the PTYPE.
+ *
+ * Typical work flow:
+ *
+ * IF NOT txgbe_ptype_lookup[ptype].known
+ * THEN
+ * Packet is unknown
+ * ELSE IF txgbe_ptype_lookup[ptype].mac == TXGBE_DEC_PTYPE_MAC_IP
+ * Use the rest of the fields to look at the tunnels, inner protocols, etc
+ * ELSE
+ * Use the enum txgbe_l2_ptypes to decode the packet type
+ * ENDIF
+ */
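+
+/* e.g. ptype 0x24 decodes as mac IP, ip IPV4, proto TCP, layer PAY4 */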
+
+/* macro to make the table lines short */
+#define TXGBE_PTT(ptype, mac, ip, etype, eip, proto, layer)\
+ { ptype, \
+ 1, \
+ /* mac */ TXGBE_DEC_PTYPE_MAC_##mac, \
+ /* ip */ TXGBE_DEC_PTYPE_IP_##ip, \
+ /* etype */ TXGBE_DEC_PTYPE_ETYPE_##etype, \
+ /* eip */ TXGBE_DEC_PTYPE_IP_##eip, \
+ /* proto */ TXGBE_DEC_PTYPE_PROT_##proto, \
+ /* layer */ TXGBE_DEC_PTYPE_LAYER_##layer }
+
+#define TXGBE_UKN(ptype) \
+ { ptype, 0, 0, 0, 0, 0, 0, 0 }
+
+/* Lookup table mapping the HW PTYPE to the bit field for decoding */
+/* for ((pt=0;pt<256;pt++)); do printf "macro(0x%02X),\n" $pt; done */
+txgbe_dptype txgbe_ptype_lookup[256] = {
+ TXGBE_UKN(0x00),
+ TXGBE_UKN(0x01),
+ TXGBE_UKN(0x02),
+ TXGBE_UKN(0x03),
+ TXGBE_UKN(0x04),
+ TXGBE_UKN(0x05),
+ TXGBE_UKN(0x06),
+ TXGBE_UKN(0x07),
+ TXGBE_UKN(0x08),
+ TXGBE_UKN(0x09),
+ TXGBE_UKN(0x0A),
+ TXGBE_UKN(0x0B),
+ TXGBE_UKN(0x0C),
+ TXGBE_UKN(0x0D),
+ TXGBE_UKN(0x0E),
+ TXGBE_UKN(0x0F),
+
+ /* L2: mac */
+ TXGBE_UKN(0x10),
+ TXGBE_PTT(0x11, L2, NONE, NONE, NONE, NONE, PAY2),
+ TXGBE_PTT(0x12, L2, NONE, NONE, NONE, TS, PAY2),
+ TXGBE_PTT(0x13, L2, NONE, NONE, NONE, NONE, PAY2),
+ TXGBE_PTT(0x14, L2, NONE, NONE, NONE, NONE, PAY2),
+ TXGBE_PTT(0x15, L2, NONE, NONE, NONE, NONE, NONE),
+ TXGBE_PTT(0x16, L2, NONE, NONE, NONE, NONE, PAY2),
+ TXGBE_PTT(0x17, L2, NONE, NONE, NONE, NONE, NONE),
+
+ /* L2: ethertype filter */
+ TXGBE_PTT(0x18, L2, NONE, NONE, NONE, NONE, NONE),
+ TXGBE_PTT(0x19, L2, NONE, NONE, NONE, NONE, NONE),
+ TXGBE_PTT(0x1A, L2, NONE, NONE, NONE, NONE, NONE),
+ TXGBE_PTT(0x1B, L2, NONE, NONE, NONE, NONE, NONE),
+ TXGBE_PTT(0x1C, L2, NONE, NONE, NONE, NONE, NONE),
+ TXGBE_PTT(0x1D, L2, NONE, NONE, NONE, NONE, NONE),
+ TXGBE_PTT(0x1E, L2, NONE, NONE, NONE, NONE, NONE),
+ TXGBE_PTT(0x1F, L2, NONE, NONE, NONE, NONE, NONE),
+
+ /* L3: ip non-tunnel */
+ TXGBE_UKN(0x20),
+ TXGBE_PTT(0x21, IP, FGV4, NONE, NONE, NONE, PAY3),
+ TXGBE_PTT(0x22, IP, IPV4, NONE, NONE, NONE, PAY3),
+ TXGBE_PTT(0x23, IP, IPV4, NONE, NONE, UDP, PAY4),
+ TXGBE_PTT(0x24, IP, IPV4, NONE, NONE, TCP, PAY4),
+ TXGBE_PTT(0x25, IP, IPV4, NONE, NONE, SCTP, PAY4),
+ TXGBE_UKN(0x26),
+ TXGBE_UKN(0x27),
+ TXGBE_UKN(0x28),
+ TXGBE_PTT(0x29, IP, FGV6, NONE, NONE, NONE, PAY3),
+ TXGBE_PTT(0x2A, IP, IPV6, NONE, NONE, NONE, PAY3),
+ TXGBE_PTT(0x2B, IP, IPV6, NONE, NONE, UDP, PAY3),
+ TXGBE_PTT(0x2C, IP, IPV6, NONE, NONE, TCP, PAY4),
+ TXGBE_PTT(0x2D, IP, IPV6, NONE, NONE, SCTP, PAY4),
+ TXGBE_UKN(0x2E),
+ TXGBE_UKN(0x2F),
+
+ /* L2: fcoe */
+ TXGBE_PTT(0x30, FCOE, NONE, NONE, NONE, NONE, PAY3),
+ TXGBE_PTT(0x31, FCOE, NONE, NONE, NONE, NONE, PAY3),
+ TXGBE_PTT(0x32, FCOE, NONE, NONE, NONE, NONE, PAY3),
+ TXGBE_PTT(0x33, FCOE, NONE, NONE, NONE, NONE, PAY3),
+ TXGBE_PTT(0x34, FCOE, NONE, NONE, NONE, NONE, PAY3),
+ TXGBE_UKN(0x35),
+ TXGBE_UKN(0x36),
+ TXGBE_UKN(0x37),
+ TXGBE_PTT(0x38, FCOE, NONE, NONE, NONE, NONE, PAY3),
+ TXGBE_PTT(0x39, FCOE, NONE, NONE, NONE, NONE, PAY3),
+ TXGBE_PTT(0x3A, FCOE, NONE, NONE, NONE, NONE, PAY3),
+ TXGBE_PTT(0x3B, FCOE, NONE, NONE, NONE, NONE, PAY3),
+ TXGBE_PTT(0x3C, FCOE, NONE, NONE, NONE, NONE, PAY3),
+ TXGBE_UKN(0x3D),
+ TXGBE_UKN(0x3E),
+ TXGBE_UKN(0x3F),
+
+ TXGBE_UKN(0x40),
+ TXGBE_UKN(0x41),
+ TXGBE_UKN(0x42),
+ TXGBE_UKN(0x43),
+ TXGBE_UKN(0x44),
+ TXGBE_UKN(0x45),
+ TXGBE_UKN(0x46),
+ TXGBE_UKN(0x47),
+ TXGBE_UKN(0x48),
+ TXGBE_UKN(0x49),
+ TXGBE_UKN(0x4A),
+ TXGBE_UKN(0x4B),
+ TXGBE_UKN(0x4C),
+ TXGBE_UKN(0x4D),
+ TXGBE_UKN(0x4E),
+ TXGBE_UKN(0x4F),
+ TXGBE_UKN(0x50),
+ TXGBE_UKN(0x51),
+ TXGBE_UKN(0x52),
+ TXGBE_UKN(0x53),
+ TXGBE_UKN(0x54),
+ TXGBE_UKN(0x55),
+ TXGBE_UKN(0x56),
+ TXGBE_UKN(0x57),
+ TXGBE_UKN(0x58),
+ TXGBE_UKN(0x59),
+ TXGBE_UKN(0x5A),
+ TXGBE_UKN(0x5B),
+ TXGBE_UKN(0x5C),
+ TXGBE_UKN(0x5D),
+ TXGBE_UKN(0x5E),
+ TXGBE_UKN(0x5F),
+ TXGBE_UKN(0x60),
+ TXGBE_UKN(0x61),
+ TXGBE_UKN(0x62),
+ TXGBE_UKN(0x63),
+ TXGBE_UKN(0x64),
+ TXGBE_UKN(0x65),
+ TXGBE_UKN(0x66),
+ TXGBE_UKN(0x67),
+ TXGBE_UKN(0x68),
+ TXGBE_UKN(0x69),
+ TXGBE_UKN(0x6A),
+ TXGBE_UKN(0x6B),
+ TXGBE_UKN(0x6C),
+ TXGBE_UKN(0x6D),
+ TXGBE_UKN(0x6E),
+ TXGBE_UKN(0x6F),
+ TXGBE_UKN(0x70),
+ TXGBE_UKN(0x71),
+ TXGBE_UKN(0x72),
+ TXGBE_UKN(0x73),
+ TXGBE_UKN(0x74),
+ TXGBE_UKN(0x75),
+ TXGBE_UKN(0x76),
+ TXGBE_UKN(0x77),
+ TXGBE_UKN(0x78),
+ TXGBE_UKN(0x79),
+ TXGBE_UKN(0x7A),
+ TXGBE_UKN(0x7B),
+ TXGBE_UKN(0x7C),
+ TXGBE_UKN(0x7D),
+ TXGBE_UKN(0x7E),
+ TXGBE_UKN(0x7F),
+
+ /* IPv4 --> IPv4/IPv6 */
+ TXGBE_UKN(0x80),
+ TXGBE_PTT(0x81, IP, IPV4, IPIP, FGV4, NONE, PAY3),
+ TXGBE_PTT(0x82, IP, IPV4, IPIP, IPV4, NONE, PAY3),
+ TXGBE_PTT(0x83, IP, IPV4, IPIP, IPV4, UDP, PAY4),
+ TXGBE_PTT(0x84, IP, IPV4, IPIP, IPV4, TCP, PAY4),
+ TXGBE_PTT(0x85, IP, IPV4, IPIP, IPV4, SCTP, PAY4),
+ TXGBE_UKN(0x86),
+ TXGBE_UKN(0x87),
+ TXGBE_UKN(0x88),
+ TXGBE_PTT(0x89, IP, IPV4, IPIP, FGV6, NONE, PAY3),
+ TXGBE_PTT(0x8A, IP, IPV4, IPIP, IPV6, NONE, PAY3),
+ TXGBE_PTT(0x8B, IP, IPV4, IPIP, IPV6, UDP, PAY4),
+ TXGBE_PTT(0x8C, IP, IPV4, IPIP, IPV6, TCP, PAY4),
+ TXGBE_PTT(0x8D, IP, IPV4, IPIP, IPV6, SCTP, PAY4),
+ TXGBE_UKN(0x8E),
+ TXGBE_UKN(0x8F),
+
+ /* IPv4 --> GRE/NAT --> NONE/IPv4/IPv6 */
+ TXGBE_PTT(0x90, IP, IPV4, IG, NONE, NONE, PAY3),
+ TXGBE_PTT(0x91, IP, IPV4, IG, FGV4, NONE, PAY3),
+ TXGBE_PTT(0x92, IP, IPV4, IG, IPV4, NONE, PAY3),
+ TXGBE_PTT(0x93, IP, IPV4, IG, IPV4, UDP, PAY4),
+ TXGBE_PTT(0x94, IP, IPV4, IG, IPV4, TCP, PAY4),
+ TXGBE_PTT(0x95, IP, IPV4, IG, IPV4, SCTP, PAY4),
+ TXGBE_UKN(0x96),
+ TXGBE_UKN(0x97),
+ TXGBE_UKN(0x98),
+ TXGBE_PTT(0x99, IP, IPV4, IG, FGV6, NONE, PAY3),
+ TXGBE_PTT(0x9A, IP, IPV4, IG, IPV6, NONE, PAY3),
+ TXGBE_PTT(0x9B, IP, IPV4, IG, IPV6, UDP, PAY4),
+ TXGBE_PTT(0x9C, IP, IPV4, IG, IPV6, TCP, PAY4),
+ TXGBE_PTT(0x9D, IP, IPV4, IG, IPV6, SCTP, PAY4),
+ TXGBE_UKN(0x9E),
+ TXGBE_UKN(0x9F),
+
+ /* IPv4 --> GRE/NAT --> MAC --> NONE/IPv4/IPv6 */
+ TXGBE_PTT(0xA0, IP, IPV4, IGM, NONE, NONE, PAY3),
+ TXGBE_PTT(0xA1, IP, IPV4, IGM, FGV4, NONE, PAY3),
+ TXGBE_PTT(0xA2, IP, IPV4, IGM, IPV4, NONE, PAY3),
+ TXGBE_PTT(0xA3, IP, IPV4, IGM, IPV4, UDP, PAY4),
+ TXGBE_PTT(0xA4, IP, IPV4, IGM, IPV4, TCP, PAY4),
+ TXGBE_PTT(0xA5, IP, IPV4, IGM, IPV4, SCTP, PAY4),
+ TXGBE_UKN(0xA6),
+ TXGBE_UKN(0xA7),
+ TXGBE_UKN(0xA8),
+ TXGBE_PTT(0xA9, IP, IPV4, IGM, FGV6, NONE, PAY3),
+ TXGBE_PTT(0xAA, IP, IPV4, IGM, IPV6, NONE, PAY3),
+ TXGBE_PTT(0xAB, IP, IPV4, IGM, IPV6, UDP, PAY4),
+ TXGBE_PTT(0xAC, IP, IPV4, IGM, IPV6, TCP, PAY4),
+ TXGBE_PTT(0xAD, IP, IPV4, IGM, IPV6, SCTP, PAY4),
+ TXGBE_UKN(0xAE),
+ TXGBE_UKN(0xAF),
+
+ /* IPv4 --> GRE/NAT --> MAC+VLAN --> NONE/IPv4/IPv6 */
+ TXGBE_PTT(0xB0, IP, IPV4, IGMV, NONE, NONE, PAY3),
+ TXGBE_PTT(0xB1, IP, IPV4, IGMV, FGV4, NONE, PAY3),
+ TXGBE_PTT(0xB2, IP, IPV4, IGMV, IPV4, NONE, PAY3),
+ TXGBE_PTT(0xB3, IP, IPV4, IGMV, IPV4, UDP, PAY4),
+ TXGBE_PTT(0xB4, IP, IPV4, IGMV, IPV4, TCP, PAY4),
+ TXGBE_PTT(0xB5, IP, IPV4, IGMV, IPV4, SCTP, PAY4),
+ TXGBE_UKN(0xB6),
+ TXGBE_UKN(0xB7),
+ TXGBE_UKN(0xB8),
+ TXGBE_PTT(0xB9, IP, IPV4, IGMV, FGV6, NONE, PAY3),
+ TXGBE_PTT(0xBA, IP, IPV4, IGMV, IPV6, NONE, PAY3),
+ TXGBE_PTT(0xBB, IP, IPV4, IGMV, IPV6, UDP, PAY4),
+ TXGBE_PTT(0xBC, IP, IPV4, IGMV, IPV6, TCP, PAY4),
+ TXGBE_PTT(0xBD, IP, IPV4, IGMV, IPV6, SCTP, PAY4),
+ TXGBE_UKN(0xBE),
+ TXGBE_UKN(0xBF),
+
+ /* IPv6 --> IPv4/IPv6 */
+ TXGBE_UKN(0xC0),
+ TXGBE_PTT(0xC1, IP, IPV6, IPIP, FGV4, NONE, PAY3),
+ TXGBE_PTT(0xC2, IP, IPV6, IPIP, IPV4, NONE, PAY3),
+ TXGBE_PTT(0xC3, IP, IPV6, IPIP, IPV4, UDP, PAY4),
+ TXGBE_PTT(0xC4, IP, IPV6, IPIP, IPV4, TCP, PAY4),
+ TXGBE_PTT(0xC5, IP, IPV6, IPIP, IPV4, SCTP, PAY4),
+ TXGBE_UKN(0xC6),
+ TXGBE_UKN(0xC7),
+ TXGBE_UKN(0xC8),
+ TXGBE_PTT(0xC9, IP, IPV6, IPIP, FGV6, NONE, PAY3),
+ TXGBE_PTT(0xCA, IP, IPV6, IPIP, IPV6, NONE, PAY3),
+ TXGBE_PTT(0xCB, IP, IPV6, IPIP, IPV6, UDP, PAY4),
+ TXGBE_PTT(0xCC, IP, IPV6, IPIP, IPV6, TCP, PAY4),
+ TXGBE_PTT(0xCD, IP, IPV6, IPIP, IPV6, SCTP, PAY4),
+ TXGBE_UKN(0xCE),
+ TXGBE_UKN(0xCF),
+
+ /* IPv6 --> GRE/NAT -> NONE/IPv4/IPv6 */
+ TXGBE_PTT(0xD0, IP, IPV6, IG, NONE, NONE, PAY3),
+ TXGBE_PTT(0xD1, IP, IPV6, IG, FGV4, NONE, PAY3),
+ TXGBE_PTT(0xD2, IP, IPV6, IG, IPV4, NONE, PAY3),
+ TXGBE_PTT(0xD3, IP, IPV6, IG, IPV4, UDP, PAY4),
+ TXGBE_PTT(0xD4, IP, IPV6, IG, IPV4, TCP, PAY4),
+ TXGBE_PTT(0xD5, IP, IPV6, IG, IPV4, SCTP, PAY4),
+ TXGBE_UKN(0xD6),
+ TXGBE_UKN(0xD7),
+ TXGBE_UKN(0xD8),
+ TXGBE_PTT(0xD9, IP, IPV6, IG, FGV6, NONE, PAY3),
+ TXGBE_PTT(0xDA, IP, IPV6, IG, IPV6, NONE, PAY3),
+ TXGBE_PTT(0xDB, IP, IPV6, IG, IPV6, UDP, PAY4),
+ TXGBE_PTT(0xDC, IP, IPV6, IG, IPV6, TCP, PAY4),
+ TXGBE_PTT(0xDD, IP, IPV6, IG, IPV6, SCTP, PAY4),
+ TXGBE_UKN(0xDE),
+ TXGBE_UKN(0xDF),
+
+ /* IPv6 --> GRE/NAT -> MAC -> NONE/IPv4/IPv6 */
+ TXGBE_PTT(0xE0, IP, IPV6, IGM, NONE, NONE, PAY3),
+ TXGBE_PTT(0xE1, IP, IPV6, IGM, FGV4, NONE, PAY3),
+ TXGBE_PTT(0xE2, IP, IPV6, IGM, IPV4, NONE, PAY3),
+ TXGBE_PTT(0xE3, IP, IPV6, IGM, IPV4, UDP, PAY4),
+ TXGBE_PTT(0xE4, IP, IPV6, IGM, IPV4, TCP, PAY4),
+ TXGBE_PTT(0xE5, IP, IPV6, IGM, IPV4, SCTP, PAY4),
+ TXGBE_UKN(0xE6),
+ TXGBE_UKN(0xE7),
+ TXGBE_UKN(0xE8),
+ TXGBE_PTT(0xE9, IP, IPV6, IGM, FGV6, NONE, PAY3),
+ TXGBE_PTT(0xEA, IP, IPV6, IGM, IPV6, NONE, PAY3),
+ TXGBE_PTT(0xEB, IP, IPV6, IGM, IPV6, UDP, PAY4),
+ TXGBE_PTT(0xEC, IP, IPV6, IGM, IPV6, TCP, PAY4),
+ TXGBE_PTT(0xED, IP, IPV6, IGM, IPV6, SCTP, PAY4),
+ TXGBE_UKN(0xEE),
+ TXGBE_UKN(0xEF),
+
+	/* IPv6 --> GRE/NAT --> MAC+VLAN --> NONE/IPv4/IPv6 */
+ TXGBE_PTT(0xF0, IP, IPV6, IGMV, NONE, NONE, PAY3),
+ TXGBE_PTT(0xF1, IP, IPV6, IGMV, FGV4, NONE, PAY3),
+ TXGBE_PTT(0xF2, IP, IPV6, IGMV, IPV4, NONE, PAY3),
+ TXGBE_PTT(0xF3, IP, IPV6, IGMV, IPV4, UDP, PAY4),
+ TXGBE_PTT(0xF4, IP, IPV6, IGMV, IPV4, TCP, PAY4),
+ TXGBE_PTT(0xF5, IP, IPV6, IGMV, IPV4, SCTP, PAY4),
+ TXGBE_UKN(0xF6),
+ TXGBE_UKN(0xF7),
+ TXGBE_UKN(0xF8),
+ TXGBE_PTT(0xF9, IP, IPV6, IGMV, FGV6, NONE, PAY3),
+ TXGBE_PTT(0xFA, IP, IPV6, IGMV, IPV6, NONE, PAY3),
+ TXGBE_PTT(0xFB, IP, IPV6, IGMV, IPV6, UDP, PAY4),
+ TXGBE_PTT(0xFC, IP, IPV6, IGMV, IPV6, TCP, PAY4),
+ TXGBE_PTT(0xFD, IP, IPV6, IGMV, IPV6, SCTP, PAY4),
+ TXGBE_UKN(0xFE),
+ TXGBE_UKN(0xFF),
+};
+
+
+void txgbe_init_mac_link_ops(struct txgbe_hw *hw)
+{
+ struct txgbe_mac_info *mac = &hw->mac;
+
+ DEBUGFUNC("\n");
+
+	/*
+	 * The tx laser control ops are used for SFP+ fiber when MNG is
+	 * not enabled; the multispeed-fiber handlers serve all media
+	 * types here.
+	 */
+	mac->ops.disable_tx_laser = txgbe_disable_tx_laser_multispeed_fiber;
+	mac->ops.enable_tx_laser = txgbe_enable_tx_laser_multispeed_fiber;
+	mac->ops.flap_tx_laser = txgbe_flap_tx_laser_multispeed_fiber;
+
+ if (hw->phy.multispeed_fiber) {
+ /* Set up dual speed SFP+ support */
+ mac->ops.setup_link = txgbe_setup_mac_link_multispeed_fiber;
+ mac->ops.setup_mac_link = txgbe_setup_mac_link;
+ mac->ops.set_rate_select_speed =
+ txgbe_set_hard_rate_select_speed;
+ } else {
+ mac->ops.setup_link = txgbe_setup_mac_link;
+ mac->ops.set_rate_select_speed =
+ txgbe_set_hard_rate_select_speed;
+ }
+}
+
+/**
+ * txgbe_init_phy_ops - PHY/SFP specific init
+ * @hw: pointer to hardware structure
+ *
+ * Initialize any function pointers that were not able to be
+ * set during init_shared_code because the PHY/SFP type was
+ * not known. Perform the SFP init if necessary.
+ *
+ **/
+s32 txgbe_init_phy_ops(struct txgbe_hw *hw)
+{
+ struct txgbe_mac_info *mac = &hw->mac;
+ s32 ret_val = 0;
+
+ DEBUGFUNC("\n");
+
+ txgbe_init_i2c(hw);
+ /* Identify the PHY or SFP module */
+ ret_val = TCALL(hw, phy.ops.identify);
+ if (ret_val == TXGBE_ERR_SFP_NOT_SUPPORTED)
+ goto init_phy_ops_out;
+
+ /* Setup function pointers based on detected SFP module and speeds */
+ txgbe_init_mac_link_ops(hw);
+ if (hw->phy.sfp_type != txgbe_sfp_type_unknown)
+ hw->phy.ops.reset = NULL;
+
+ /* If copper media, overwrite with copper function pointers */
+ if (TCALL(hw, mac.ops.get_media_type) == txgbe_media_type_copper) {
+ hw->phy.type = txgbe_phy_xaui;
+ if ((hw->subsystem_id & 0xF0) != TXGBE_ID_SFI_XAUI) {
+ mac->ops.setup_link = txgbe_setup_copper_link;
+ mac->ops.get_link_capabilities =
+ txgbe_get_copper_link_capabilities;
+ }
+ }
+
+init_phy_ops_out:
+ return ret_val;
+}
+
+
+/**
+ * txgbe_init_ops - Inits func ptrs and MAC type
+ * @hw: pointer to hardware structure
+ *
+ * Initialize the function pointers and assign the MAC type for sapphire.
+ * Does not touch the hardware.
+ **/
+
+s32 txgbe_init_ops(struct txgbe_hw *hw)
+{
+ struct txgbe_mac_info *mac = &hw->mac;
+ struct txgbe_phy_info *phy = &hw->phy;
+ struct txgbe_eeprom_info *eeprom = &hw->eeprom;
+ struct txgbe_flash_info *flash = &hw->flash;
+ s32 ret_val = 0;
+
+ DEBUGFUNC("\n");
+
+ /* PHY */
+ phy->ops.reset = txgbe_reset_phy;
+ phy->ops.read_reg = txgbe_read_phy_reg;
+ phy->ops.write_reg = txgbe_write_phy_reg;
+ phy->ops.read_reg_mdi = txgbe_read_phy_reg_mdi;
+ phy->ops.write_reg_mdi = txgbe_write_phy_reg_mdi;
+ phy->ops.setup_link = txgbe_setup_phy_link;
+ phy->ops.setup_link_speed = txgbe_setup_phy_link_speed;
+ phy->ops.read_i2c_byte = txgbe_read_i2c_byte;
+ phy->ops.write_i2c_byte = txgbe_write_i2c_byte;
+ phy->ops.read_i2c_sff8472 = txgbe_read_i2c_sff8472;
+ phy->ops.read_i2c_eeprom = txgbe_read_i2c_eeprom;
+ phy->ops.write_i2c_eeprom = txgbe_write_i2c_eeprom;
+ phy->ops.identify_sfp = txgbe_identify_module;
+ phy->sfp_type = txgbe_sfp_type_unknown;
+ phy->ops.check_overtemp = txgbe_tn_check_overtemp;
+ phy->ops.identify = txgbe_identify_phy;
+ phy->ops.init = txgbe_init_phy_ops;
+
+ /* MAC */
+ mac->ops.init_hw = txgbe_init_hw;
+ mac->ops.clear_hw_cntrs = txgbe_clear_hw_cntrs;
+ mac->ops.get_mac_addr = txgbe_get_mac_addr;
+ mac->ops.stop_adapter = txgbe_stop_adapter;
+ mac->ops.get_bus_info = txgbe_get_bus_info;
+ mac->ops.set_lan_id = txgbe_set_lan_id_multi_port_pcie;
+ mac->ops.acquire_swfw_sync = txgbe_acquire_swfw_sync;
+ mac->ops.release_swfw_sync = txgbe_release_swfw_sync;
+ mac->ops.reset_hw = txgbe_reset_hw;
+ mac->ops.get_media_type = txgbe_get_media_type;
+ mac->ops.disable_sec_rx_path = txgbe_disable_sec_rx_path;
+ mac->ops.enable_sec_rx_path = txgbe_enable_sec_rx_path;
+ mac->ops.enable_rx_dma = txgbe_enable_rx_dma;
+ mac->ops.start_hw = txgbe_start_hw;
+ mac->ops.get_san_mac_addr = txgbe_get_san_mac_addr;
+ mac->ops.set_san_mac_addr = txgbe_set_san_mac_addr;
+ mac->ops.get_device_caps = txgbe_get_device_caps;
+ mac->ops.get_wwn_prefix = txgbe_get_wwn_prefix;
+ mac->ops.setup_eee = txgbe_setup_eee;
+
+ /* LEDs */
+ mac->ops.led_on = txgbe_led_on;
+ mac->ops.led_off = txgbe_led_off;
+
+ /* RAR, Multicast, VLAN */
+ mac->ops.set_rar = txgbe_set_rar;
+ mac->ops.clear_rar = txgbe_clear_rar;
+ mac->ops.init_rx_addrs = txgbe_init_rx_addrs;
+ mac->ops.update_uc_addr_list = txgbe_update_uc_addr_list;
+ mac->ops.update_mc_addr_list = txgbe_update_mc_addr_list;
+ mac->ops.enable_mc = txgbe_enable_mc;
+ mac->ops.disable_mc = txgbe_disable_mc;
+ mac->ops.enable_rx = txgbe_enable_rx;
+ mac->ops.disable_rx = txgbe_disable_rx;
+ mac->ops.set_vmdq_san_mac = txgbe_set_vmdq_san_mac;
+ mac->ops.insert_mac_addr = txgbe_insert_mac_addr;
+ mac->rar_highwater = 1;
+ mac->ops.set_vfta = txgbe_set_vfta;
+ mac->ops.set_vlvf = txgbe_set_vlvf;
+ mac->ops.clear_vfta = txgbe_clear_vfta;
+ mac->ops.init_uta_tables = txgbe_init_uta_tables;
+ mac->ops.set_mac_anti_spoofing = txgbe_set_mac_anti_spoofing;
+ mac->ops.set_vlan_anti_spoofing = txgbe_set_vlan_anti_spoofing;
+ mac->ops.set_ethertype_anti_spoofing =
+ txgbe_set_ethertype_anti_spoofing;
+
+ /* Flow Control */
+ mac->ops.fc_enable = txgbe_fc_enable;
+ mac->ops.setup_fc = txgbe_setup_fc;
+
+ /* Link */
+ mac->ops.get_link_capabilities = txgbe_get_link_capabilities;
+ mac->ops.check_link = txgbe_check_mac_link;
+ mac->ops.setup_rxpba = txgbe_set_rxpba;
+ mac->mcft_size = TXGBE_SP_MC_TBL_SIZE;
+ mac->vft_size = TXGBE_SP_VFT_TBL_SIZE;
+ mac->num_rar_entries = TXGBE_SP_RAR_ENTRIES;
+ mac->rx_pb_size = TXGBE_SP_RX_PB_SIZE;
+ mac->max_rx_queues = TXGBE_SP_MAX_RX_QUEUES;
+ mac->max_tx_queues = TXGBE_SP_MAX_TX_QUEUES;
+ mac->max_msix_vectors = txgbe_get_pcie_msix_count(hw);
+
+	mac->arc_subsystem_valid = !!(rd32(hw, TXGBE_MIS_ST) &
+				      TXGBE_MIS_ST_MNG_INIT_DN);
+
+ hw->mbx.ops.init_params = txgbe_init_mbx_params_pf;
+
+ /* EEPROM */
+ eeprom->ops.init_params = txgbe_init_eeprom_params;
+ eeprom->ops.calc_checksum = txgbe_calc_eeprom_checksum;
+ eeprom->ops.read = txgbe_read_ee_hostif;
+ eeprom->ops.read_buffer = txgbe_read_ee_hostif_buffer;
+ eeprom->ops.write = txgbe_write_ee_hostif;
+ eeprom->ops.write_buffer = txgbe_write_ee_hostif_buffer;
+ eeprom->ops.update_checksum = txgbe_update_eeprom_checksum;
+ eeprom->ops.validate_checksum = txgbe_validate_eeprom_checksum;
+
+ /* FLASH */
+ flash->ops.init_params = txgbe_init_flash_params;
+ flash->ops.read_buffer = txgbe_read_flash_buffer;
+ flash->ops.write_buffer = txgbe_write_flash_buffer;
+
+ /* Manageability interface */
+ mac->ops.set_fw_drv_ver = txgbe_set_fw_drv_ver;
+
+ mac->ops.get_thermal_sensor_data =
+ txgbe_get_thermal_sensor_data;
+ mac->ops.init_thermal_sensor_thresh =
+ txgbe_init_thermal_sensor_thresh;
+
+ return ret_val;
+}
+
+/**
+ * txgbe_get_link_capabilities - Determines link capabilities
+ * @hw: pointer to hardware structure
+ * @speed: pointer to link speed
+ * @autoneg: true when autoneg or autotry is enabled
+ *
+ * Determines the link capabilities by reading the AUTOC register.
+ **/
+s32 txgbe_get_link_capabilities(struct txgbe_hw *hw,
+ u32 *speed,
+ bool *autoneg)
+{
+ s32 status = 0;
+ u32 sr_pcs_ctl, sr_pma_mmd_ctl1, sr_an_mmd_ctl;
+ u32 sr_an_mmd_adv_reg2;
+
+ DEBUGFUNC("\n");
+
+ /* Check if 1G SFP module. */
+ if (hw->phy.sfp_type == txgbe_sfp_type_1g_cu_core0 ||
+ hw->phy.sfp_type == txgbe_sfp_type_1g_cu_core1 ||
+ hw->phy.sfp_type == txgbe_sfp_type_1g_lx_core0 ||
+ hw->phy.sfp_type == txgbe_sfp_type_1g_lx_core1 ||
+ hw->phy.sfp_type == txgbe_sfp_type_1g_sx_core0 ||
+ hw->phy.sfp_type == txgbe_sfp_type_1g_sx_core1) {
+ *speed = TXGBE_LINK_SPEED_1GB_FULL;
+ *autoneg = false;
+ } else if (hw->phy.multispeed_fiber) {
+ *speed = TXGBE_LINK_SPEED_10GB_FULL |
+ TXGBE_LINK_SPEED_1GB_FULL;
+ *autoneg = true;
+ }
+ /* SFP */
+ else if (txgbe_get_media_type(hw) == txgbe_media_type_fiber) {
+ *speed = TXGBE_LINK_SPEED_10GB_FULL;
+ *autoneg = false;
+ }
+ /* XAUI */
+ else if ((txgbe_get_media_type(hw) == txgbe_media_type_copper) &&
+ ((hw->subsystem_id & 0xF0) == TXGBE_ID_XAUI ||
+ (hw->subsystem_id & 0xF0) == TXGBE_ID_SFI_XAUI)) {
+ *speed = TXGBE_LINK_SPEED_10GB_FULL;
+ *autoneg = false;
+ hw->phy.link_mode = TXGBE_PHYSICAL_LAYER_10GBASE_T;
+ }
+ /* SGMII */
+ else if ((hw->subsystem_id & 0xF0) == TXGBE_ID_SGMII) {
+ *speed = TXGBE_LINK_SPEED_1GB_FULL |
+ TXGBE_LINK_SPEED_100_FULL |
+ TXGBE_LINK_SPEED_10_FULL;
+ *autoneg = false;
+ hw->phy.link_mode = TXGBE_PHYSICAL_LAYER_1000BASE_T |
+ TXGBE_PHYSICAL_LAYER_100BASE_TX;
+ /* MAC XAUI */
+ } else if ((hw->subsystem_id & 0xF0) == TXGBE_ID_MAC_XAUI) {
+ *speed = TXGBE_LINK_SPEED_10GB_FULL;
+ *autoneg = false;
+ hw->phy.link_mode = TXGBE_PHYSICAL_LAYER_10GBASE_KX4;
+ /* MAC SGMII */
+ } else if ((hw->subsystem_id & 0xF0) == TXGBE_ID_MAC_SGMII) {
+ *speed = TXGBE_LINK_SPEED_1GB_FULL;
+ *autoneg = false;
+ hw->phy.link_mode = TXGBE_PHYSICAL_LAYER_1000BASE_KX;
+ }
+ /* KR KX KX4 */
+ else {
+ /*
+ * Determine link capabilities based on the stored value,
+ * which represents EEPROM defaults. If value has not
+ * been stored, use the current register values.
+ */
+ if (hw->mac.orig_link_settings_stored) {
+ sr_pcs_ctl = hw->mac.orig_sr_pcs_ctl2;
+ sr_pma_mmd_ctl1 = hw->mac.orig_sr_pma_mmd_ctl1;
+ sr_an_mmd_ctl = hw->mac.orig_sr_an_mmd_ctl;
+ sr_an_mmd_adv_reg2 = hw->mac.orig_sr_an_mmd_adv_reg2;
+ } else {
+ sr_pcs_ctl = txgbe_rd32_epcs(hw, TXGBE_SR_PCS_CTL2);
+ sr_pma_mmd_ctl1 = txgbe_rd32_epcs(hw,
+ TXGBE_SR_PMA_MMD_CTL1);
+ sr_an_mmd_ctl = txgbe_rd32_epcs(hw,
+ TXGBE_SR_AN_MMD_CTL);
+ sr_an_mmd_adv_reg2 = txgbe_rd32_epcs(hw,
+ TXGBE_SR_AN_MMD_ADV_REG2);
+ }
+
+ if ((sr_pcs_ctl & TXGBE_SR_PCS_CTL2_PCS_TYPE_SEL_MASK) ==
+ TXGBE_SR_PCS_CTL2_PCS_TYPE_SEL_X &&
+ (sr_pma_mmd_ctl1 & TXGBE_SR_PMA_MMD_CTL1_SPEED_SEL_MASK)
+ == TXGBE_SR_PMA_MMD_CTL1_SPEED_SEL_1G &&
+ (sr_an_mmd_ctl & TXGBE_SR_AN_MMD_CTL_ENABLE) == 0) {
+ /* 1G or KX - no backplane auto-negotiation */
+ *speed = TXGBE_LINK_SPEED_1GB_FULL;
+ *autoneg = false;
+ hw->phy.link_mode = TXGBE_PHYSICAL_LAYER_1000BASE_KX;
+ } else if ((sr_pcs_ctl & TXGBE_SR_PCS_CTL2_PCS_TYPE_SEL_MASK) ==
+ TXGBE_SR_PCS_CTL2_PCS_TYPE_SEL_X &&
+ (sr_pma_mmd_ctl1 & TXGBE_SR_PMA_MMD_CTL1_SPEED_SEL_MASK)
+ == TXGBE_SR_PMA_MMD_CTL1_SPEED_SEL_10G &&
+ (sr_an_mmd_ctl & TXGBE_SR_AN_MMD_CTL_ENABLE) == 0) {
+ *speed = TXGBE_LINK_SPEED_10GB_FULL;
+ *autoneg = false;
+ hw->phy.link_mode = TXGBE_PHYSICAL_LAYER_10GBASE_KX4;
+ } else if ((sr_pcs_ctl & TXGBE_SR_PCS_CTL2_PCS_TYPE_SEL_MASK) ==
+ TXGBE_SR_PCS_CTL2_PCS_TYPE_SEL_R &&
+ (sr_an_mmd_ctl & TXGBE_SR_AN_MMD_CTL_ENABLE) == 0) {
+			/* 10 GbE serial link (KR - no backplane autoneg) */
+ *speed = TXGBE_LINK_SPEED_10GB_FULL;
+ *autoneg = false;
+ hw->phy.link_mode = TXGBE_PHYSICAL_LAYER_10GBASE_KR;
+ } else if ((sr_an_mmd_ctl & TXGBE_SR_AN_MMD_CTL_ENABLE)) {
+ /* KX/KX4/KR backplane auto-negotiation enable */
+ *speed = TXGBE_LINK_SPEED_UNKNOWN;
+ if (sr_an_mmd_adv_reg2 &
+ TXGBE_SR_AN_MMD_ADV_REG2_BP_TYPE_KR)
+ *speed |= TXGBE_LINK_SPEED_10GB_FULL;
+ if (sr_an_mmd_adv_reg2 &
+ TXGBE_SR_AN_MMD_ADV_REG2_BP_TYPE_KX4)
+ *speed |= TXGBE_LINK_SPEED_10GB_FULL;
+ if (sr_an_mmd_adv_reg2 &
+ TXGBE_SR_AN_MMD_ADV_REG2_BP_TYPE_KX)
+ *speed |= TXGBE_LINK_SPEED_1GB_FULL;
+ *autoneg = true;
+ hw->phy.link_mode = TXGBE_PHYSICAL_LAYER_10GBASE_KR |
+ TXGBE_PHYSICAL_LAYER_10GBASE_KX4 |
+ TXGBE_PHYSICAL_LAYER_1000BASE_KX;
+ } else {
+ status = TXGBE_ERR_LINK_SETUP;
+ goto out;
+ }
+ }
+
+out:
+ return status;
+}
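+
+/*
+ * Usage sketch (assumed caller; names are hypothetical): the returned
+ * *speed is a bitmask, not an enum, so a multispeed-fiber port reports
+ * several bits at once:
+ *
+ *	if (speed & TXGBE_LINK_SPEED_10GB_FULL)
+ *		... advertise 10G ...
+ *	if (speed & TXGBE_LINK_SPEED_1GB_FULL)
+ *		... advertise 1G ...
+ */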
+
+/**
+ * txgbe_get_media_type - Get media type
+ * @hw: pointer to hardware structure
+ *
+ * Returns the media type (fiber, copper, backplane)
+ **/
+enum txgbe_media_type txgbe_get_media_type(struct txgbe_hw *hw)
+{
+ enum txgbe_media_type media_type;
+ u8 device_type = hw->subsystem_id & 0xF0;
+
+ DEBUGFUNC("\n");
+
+ /* Detect if there is a copper PHY attached. */
+ switch (hw->phy.type) {
+ case txgbe_phy_cu_unknown:
+ case txgbe_phy_tn:
+ media_type = txgbe_media_type_copper;
+ goto out;
+ default:
+ break;
+ }
+
+ switch (device_type) {
+ case TXGBE_ID_MAC_XAUI:
+ case TXGBE_ID_MAC_SGMII:
+ case TXGBE_ID_KR_KX_KX4:
+ /* Default device ID is mezzanine card KX/KX4 */
+ media_type = txgbe_media_type_backplane;
+ break;
+ case TXGBE_ID_SFP:
+ media_type = txgbe_media_type_fiber;
+ break;
+ case TXGBE_ID_XAUI:
+ case TXGBE_ID_SGMII:
+ media_type = txgbe_media_type_copper;
+ break;
+ case TXGBE_ID_SFI_XAUI:
+ if (hw->bus.lan_id == 0)
+ media_type = txgbe_media_type_fiber;
+ else
+ media_type = txgbe_media_type_copper;
+ break;
+ default:
+ media_type = txgbe_media_type_unknown;
+ break;
+ }
+out:
+ return media_type;
+}
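+
+/*
+ * Example (sketch): callers branch on the returned media type, as
+ * txgbe_setup_mac_link() does further below:
+ *
+ *	if (txgbe_get_media_type(hw) == txgbe_media_type_fiber)
+ *		txgbe_set_link_to_sfi(hw, speed);
+ */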
+
+/**
+ * txgbe_stop_mac_link_on_d3 - Disables link on D3
+ * @hw: pointer to hardware structure
+ *
+ * Disables link during D3 power down sequence.
+ *
+ **/
+void txgbe_stop_mac_link_on_d3(struct txgbe_hw *hw)
+{
+	/* TODO: fix autoc2; the link is currently left untouched on D3 */
+	UNREFERENCED_PARAMETER(hw);
+}
+
+/**
+ * txgbe_disable_tx_laser_multispeed_fiber - Disable Tx laser
+ * @hw: pointer to hardware structure
+ *
+ * The base drivers may require better control over SFP+ module
+ * PHY states. This includes selectively shutting down the Tx
+ * laser on the PHY, effectively halting physical link.
+ **/
+void txgbe_disable_tx_laser_multispeed_fiber(struct txgbe_hw *hw)
+{
+ u32 esdp_reg = rd32(hw, TXGBE_GPIO_DR);
+
+	/* Blocked by MNG FW so bail */
+	if (txgbe_check_reset_blocked(hw))
+		return;
+
+ /* Disable Tx laser; allow 100us to go dark per spec */
+ esdp_reg |= TXGBE_GPIO_DR_1 | TXGBE_GPIO_DR_0;
+ wr32(hw, TXGBE_GPIO_DR, esdp_reg);
+ TXGBE_WRITE_FLUSH(hw);
+ usec_delay(100);
+}
+
+/**
+ * txgbe_enable_tx_laser_multispeed_fiber - Enable Tx laser
+ * @hw: pointer to hardware structure
+ *
+ * The base drivers may require better control over SFP+ module
+ * PHY states. This includes selectively turning on the Tx
+ * laser on the PHY, effectively starting physical link.
+ **/
+void txgbe_enable_tx_laser_multispeed_fiber(struct txgbe_hw *hw)
+{
+ /* Enable Tx laser; allow 100ms to light up */
+ wr32m(hw, TXGBE_GPIO_DR,
+ TXGBE_GPIO_DR_0 | TXGBE_GPIO_DR_1, 0);
+ TXGBE_WRITE_FLUSH(hw);
+ msec_delay(100);
+}
+
+/**
+ * txgbe_flap_tx_laser_multispeed_fiber - Flap Tx laser
+ * @hw: pointer to hardware structure
+ *
+ * When the driver changes the link speeds that it can support,
+ * it sets autotry_restart to true to indicate that we need to
+ * initiate a new autotry session with the link partner. To do
+ * so, we set the speed then disable and re-enable the Tx laser, to
+ * alert the link partner that it also needs to restart autotry on its
+ * end. This is consistent with true clause 37 autoneg, which also
+ * involves a loss of signal.
+ **/
+void txgbe_flap_tx_laser_multispeed_fiber(struct txgbe_hw *hw)
+{
+ DEBUGFUNC("\n");
+
+	/* Blocked by MNG FW so bail */
+	if (txgbe_check_reset_blocked(hw))
+		return;
+
+ if (hw->mac.autotry_restart) {
+ txgbe_disable_tx_laser_multispeed_fiber(hw);
+ txgbe_enable_tx_laser_multispeed_fiber(hw);
+ hw->mac.autotry_restart = false;
+ }
+}
+
+/**
+ * txgbe_set_hard_rate_select_speed - Set module link speed
+ * @hw: pointer to hardware structure
+ * @speed: link speed to set
+ *
+ * Set module link speed via RS0/RS1 rate select pins.
+ */
+void txgbe_set_hard_rate_select_speed(struct txgbe_hw *hw,
+ u32 speed)
+{
+ u32 esdp_reg = rd32(hw, TXGBE_GPIO_DR);
+
+ switch (speed) {
+ case TXGBE_LINK_SPEED_10GB_FULL:
+ esdp_reg |= TXGBE_GPIO_DR_5 | TXGBE_GPIO_DR_4;
+ break;
+ case TXGBE_LINK_SPEED_1GB_FULL:
+ esdp_reg &= ~(TXGBE_GPIO_DR_5 | TXGBE_GPIO_DR_4);
+ break;
+ default:
+ DEBUGOUT("Invalid fixed module speed\n");
+ return;
+ }
+
+ wr32(hw, TXGBE_GPIO_DDR,
+ TXGBE_GPIO_DDR_5 | TXGBE_GPIO_DDR_4 |
+ TXGBE_GPIO_DDR_1 | TXGBE_GPIO_DDR_0);
+
+ wr32(hw, TXGBE_GPIO_DR, esdp_reg);
+
+ TXGBE_WRITE_FLUSH(hw);
+}
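+
+/*
+ * Sketch of the usual multispeed-fiber retrain sequence built from the
+ * helpers above (an assumption about the caller, mirroring the pattern
+ * of similar 10G drivers; the real caller lives elsewhere):
+ *
+ *	txgbe_set_hard_rate_select_speed(hw, TXGBE_LINK_SPEED_10GB_FULL);
+ *	hw->mac.autotry_restart = true;
+ *	txgbe_flap_tx_laser_multispeed_fiber(hw);
+ */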
+
+s32 txgbe_enable_rx_adapter(struct txgbe_hw *hw)
+{
+ u32 value;
+
+	/* Assert the Rx adaptation request (bit 12 of RX_EQ_CTL) */
+	value = txgbe_rd32_epcs(hw, TXGBE_PHY_RX_EQ_CTL);
+	value |= 1 << 12;
+	txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL, value);
+
+	/* Poll (unbounded) until RX_AD_ACK reports a bit at or above bit 11 */
+	value = 0;
+	while (!(value >> 11)) {
+		value = txgbe_rd32_epcs(hw, TXGBE_PHY_RX_AD_ACK);
+		msleep(1);
+	}
+
+	/* Adaptation acknowledged; deassert the request bit */
+	value = txgbe_rd32_epcs(hw, TXGBE_PHY_RX_EQ_CTL);
+	value &= ~(1 << 12);
+	txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL, value);
+
+ return 0;
+}
+
+s32 txgbe_set_sgmii_an37_ability(struct txgbe_hw *hw)
+{
+ u32 value;
+
+ txgbe_wr32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1, 0x3002);
+	/* for sgmii + external phy, set to 0x0105 (mac sgmii mode) */
+	if ((hw->subsystem_id & 0xF0) == TXGBE_ID_SGMII)
+		txgbe_wr32_epcs(hw, TXGBE_SR_MII_MMD_AN_CTL, 0x0105);
+	/* for sgmii direct link, set to 0x010c (phy sgmii mode) */
+	if ((hw->subsystem_id & 0xF0) == TXGBE_ID_MAC_SGMII)
+		txgbe_wr32_epcs(hw, TXGBE_SR_MII_MMD_AN_CTL, 0x010c);
+	txgbe_wr32_epcs(hw, TXGBE_SR_MII_MMD_DIGI_CTL, 0x0200);
+	value = txgbe_rd32_epcs(hw, TXGBE_SR_MII_MMD_CTL);
+	/* Enable (bit 12) and restart (bit 9) clause-37 auto-negotiation */
+	value = (value & ~0x1200) | (0x1 << 12) | (0x1 << 9);
+	txgbe_wr32_epcs(hw, TXGBE_SR_MII_MMD_CTL, value);
+ return 0;
+}
+
+s32 txgbe_set_link_to_kr(struct txgbe_hw *hw, bool autoneg)
+{
+ u32 i;
+ s32 status = 0;
+ u32 value = 0;
+ struct txgbe_adapter *adapter = hw->back;
+
+ /* 1. Wait xpcs power-up good */
+ for (i = 0; i < TXGBE_XPCS_POWER_GOOD_MAX_POLLING_TIME; i++) {
+ if ((txgbe_rd32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS) &
+ TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_MASK) ==
+ TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_POWER_GOOD)
+ break;
+ msleep(10);
+ }
+ if (i == TXGBE_XPCS_POWER_GOOD_MAX_POLLING_TIME) {
+ status = TXGBE_ERR_XPCS_POWER_UP_FAILED;
+ goto out;
+ }
+	e_dev_info("Link mode is set to KR.\n");
+
+ txgbe_wr32_epcs(hw, 0x78001, 0x7);
+ txgbe_wr32_epcs(hw, 0x18035, 0x00FC);
+ txgbe_wr32_epcs(hw, 0x18055, 0x00FC);
+
+	/* 2. Configure xpcs AN-73: enable it here, then clear it again
+	 * below when backplane autoneg is not requested
+	 */
+	txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x3000);
+
+	txgbe_wr32_epcs(hw, 0x78003, 0x1);
+	if (adapter->backplane_an != 1) {
+		txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x0000);
+		txgbe_wr32_epcs(hw, 0x78003, 0x0);
+	}
+
+	if (KR_SET == 1 || adapter->ffe_set == TXGBE_BP_M_KR) {
+		e_dev_info("Set KR TX_EQ MAIN:%d PRE:%d POST:%d\n",
+			adapter->ffe_main, adapter->ffe_pre, adapter->ffe_post);
+		value = (0x1804 & ~0x3F3F);
+		value |= adapter->ffe_main << 8 | adapter->ffe_pre;
+		txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value);
+
+		value = (0x50 & ~0x7F) | (1 << 6) | adapter->ffe_post;
+		txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value);
+	}
+
+	if (KR_AN73_PRESET == 1)
+		txgbe_wr32_epcs(hw, 0x18037, 0x80);
+
+	if (KR_POLLING == 1) {
+		txgbe_wr32_epcs(hw, 0x18006, 0xffff);
+		txgbe_wr32_epcs(hw, 0x18008, 0xA697);
+	}
+
+	/* 3. Set VR_XS_PMA_Gen5_12G_MPLLA_CTRL3 Register */
+	/* Bit[10:0](MPLLA_BANDWIDTH) = 11'd123 (default: 11'd16) */
+	txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL3,
+			TXGBE_PHY_MPLLA_CTL3_MULTIPLIER_BW_10GBASER_KR);
+
+	/* 4. Set VR_XS_PMA_Gen5_12G_MISC_CTRL0 Register */
+	/* Bit[12:8](RX_VREF_CTRL) = 5'hF (default: 5'h11) */
+	txgbe_wr32_epcs(hw, TXGBE_PHY_MISC_CTL0, 0xCF00);
+
+	/* 5. Set VR_XS_PMA_Gen5_12G_RX_EQ_CTRL0 Register */
+	/* Bit[15:8](VGA1/2_GAIN_0) = 8'h77, Bit[7:5](CTLE_POLE_0) = 3'h2
+	 * Bit[4:0](CTLE_BOOST_0) = 4'hA
+	 */
+	txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL0, 0x774A);
+
+	/* 6. Set VR_MII_Gen5_12G_RX_GENCTRL3 Register */
+	/* Bit[2:0](LOS_TRSHLD_0) = 3'h4 (default: 3) */
+	txgbe_wr32_epcs(hw, TXGBE_PHY_RX_GEN_CTL3, 0x0004);
+
+	/* 7. Initialize the mode by setting VR XS or PCS MMD Digital */
+	/* Control1 Register Bit[15](VR_RST) */
+	txgbe_wr32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1, 0xA000);
+
+	/* wait phy initialization done */
+	for (i = 0; i < TXGBE_PHY_INIT_DONE_POLLING_TIME; i++) {
+		if ((txgbe_rd32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1) &
+		     TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1_VR_RST) == 0)
+			break;
+		msleep(100);
+	}
+	if (i == TXGBE_PHY_INIT_DONE_POLLING_TIME) {
+		status = TXGBE_ERR_PHY_INIT_NOT_DONE;
+		goto out;
+	}
+out:
+ return status;
+}
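+
+/*
+ * Worked example of the TX_EQ composition used in the KR path above
+ * (illustrative values, not recommended settings): with ffe_main = 40,
+ * ffe_pre = 4 and ffe_post = 16,
+ *
+ *	TX_EQ_CTL0 = (0x1804 & ~0x3F3F) | (40 << 8) | 4 = 0x2804
+ *	TX_EQ_CTL1 = (0x50 & ~0x7F) | (1 << 6) | 16     = 0x0050
+ */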
+
+s32 txgbe_set_link_to_kx4(struct txgbe_hw *hw, bool autoneg)
+{
+ u32 i;
+ s32 status = 0;
+ u32 value;
+ struct txgbe_adapter *adapter = hw->back;
+
+	/* check link status; if already set, skip setting it again */
+	if (hw->link_status == TXGBE_LINK_STATUS_KX4)
+		goto out;
+	e_dev_info("Link mode is set to KX4.\n");
+
+ /* 1. Wait xpcs power-up good */
+ for (i = 0; i < TXGBE_XPCS_POWER_GOOD_MAX_POLLING_TIME; i++) {
+ if ((txgbe_rd32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS) &
+ TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_MASK) ==
+ TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_POWER_GOOD)
+ break;
+ msleep(10);
+ }
+ if (i == TXGBE_XPCS_POWER_GOOD_MAX_POLLING_TIME) {
+ status = TXGBE_ERR_XPCS_POWER_UP_FAILED;
+ goto out;
+ }
+
+ wr32m(hw, TXGBE_MAC_TX_CFG, TXGBE_MAC_TX_CFG_TE,
+ ~TXGBE_MAC_TX_CFG_TE);
+
+ /* 2. Disable xpcs AN-73 */
+ if (!autoneg)
+ txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x0);
+ else
+ txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x3000);
+
+ if (hw->revision_id == TXGBE_SP_MPW) {
+ /* Disable PHY MPLLA */
+ txgbe_wr32_ephy(hw, 0x4, 0x2501);
+ /* Reset rx lane0-3 clock */
+ txgbe_wr32_ephy(hw, 0x1005, 0x4001);
+ txgbe_wr32_ephy(hw, 0x1105, 0x4001);
+ txgbe_wr32_ephy(hw, 0x1205, 0x4001);
+ txgbe_wr32_ephy(hw, 0x1305, 0x4001);
+ } else {
+ /* Disable PHY MPLLA for eth mode change(after ECO) */
+ txgbe_wr32_ephy(hw, 0x4, 0x250A);
+ TXGBE_WRITE_FLUSH(hw);
+ msleep(1);
+
+ /* Set the eth change_mode bit first in mis_rst register
+ * for corresponding LAN port
+ */
+ if (hw->bus.lan_id == 0)
+ wr32(hw, TXGBE_MIS_RST,
+ TXGBE_MIS_RST_LAN0_CHG_ETH_MODE);
+ else
+ wr32(hw, TXGBE_MIS_RST,
+ TXGBE_MIS_RST_LAN1_CHG_ETH_MODE);
+ }
+
+ /* Set SR PCS Control2 Register Bits[1:0] = 2'b01 PCS_TYPE_SEL: non KR */
+ txgbe_wr32_epcs(hw, TXGBE_SR_PCS_CTL2,
+ TXGBE_SR_PCS_CTL2_PCS_TYPE_SEL_X);
+ /* Set SR PMA MMD Control1 Register Bit[13] = 1'b1 SS13: 10G speed */
+ txgbe_wr32_epcs(hw, TXGBE_SR_PMA_MMD_CTL1,
+ TXGBE_SR_PMA_MMD_CTL1_SPEED_SEL_10G);
+
+ value = (0xf5f0 & ~0x7F0) | (0x5 << 8) | (0x7 << 5) | 0xF0;
+ txgbe_wr32_epcs(hw, TXGBE_PHY_TX_GENCTRL1, value);
+
+ if ((hw->subsystem_id & 0xF0) == TXGBE_ID_MAC_XAUI)
+ txgbe_wr32_epcs(hw, TXGBE_PHY_MISC_CTL0, 0xCF00);
+ else
+ txgbe_wr32_epcs(hw, TXGBE_PHY_MISC_CTL0, 0x4F00);
+
+ if (KX4_SET == 1 || adapter->ffe_set) {
+ e_dev_info("Set KX4 TX_EQ MAIN:%d PRE:%d POST:%d\n",
+ adapter->ffe_main, adapter->ffe_pre, adapter->ffe_post);
+ value = (0x1804 & ~0x3F3F);
+ value |= adapter->ffe_main << 8 | adapter->ffe_pre;
+ txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value);
+
+		value = (0x50 & ~0x7F) | (1 << 6) | adapter->ffe_post;
+ txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value);
+	} else {
+		value = (0x1804 & ~0x3F3F);
+		value |= 40 << 8;
+		txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value);
+
+		value = (0x50 & ~0x7F) | (1 << 6);
+		txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value);
+	}
+ for (i = 0; i < 4; i++) {
+ if (i == 0)
+ value = (0x45 & ~0xFFFF) | (0x7 << 12) | (0x7 << 8) | 0x6;
+ else
+ value = (0xff06 & ~0xFFFF) | (0x7 << 12) | (0x7 << 8) | 0x6;
+ txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL0 + i, value);
+ }
+
+ value = 0x0 & ~0x7777;
+ txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_ATT_LVL0, value);
+
+ txgbe_wr32_epcs(hw, TXGBE_PHY_DFE_TAP_CTL0, 0x0);
+
+ value = (0x6db & ~0xFFF) | (0x1 << 9) | (0x1 << 6) | (0x1 << 3) | 0x1;
+ txgbe_wr32_epcs(hw, TXGBE_PHY_RX_GEN_CTL3, value);
+
+ /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY MPLLA */
+ /* Control 0 Register Bit[7:0] = 8'd40 MPLLA_MULTIPLIER */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL0,
+ TXGBE_PHY_MPLLA_CTL0_MULTIPLIER_OTHER);
+ /* Set VR XS, PMA or MII Synopsys Enterprise Gen5 12G PHY MPLLA */
+ /* Control 3 Register Bit[10:0] = 11'd86 MPLLA_BANDWIDTH */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL3,
+ TXGBE_PHY_MPLLA_CTL3_MULTIPLIER_BW_OTHER);
+
+ /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO */
+ /* Calibration Load 0 Register Bit[12:0] = 13'd1360 VCO_LD_VAL_0 */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD0,
+ TXGBE_PHY_VCO_CAL_LD0_OTHER);
+
+ /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO */
+ /* Calibration Load 1 Register Bit[12:0] = 13'd1360 VCO_LD_VAL_1 */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD1,
+ TXGBE_PHY_VCO_CAL_LD0_OTHER);
+
+ /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO */
+ /* Calibration Load 2 Register Bit[12:0] = 13'd1360 VCO_LD_VAL_2 */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD2,
+ TXGBE_PHY_VCO_CAL_LD0_OTHER);
+ /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO */
+ /* Calibration Load 3 Register Bit[12:0] = 13'd1360 VCO_LD_VAL_3 */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD3,
+ TXGBE_PHY_VCO_CAL_LD0_OTHER);
+ /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO */
+ /* Calibration Reference 0 Register Bit[5:0] = 6'd34 VCO_REF_LD_0/1 */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_REF0,
+ 0x2222);
+
+ /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO */
+ /* Calibration Reference 1 Register Bit[5:0] = 6'd34 VCO_REF_LD_2/3 */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_REF1,
+ 0x2222);
+
+ /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY AFE-DFE */
+ /* Enable Register Bit[7:0] = 8'd0 AFE_EN_0/3_1, DFE_EN_0/3_1 */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_AFE_DFE_ENABLE,
+ 0x0);
+
+ /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Rx */
+ /* Equalization Control 4 Register Bit[3:0] = 4'd0 CONT_ADAPT_0/3_1 */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL,
+ 0x00F0);
+
+ /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Tx Rate */
+ /* Control Register Bit[14:12], Bit[10:8], Bit[6:4], Bit[2:0],
+ * all rates to 3'b010 TX0/1/2/3_RATE
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_TX_RATE_CTL,
+ 0x2222);
+
+ /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Rx Rate */
+ /* Control Register Bit[13:12], Bit[9:8], Bit[5:4], Bit[1:0],
+ * all rates to 2'b10 RX0/1/2/3_RATE
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_RX_RATE_CTL,
+ 0x2222);
+
+ /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Tx General */
+ /* Control 2 Register Bit[15:8] = 2'b01 TX0/1/2/3_WIDTH: 10bits */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_TX_GEN_CTL2,
+ 0x5500);
+ /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Rx General */
+ /* Control 2 Register Bit[15:8] = 2'b01 RX0/1/2/3_WIDTH: 10bits */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_RX_GEN_CTL2,
+ 0x5500);
+
+ /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY MPLLA Control
+ * 2 Register Bit[10:8] = 3'b010
+ * MPLLA_DIV16P5_CLK_EN=0, MPLLA_DIV10_CLK_EN=1, MPLLA_DIV8_CLK_EN=0
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL2,
+ TXGBE_PHY_MPLLA_CTL2_DIV_CLK_EN_10);
+
+ txgbe_wr32_epcs(hw, 0x1f0000, 0x0);
+ txgbe_wr32_epcs(hw, 0x1f8001, 0x0);
+ txgbe_wr32_epcs(hw, TXGBE_SR_MII_MMD_DIGI_CTL, 0x0);
+
+ if (KX4_TXRX_PIN == 1)
+ txgbe_wr32_epcs(hw, 0x38001, 0xff);
+ /* 10. Initialize the mode by setting VR XS or PCS MMD Digital Control1
+ * Register Bit[15](VR_RST)
+ */
+ txgbe_wr32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1, 0xA000);
+ /* wait phy initialization done */
+ for (i = 0; i < TXGBE_PHY_INIT_DONE_POLLING_TIME; i++) {
+ if ((txgbe_rd32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1) &
+ TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1_VR_RST) == 0)
+ break;
+ msleep(100);
+ }
+
+	if (i == TXGBE_PHY_INIT_DONE_POLLING_TIME) {
+		status = TXGBE_ERR_PHY_INIT_NOT_DONE;
+		goto out;
+	}
+
+	/* initialization succeeded, record the link status */
+	hw->link_status = TXGBE_LINK_STATUS_KX4;
+
+out:
+ return status;
+}
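+
+/*
+ * Note on the replicated constants above (sketch): KX4 runs four lanes,
+ * so the per-lane fields are written four times over - 0x2222 programs
+ * TX0..TX3_RATE (or RX0..RX3_RATE) to 3'b010 each, and 0x5500 programs
+ * the four 2-bit width fields to 2'b01 (10 bits) each.
+ */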
+
+s32 txgbe_set_link_to_kx(struct txgbe_hw *hw,
+ u32 speed,
+ bool autoneg)
+{
+ u32 i;
+ s32 status = 0;
+ u32 wdata = 0;
+ u32 value;
+ struct txgbe_adapter *adapter = hw->back;
+
+	/* check link status; if already set, skip setting it again */
+	if (hw->link_status == TXGBE_LINK_STATUS_KX)
+		goto out;
+	e_dev_info("Link mode is set to KX, speed = 0x%x\n", speed);
+
+ txgbe_wr32_epcs(hw, 0x18035, 0x00FC);
+ txgbe_wr32_epcs(hw, 0x18055, 0x00FC);
+
+ /* 1. Wait xpcs power-up good */
+ for (i = 0; i < TXGBE_XPCS_POWER_GOOD_MAX_POLLING_TIME; i++) {
+ if ((txgbe_rd32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS) &
+ TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_MASK) ==
+ TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_POWER_GOOD)
+ break;
+ msleep(10);
+ }
+ if (i == TXGBE_XPCS_POWER_GOOD_MAX_POLLING_TIME) {
+ status = TXGBE_ERR_XPCS_POWER_UP_FAILED;
+ goto out;
+ }
+
+ wr32m(hw, TXGBE_MAC_TX_CFG, TXGBE_MAC_TX_CFG_TE,
+ ~TXGBE_MAC_TX_CFG_TE);
+
+ /* 2. Disable xpcs AN-73 */
+ if (!autoneg)
+ txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x0);
+ else
+ txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x3000);
+
+ if (hw->revision_id == TXGBE_SP_MPW) {
+ /* Disable PHY MPLLA */
+ txgbe_wr32_ephy(hw, 0x4, 0x2401);
+ /* Reset rx lane0 clock */
+ txgbe_wr32_ephy(hw, 0x1005, 0x4001);
+ } else {
+ /* Disable PHY MPLLA for eth mode change(after ECO) */
+ txgbe_wr32_ephy(hw, 0x4, 0x240A);
+ TXGBE_WRITE_FLUSH(hw);
+ msleep(1);
+
+ /* Set the eth change_mode bit first in mis_rst register */
+ /* for corresponding LAN port */
+ if (hw->bus.lan_id == 0)
+ wr32(hw, TXGBE_MIS_RST,
+ TXGBE_MIS_RST_LAN0_CHG_ETH_MODE);
+ else
+ wr32(hw, TXGBE_MIS_RST,
+ TXGBE_MIS_RST_LAN1_CHG_ETH_MODE);
+ }
+
+ /* Set SR PCS Control2 Register Bits[1:0] = 2'b01 PCS_TYPE_SEL: non KR */
+ txgbe_wr32_epcs(hw, TXGBE_SR_PCS_CTL2,
+ TXGBE_SR_PCS_CTL2_PCS_TYPE_SEL_X);
+
+ /* Set SR PMA MMD Control1 Register Bit[13] = 1'b0 SS13: 1G speed */
+ txgbe_wr32_epcs(hw, TXGBE_SR_PMA_MMD_CTL1,
+ TXGBE_SR_PMA_MMD_CTL1_SPEED_SEL_1G);
+
+ /* Set SR MII MMD Control Register to corresponding speed: {Bit[6],
+ * Bit[13]}=[2'b00,2'b01,2'b10]->[10M,100M,1G]
+ */
+ if (speed == TXGBE_LINK_SPEED_100_FULL)
+ wdata = 0x2100;
+ else if (speed == TXGBE_LINK_SPEED_1GB_FULL)
+ wdata = 0x0140;
+ else if (speed == TXGBE_LINK_SPEED_10_FULL)
+ wdata = 0x0100;
+ txgbe_wr32_epcs(hw, TXGBE_SR_MII_MMD_CTL,
+ wdata);
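+	/*
+	 * The encodings above follow the standard clause-22 MII control
+	 * layout: bit 8 keeps full duplex while {bit 6, bit 13} select
+	 * the speed, hence 10M -> 0x0100, 100M -> 0x2100, 1G -> 0x0140.
+	 */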
+
+	value = (0xf5f0 & ~0x710) | (0x5 << 8) | 0x10;
+ txgbe_wr32_epcs(hw, TXGBE_PHY_TX_GENCTRL1, value);
+
+ if (KX_SGMII == 1)
+ txgbe_wr32_epcs(hw, TXGBE_PHY_MISC_CTL0, 0x4F00);
+ else
+ txgbe_wr32_epcs(hw, TXGBE_PHY_MISC_CTL0, 0xCF00);
+
+ if (KX_SET == 1 || adapter->ffe_set == TXGBE_BP_M_KX) {
+ e_dev_info("Set KX TX_EQ MAIN:%d PRE:%d POST:%d\n",
+ adapter->ffe_main, adapter->ffe_pre, adapter->ffe_post);
+ /* 5. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL0 Register Bit[13:8](TX_EQ_MAIN)
+ * = 6'd30, Bit[5:0](TX_EQ_PRE) = 6'd4
+ */
+ value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0);
+ value = (value & ~0x3F3F) | (adapter->ffe_main << 8) | adapter->ffe_pre;
+ txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value);
+ /* 6. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL1 Register Bit[6](TX_EQ_OVR_RIDE)
+ * = 1'b1, Bit[5:0](TX_EQ_POST) = 6'd36
+ */
+ value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1);
+ value = (value & ~0x7F) | adapter->ffe_post | (1 << 6);
+ txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value);
+ } else {
+ value = (0x1804 & ~0x3F3F) | (24 << 8) | 4;
+ txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value);
+
+ value = (0x50 & ~0x7F) | 16 | (1 << 6);
+ txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value);
+ }
+
+	for (i = 0; i < 4; i++) {
+		if (i)
+			value = 0xff06;
+		else
+			value = (0x45 & ~0xFFFF) | (0x7 << 12) | (0x7 << 8) | 0x6;
+		txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL0 + i, value);
+	}
+
+ value = 0x0 & ~0x7;
+ txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_ATT_LVL0, value);
+
+ txgbe_wr32_epcs(hw, TXGBE_PHY_DFE_TAP_CTL0, 0x0);
+
+ value = (0x6db & ~0x7) | 0x4;
+ txgbe_wr32_epcs(hw, TXGBE_PHY_RX_GEN_CTL3, value);
+
+ /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY MPLLA Control
+ * 0 Register Bit[7:0] = 8'd32 MPLLA_MULTIPLIER
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL0,
+ TXGBE_PHY_MPLLA_CTL0_MULTIPLIER_1GBASEX_KX);
+
+ /* Set VR XS, PMA or MII Synopsys Enterprise Gen5 12G PHY MPLLA Control 3
+ * Register Bit[10:0] = 11'd70 MPLLA_BANDWIDTH
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL3,
+ TXGBE_PHY_MPLLA_CTL3_MULTIPLIER_BW_1GBASEX_KX);
+
+ /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO
+ * Calibration Load 0 Register Bit[12:0] = 13'd1344 VCO_LD_VAL_0
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD0,
+ TXGBE_PHY_VCO_CAL_LD0_1GBASEX_KX);
+
+ txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD1, 0x549);
+ txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD2, 0x549);
+ txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD3, 0x549);
+
+ /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO
+ * Calibration Reference 0 Register Bit[5:0] = 6'd42 VCO_REF_LD_0
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_REF0,
+ TXGBE_PHY_VCO_CAL_REF0_LD0_1GBASEX_KX);
+
+ txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_REF1, 0x2929);
+
+ /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY AFE-DFE Enable
+ * Register Bit[4], Bit[0] = 1'b0 AFE_EN_0, DFE_EN_0
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_AFE_DFE_ENABLE,
+ 0x0);
+ /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Rx
+ * Equalization Control 4 Register Bit[0] = 1'b0 CONT_ADAPT_0
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL,
+ 0x0010);
+ /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Tx Rate
+ * Control Register Bit[2:0] = 3'b011 TX0_RATE
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_TX_RATE_CTL,
+ TXGBE_PHY_TX_RATE_CTL_TX0_RATE_1GBASEX_KX);
+
+ /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Rx Rate
+ * Control Register Bit[2:0] = 3'b011 RX0_RATE
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_RX_RATE_CTL,
+ TXGBE_PHY_RX_RATE_CTL_RX0_RATE_1GBASEX_KX);
+
+ /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Tx General
+ * Control 2 Register Bit[9:8] = 2'b01 TX0_WIDTH: 10bits
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_TX_GEN_CTL2,
+ TXGBE_PHY_TX_GEN_CTL2_TX0_WIDTH_OTHER);
+ /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Rx General
+ * Control 2 Register Bit[9:8] = 2'b01 RX0_WIDTH: 10bits
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_RX_GEN_CTL2,
+ TXGBE_PHY_RX_GEN_CTL2_RX0_WIDTH_OTHER);
+ /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY MPLLA Control
+ * 2 Register Bit[10:8] = 3'b010 MPLLA_DIV16P5_CLK_EN=0,
+ * MPLLA_DIV10_CLK_EN=1, MPLLA_DIV8_CLK_EN=0
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL2,
+ TXGBE_PHY_MPLLA_CTL2_DIV_CLK_EN_10);
+ /* VR MII MMD AN Control Register Bit[8] = 1'b1 MII_CTRL */
+ /* Set to 8bit MII (required in 10M/100M SGMII) */
+ txgbe_wr32_epcs(hw, TXGBE_SR_MII_MMD_AN_CTL,
+ 0x0100);
+
+ /* 10. Initialize the mode by setting VR XS or PCS MMD Digital Control1
+ * Register Bit[15](VR_RST)
+ */
+ txgbe_wr32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1, 0xA000);
+ /* wait phy initialization done */
+ for (i = 0; i < TXGBE_PHY_INIT_DONE_POLLING_TIME; i++) {
+ if ((txgbe_rd32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1) &
+ TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1_VR_RST) == 0)
+ break;
+ msleep(100);
+ }
+
+	if (i == TXGBE_PHY_INIT_DONE_POLLING_TIME) {
+		status = TXGBE_ERR_PHY_INIT_NOT_DONE;
+		goto out;
+	}
+
+	/* initialization succeeded, record the link status */
+	hw->link_status = TXGBE_LINK_STATUS_KX;
+
+out:
+ return status;
+}
+
+s32 txgbe_set_link_to_sfi(struct txgbe_hw *hw,
+ u32 speed)
+{
+ u32 i;
+ s32 status = 0;
+ u32 value = 0;
+ struct txgbe_adapter *adapter = hw->back;
+
+ /* Set the module link speed */
+	TCALL(hw, mac.ops.set_rate_select_speed, speed);
+
+	e_dev_info("Link mode is set to SFI.\n");
+ /* 1. Wait xpcs power-up good */
+ for (i = 0; i < TXGBE_XPCS_POWER_GOOD_MAX_POLLING_TIME; i++) {
+ if ((txgbe_rd32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS) &
+ TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_MASK) ==
+ TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_POWER_GOOD)
+ break;
+ msleep(10);
+ }
+ if (i == TXGBE_XPCS_POWER_GOOD_MAX_POLLING_TIME) {
+ status = TXGBE_ERR_XPCS_POWER_UP_FAILED;
+ goto out;
+ }
+
+ wr32m(hw, TXGBE_MAC_TX_CFG, TXGBE_MAC_TX_CFG_TE,
+ ~TXGBE_MAC_TX_CFG_TE);
+
+ /* 2. Disable xpcs AN-73 */
+ txgbe_wr32_epcs(hw, TXGBE_SR_AN_MMD_CTL, 0x0);
+
+ if (hw->revision_id != TXGBE_SP_MPW) {
+ /* Disable PHY MPLLA for eth mode change(after ECO) */
+ txgbe_wr32_ephy(hw, 0x4, 0x243A);
+ TXGBE_WRITE_FLUSH(hw);
+ msleep(1);
+ /* Set the eth change_mode bit first in mis_rst register
+ * for corresponding LAN port
+ */
+ if (hw->bus.lan_id == 0)
+ wr32(hw, TXGBE_MIS_RST,
+ TXGBE_MIS_RST_LAN0_CHG_ETH_MODE);
+ else
+ wr32(hw, TXGBE_MIS_RST,
+ TXGBE_MIS_RST_LAN1_CHG_ETH_MODE);
+ }
+ if (speed == TXGBE_LINK_SPEED_10GB_FULL) {
+ /* @. Set SR PCS Control2 Register Bits[1:0] = 2'b00 PCS_TYPE_SEL: KR */
+ txgbe_wr32_epcs(hw, TXGBE_SR_PCS_CTL2, 0);
+ value = txgbe_rd32_epcs(hw, TXGBE_SR_PMA_MMD_CTL1);
+ value = value | 0x2000;
+ txgbe_wr32_epcs(hw, TXGBE_SR_PMA_MMD_CTL1, value);
+ /* @. Set VR_XS_PMA_Gen5_12G_MPLLA_CTRL0 Register Bit[7:0] = 8'd33
+ * MPLLA_MULTIPLIER
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL0, 0x0021);
+ /* 3. Set VR_XS_PMA_Gen5_12G_MPLLA_CTRL3 Register
+ * Bit[10:0](MPLLA_BANDWIDTH) = 11'd0
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL3, 0);
+ value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_GENCTRL1);
+ value = (value & ~0x700) | 0x500;
+ txgbe_wr32_epcs(hw, TXGBE_PHY_TX_GENCTRL1, value);
+ /* 4.Set VR_XS_PMA_Gen5_12G_MISC_CTRL0 Register Bit[12:8](RX_VREF_CTRL)
+ * = 5'hF
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_MISC_CTL0, 0xCF00);
+ /* @. Set VR_XS_PMA_Gen5_12G_VCO_CAL_LD0 Register Bit[12:0] = 13'd1353
+ * VCO_LD_VAL_0
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD0, 0x0549);
+ /* @. Set VR_XS_PMA_Gen5_12G_VCO_CAL_REF0 Register Bit[5:0] = 6'd41
+ * VCO_REF_LD_0
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_REF0, 0x0029);
+ /* @. Set VR_XS_PMA_Gen5_12G_TX_RATE_CTRL Register Bit[2:0] = 3'b000
+ * TX0_RATE
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_TX_RATE_CTL, 0);
+ /* @. Set VR_XS_PMA_Gen5_12G_RX_RATE_CTRL Register Bit[2:0] = 3'b000
+ * RX0_RATE
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_RX_RATE_CTL, 0);
+ /* @. Set VR_XS_PMA_Gen5_12G_TX_GENCTRL2 Register Bit[9:8] = 2'b11
+ * TX0_WIDTH: 20bits
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_TX_GEN_CTL2, 0x0300);
+ /* @. Set VR_XS_PMA_Gen5_12G_RX_GENCTRL2 Register Bit[9:8] = 2'b11
+ * RX0_WIDTH: 20bits
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_RX_GEN_CTL2, 0x0300);
+ /* @. Set VR_XS_PMA_Gen5_12G_MPLLA_CTRL2 Register Bit[10:8] = 3'b110
+ * MPLLA_DIV16P5_CLK_EN=1, MPLLA_DIV10_CLK_EN=1, MPLLA_DIV8_CLK_EN=0
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL2, 0x0600);
+ if (SFI_SET == 1 || adapter->ffe_set) {
+ e_dev_info("Set SFI TX_EQ MAIN:%d PRE:%d POST:%d\n",
+ adapter->ffe_main, adapter->ffe_pre, adapter->ffe_post);
+ /* 5. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL0 Register Bit[13:8](TX_EQ_MAIN)
+ * = 6'd30, Bit[5:0](TX_EQ_PRE) = 6'd4
+ */
+ value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0);
+ value = (value & ~0x3F3F) | (adapter->ffe_main << 8) | adapter->ffe_pre;
+ txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value);
+ /* 6. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL1 Register Bit[6](TX_EQ_OVR_RIDE)
+ * = 1'b1, Bit[5:0](TX_EQ_POST) = 6'd36
+ */
+ value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1);
+ value = (value & ~0x7F) | adapter->ffe_post | (1 << 6);
+ txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value);
+ } else {
+ /* 5. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL0 Register Bit[13:8](TX_EQ_MAIN)
+ * = 6'd30, Bit[5:0](TX_EQ_PRE) = 6'd4
+ */
+ value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0);
+ value = (value & ~0x3F3F) | (24 << 8) | 4;
+ txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value);
+ /* 6. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL1 Register Bit[6](TX_EQ_OVR_RIDE)
+ * = 1'b1, Bit[5:0](TX_EQ_POST) = 6'd36
+ */
+ value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1);
+ value = (value & ~0x7F) | 16 | (1 << 6);
+ txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value);
+ }
+ if (hw->phy.sfp_type == txgbe_sfp_type_da_cu_core0 ||
+ hw->phy.sfp_type == txgbe_sfp_type_da_cu_core1) {
+ /* 7. Set VR_XS_PMA_Gen5_12G_RX_EQ_CTRL0 Register
+ * Bit[15:8](VGA1/2_GAIN_0) = 8'h77, Bit[7:5]
+ * (CTLE_POLE_0) = 3'h2, Bit[4:0](CTLE_BOOST_0) = 4'hF
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL0, 0x774F);
+
+ } else {
+ /* 7. Set VR_XS_PMA_Gen5_12G_RX_EQ_CTRL0 Register Bit[15:8]
+ * (VGA1/2_GAIN_0) = 8'h00, Bit[7:5](CTLE_POLE_0) = 3'h2,
+ * Bit[4:0](CTLE_BOOST_0) = 4'hA
+ */
+ value = txgbe_rd32_epcs(hw, TXGBE_PHY_RX_EQ_CTL0);
+ value = (value & ~0xFFFF) | (2 << 5) | 0x05;
+ txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL0, value);
+ }
+ value = txgbe_rd32_epcs(hw, TXGBE_PHY_RX_EQ_ATT_LVL0);
+ value = (value & ~0x7) | 0x0;
+ txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_ATT_LVL0, value);
+
+ if (hw->phy.sfp_type == txgbe_sfp_type_da_cu_core0 ||
+ hw->phy.sfp_type == txgbe_sfp_type_da_cu_core1) {
+ /* 8. Set VR_XS_PMA_Gen5_12G_DFE_TAP_CTRL0 Register Bit[7:0](DFE_TAP1_0)
+ * = 8'd20
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_DFE_TAP_CTL0, 0x0014);
+ value = txgbe_rd32_epcs(hw, TXGBE_PHY_AFE_DFE_ENABLE);
+ value = (value & ~0x11) | 0x11;
+ txgbe_wr32_epcs(hw, TXGBE_PHY_AFE_DFE_ENABLE, value);
+ } else {
+ /* 8. Set VR_XS_PMA_Gen5_12G_DFE_TAP_CTRL0 Register Bit[7:0](DFE_TAP1_0)
+ * = 8'd20
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_DFE_TAP_CTL0, 0xBE);
+ /* 9. Set VR_MII_Gen5_12G_AFE_DFE_EN_CTRL Register Bit[4](DFE_EN_0) =
+ * 1'b0, Bit[0](AFE_EN_0) = 1'b0
+ */
+ value = txgbe_rd32_epcs(hw, TXGBE_PHY_AFE_DFE_ENABLE);
+ value = (value & ~0x11) | 0x0;
+ txgbe_wr32_epcs(hw, TXGBE_PHY_AFE_DFE_ENABLE, value);
+ }
+ value = txgbe_rd32_epcs(hw, TXGBE_PHY_RX_EQ_CTL);
+ value = value & ~0x1;
+ txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL, value);
+ } else {
+ if (hw->revision_id == TXGBE_SP_MPW) {
+ /* Disable PHY MPLLA */
+ txgbe_wr32_ephy(hw, 0x4, 0x2401);
+ /* Reset rx lane0 clock */
+ txgbe_wr32_ephy(hw, 0x1005, 0x4001);
+ }
+		/* @. Set SR PCS Control2 Register Bits[1:0] = 2'b01
+		 * PCS_TYPE_SEL: non-KR
+		 */
+		txgbe_wr32_epcs(hw, TXGBE_SR_PCS_CTL2, 0x1);
+ /* Set SR PMA MMD Control1 Register Bit[13] = 1'b0 SS13: 1G speed */
+ txgbe_wr32_epcs(hw, TXGBE_SR_PMA_MMD_CTL1, 0x0000);
+ /* Set SR MII MMD Control Register to corresponding speed: */
+ txgbe_wr32_epcs(hw, TXGBE_SR_MII_MMD_CTL, 0x0140);
+
+ value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_GENCTRL1);
+ value = (value & ~0x710) | 0x500;
+ txgbe_wr32_epcs(hw, TXGBE_PHY_TX_GENCTRL1, value);
+ /* 4. Set VR_XS_PMA_Gen5_12G_MISC_CTRL0 Register Bit[12:8](RX_VREF_CTRL)
+ * = 5'hF
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_MISC_CTL0, 0xCF00);
+ /* 5. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL0 Register Bit[13:8](TX_EQ_MAIN)
+ * = 6'd30, Bit[5:0](TX_EQ_PRE) = 6'd4
+ */
+ value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0);
+ value = (value & ~0x3F3F) | (24 << 8) | 4;
+ txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value);
+ /* 6. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL1 Register Bit[6](TX_EQ_OVR_RIDE)
+ * = 1'b1, Bit[5:0](TX_EQ_POST) = 6'd36
+ */
+ value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1);
+ value = (value & ~0x7F) | 16 | (1 << 6);
+ txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value);
+ if (hw->phy.sfp_type == txgbe_sfp_type_da_cu_core0 ||
+ hw->phy.sfp_type == txgbe_sfp_type_da_cu_core1) {
+ txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL0, 0x774F);
+ } else {
+ /* 7. Set VR_XS_PMA_Gen5_12G_RX_EQ_CTRL0 Register Bit[15:8]
+ * (VGA1/2_GAIN_0) = 8'h00, Bit[7:5](CTLE_POLE_0) = 3'h2,
+ * Bit[4:0](CTLE_BOOST_0) = 4'hA
+ */
+ value = txgbe_rd32_epcs(hw, TXGBE_PHY_RX_EQ_CTL0);
+ value = (value & ~0xFFFF) | 0x7706;
+ txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL0, value);
+ }
+ value = txgbe_rd32_epcs(hw, TXGBE_PHY_RX_EQ_ATT_LVL0);
+ value = (value & ~0x7) | 0x0;
+ txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_ATT_LVL0, value);
+ /* 8. Set VR_XS_PMA_Gen5_12G_DFE_TAP_CTRL0 Register Bit[7:0](DFE_TAP1_0)
+ * = 8'd00
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_DFE_TAP_CTL0, 0x0);
+ /* Set VR_XS_PMA_Gen5_12G_RX_GENCTRL3 Register Bit[2:0] LOS_TRSHLD_0 = 4 */
+ value = txgbe_rd32_epcs(hw, TXGBE_PHY_RX_GEN_CTL3);
+ value = (value & ~0x7) | 0x4;
+ txgbe_wr32_epcs(hw, TXGBE_PHY_RX_GEN_CTL3, value);
+ /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY
+ * MPLLA Control 0 Register Bit[7:0] = 8'd32 MPLLA_MULTIPLIER
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL0, 0x0020);
+ /* Set VR XS, PMA or MII Synopsys Enterprise Gen5 12G PHY MPLLA Control
+ * 3 Register Bit[10:0] = 11'd70 MPLLA_BANDWIDTH
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL3, 0x0046);
+ /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO
+ * Calibration Load 0 Register Bit[12:0] = 13'd1344 VCO_LD_VAL_0
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD0, 0x0540);
+ /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO
+ * Calibration Reference 0 Register Bit[5:0] = 6'd42 VCO_REF_LD_0
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_VCO_CAL_REF0, 0x002A);
+ /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY AFE-DFE
+ * Enable Register Bit[4], Bit[0] = 1'b0 AFE_EN_0, DFE_EN_0
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_AFE_DFE_ENABLE, 0x0);
+ /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Rx
+ * Equalization Control 4 Register Bit[0] = 1'b0 CONT_ADAPT_0
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL, 0x0010);
+ /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Tx Rate
+ * Control Register Bit[2:0] = 3'b011 TX0_RATE
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_TX_RATE_CTL, 0x0003);
+ /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Rx Rate
+ * Control Register Bit[2:0] = 3'b011
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_RX_RATE_CTL, 0x0003);
+ /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Tx General
+ * Control 2 Register Bit[9:8] = 2'b01 TX0_WIDTH: 10bits
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_TX_GEN_CTL2, 0x0100);
+ /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Rx General
+ * Control 2 Register Bit[9:8] = 2'b01 RX0_WIDTH: 10bits
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_RX_GEN_CTL2, 0x0100);
+ /* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY MPLLA
+ * Control 2 Register Bit[10:8] = 3'b010 MPLLA_DIV16P5_CLK_EN=0,
+ * MPLLA_DIV10_CLK_EN=1, MPLLA_DIV8_CLK_EN=0
+ */
+ txgbe_wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL2, 0x0200);
+ /* VR MII MMD AN Control Register Bit[8] = 1'b1 MII_CTRL */
+ txgbe_wr32_epcs(hw, TXGBE_SR_MII_MMD_AN_CTL, 0x0100);
+ }
+ /* 10. Initialize the mode by setting VR XS or PCS MMD Digital Control1
+ * Register Bit[15](VR_RST)
+ */
+ txgbe_wr32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1, 0xA000);
+ /* wait phy initialization done */
+ for (i = 0; i < TXGBE_PHY_INIT_DONE_POLLING_TIME; i++) {
+ if ((txgbe_rd32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1) &
+ TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1_VR_RST) == 0)
+ break;
+ msleep(100);
+ }
+ if (i == TXGBE_PHY_INIT_DONE_POLLING_TIME) {
+ status = TXGBE_ERR_PHY_INIT_NOT_DONE;
+ goto out;
+ }
+
+out:
+ return status;
+}
+
+/**
+ * txgbe_setup_mac_link - Set MAC link speed
+ * @hw: pointer to hardware structure
+ * @speed: new link speed
+ * @autoneg_wait_to_complete: true when waiting for completion is needed
+ *
+ * Set the link speed in the AUTOC register and restarts link.
+ **/
+s32 txgbe_setup_mac_link(struct txgbe_hw *hw,
+ u32 speed,
+ bool autoneg_wait_to_complete)
+{
+ bool autoneg = false;
+ s32 status = 0;
+ u32 link_capabilities = TXGBE_LINK_SPEED_UNKNOWN;
+ struct txgbe_adapter *adapter = hw->back;
+ u32 link_speed = TXGBE_LINK_SPEED_UNKNOWN;
+ bool link_up = false;
+
+ UNREFERENCED_PARAMETER(autoneg_wait_to_complete);
+ DEBUGFUNC("\n");
+
+ /* Check to see if speed passed in is supported. */
+ status = TCALL(hw, mac.ops.get_link_capabilities,
+ &link_capabilities, &autoneg);
+ if (status)
+ goto out;
+
+ speed &= link_capabilities;
+
+ if (speed == TXGBE_LINK_SPEED_UNKNOWN) {
+ status = TXGBE_ERR_LINK_SETUP;
+ goto out;
+ }
+
+ if (!(((hw->subsystem_device_id & 0xF0) == TXGBE_ID_KR_KX_KX4) ||
+ ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_MAC_XAUI) ||
+ ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_MAC_SGMII))) {
+ status = TCALL(hw, mac.ops.check_link,
+ &link_speed, &link_up, false);
+ if (status != 0)
+ goto out;
+ if ((link_speed == speed) && link_up)
+ goto out;
+ }
+
+ if ((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP)
+ goto out;
+
+ if ((hw->subsystem_id & 0xF0) == TXGBE_ID_KR_KX_KX4) {
+ if (!autoneg) {
+ switch (hw->phy.link_mode) {
+ case TXGBE_PHYSICAL_LAYER_10GBASE_KR:
+ txgbe_set_link_to_kr(hw, autoneg);
+ break;
+ case TXGBE_PHYSICAL_LAYER_10GBASE_KX4:
+ txgbe_set_link_to_kx4(hw, autoneg);
+ break;
+ case TXGBE_PHYSICAL_LAYER_1000BASE_KX:
+ txgbe_set_link_to_kx(hw, speed, autoneg);
+ break;
+ default:
+ status = TXGBE_ERR_PHY;
+ goto out;
+ }
+ } else {
+ txgbe_set_link_to_kr(hw, autoneg);
+ }
+ } else if ((hw->subsystem_id & 0xF0) == TXGBE_ID_XAUI ||
+ ((hw->subsystem_id & 0xF0) == TXGBE_ID_MAC_XAUI) ||
+ (hw->subsystem_id & 0xF0) == TXGBE_ID_SGMII ||
+ ((hw->subsystem_id & 0xF0) == TXGBE_ID_MAC_SGMII) ||
+ (txgbe_get_media_type(hw) == txgbe_media_type_copper &&
+ (hw->subsystem_id & 0xF0) == TXGBE_ID_SFI_XAUI)) {
+ if (speed == TXGBE_LINK_SPEED_10GB_FULL) {
+ txgbe_set_link_to_kx4(hw, autoneg);
+ } else {
+			txgbe_set_link_to_kx(hw, speed, false);
+ if (adapter->an37 ||
+ (hw->subsystem_id & 0xF0) == TXGBE_ID_SGMII ||
+ (hw->subsystem_id & 0xF0) == TXGBE_ID_XAUI)
+ txgbe_set_sgmii_an37_ability(hw);
+ }
+ } else if (txgbe_get_media_type(hw) == txgbe_media_type_fiber) {
+ txgbe_set_link_to_sfi(hw, speed);
+ }
+
+out:
+ return status;
+}
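+
+/*
+ * Illustrative call (sketch; the real callers are the driver's probe
+ * and watchdog paths, and the variable names here are hypothetical).
+ * Unsupported bits are masked off against get_link_capabilities(), so
+ * requesting a superset is safe:
+ *
+ *	u32 speed = TXGBE_LINK_SPEED_10GB_FULL | TXGBE_LINK_SPEED_1GB_FULL;
+ *
+ *	status = txgbe_setup_mac_link(hw, speed, false);
+ */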
+
+/**
+ * txgbe_setup_copper_link - Set the PHY autoneg advertised field
+ * @hw: pointer to hardware structure
+ * @speed: new link speed
+ * @autoneg_wait_to_complete: true if waiting is needed to complete
+ *
+ * Restarts link on PHY and MAC based on settings passed in.
+ **/
+STATIC s32 txgbe_setup_copper_link(struct txgbe_hw *hw,
+ u32 speed,
+ bool autoneg_wait_to_complete)
+{
+ s32 status;
+ u32 link_speed;
+
+ DEBUGFUNC("\n");
+
+ /* Setup the PHY according to input speed */
+ link_speed = TCALL(hw, phy.ops.setup_link_speed, speed,
+ autoneg_wait_to_complete);
+
+	if (link_speed != TXGBE_LINK_SPEED_UNKNOWN) {
+		/* Set up MAC */
+		status = txgbe_setup_mac_link(hw, link_speed,
+					      autoneg_wait_to_complete);
+	} else {
+		status = 0;
+	}
+ return status;
+}
+
+int txgbe_reset_misc(struct txgbe_hw *hw)
+{
+ int i;
+ u32 value;
+
+ txgbe_init_i2c(hw);
+
+ value = txgbe_rd32_epcs(hw, TXGBE_SR_PCS_CTL2);
+	if ((value & 0x3) != TXGBE_SR_PCS_CTL2_PCS_TYPE_SEL_X)
+		hw->link_status = TXGBE_LINK_STATUS_NONE;
+
+	/* enable reception of jumbo packets (size > 2048) */
+ wr32m(hw, TXGBE_MAC_RX_CFG,
+ TXGBE_MAC_RX_CFG_JE, TXGBE_MAC_RX_CFG_JE);
+
+ /* clear counters on read */
+ wr32m(hw, TXGBE_MMC_CONTROL,
+ TXGBE_MMC_CONTROL_RSTONRD, TXGBE_MMC_CONTROL_RSTONRD);
+
+ wr32m(hw, TXGBE_MAC_RX_FLOW_CTRL,
+ TXGBE_MAC_RX_FLOW_CTRL_RFE, TXGBE_MAC_RX_FLOW_CTRL_RFE);
+
+ wr32(hw, TXGBE_MAC_PKT_FLT,
+ TXGBE_MAC_PKT_FLT_PR);
+
+ wr32m(hw, TXGBE_MIS_RST_ST,
+ TXGBE_MIS_RST_ST_RST_INIT, 0x1E00);
+
+	/* errata 4: initialize mng flex tbl and wakeup flex tbl */
+ wr32(hw, TXGBE_PSR_MNG_FLEX_SEL, 0);
+ for (i = 0; i < 16; i++) {
+ wr32(hw, TXGBE_PSR_MNG_FLEX_DW_L(i), 0);
+ wr32(hw, TXGBE_PSR_MNG_FLEX_DW_H(i), 0);
+ wr32(hw, TXGBE_PSR_MNG_FLEX_MSK(i), 0);
+ }
+ wr32(hw, TXGBE_PSR_LAN_FLEX_SEL, 0);
+ for (i = 0; i < 16; i++) {
+ wr32(hw, TXGBE_PSR_LAN_FLEX_DW_L(i), 0);
+ wr32(hw, TXGBE_PSR_LAN_FLEX_DW_H(i), 0);
+ wr32(hw, TXGBE_PSR_LAN_FLEX_MSK(i), 0);
+ }
+
+ /* set pause frame dst mac addr */
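+	/* 01:80:C2:00:00:01, the reserved MAC-control/pause multicast
+	 * address, split across the DAL/DAH register pair below
+	 */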
+ wr32(hw, TXGBE_RDB_PFCMACDAL, 0xC2000001);
+ wr32(hw, TXGBE_RDB_PFCMACDAH, 0x0180);
+
+ txgbe_init_thermal_sensor_thresh(hw);
+
+ return 0;
+}
+
+/**
+ * txgbe_reset_hw - Perform hardware reset
+ * @hw: pointer to hardware structure
+ *
+ * Resets the hardware by resetting the transmit and receive units, masks
+ * and clears all interrupts, performs a PHY reset, and performs a link
+ * (MAC) reset.
+ **/
+s32 txgbe_reset_hw(struct txgbe_hw *hw)
+{
+ s32 status;
+ u32 reset = 0;
+ u32 i;
+
+ u32 sr_pcs_ctl, sr_pma_mmd_ctl1, sr_an_mmd_ctl, sr_an_mmd_adv_reg2;
+ u32 vr_xs_or_pcs_mmd_digi_ctl1, curr_vr_xs_or_pcs_mmd_digi_ctl1;
+ u32 curr_sr_pcs_ctl, curr_sr_pma_mmd_ctl1;
+ u32 curr_sr_an_mmd_ctl, curr_sr_an_mmd_adv_reg2;
+
+ u32 reset_status = 0;
+ u32 rst_delay = 0;
+ struct txgbe_adapter *adapter = hw->back;
+ u32 value;
+
+ DEBUGFUNC("\n");
+
+ /* Call adapter stop to disable tx/rx and clear interrupts */
+ status = TCALL(hw, mac.ops.stop_adapter);
+ if (status != 0)
+ goto reset_hw_out;
+
+ /* Identify PHY and related function pointers */
+ status = TCALL(hw, phy.ops.init);
+
+ if (status == TXGBE_ERR_SFP_NOT_SUPPORTED)
+ goto reset_hw_out;
+
+ /* Reset PHY */
+ if (txgbe_get_media_type(hw) == txgbe_media_type_copper)
+ TCALL(hw, phy.ops.reset);
+
+	/* remember internal phy regs from before the reset */
+ curr_sr_pcs_ctl = txgbe_rd32_epcs(hw, TXGBE_SR_PCS_CTL2);
+ curr_sr_pma_mmd_ctl1 = txgbe_rd32_epcs(hw, TXGBE_SR_PMA_MMD_CTL1);
+ curr_sr_an_mmd_ctl = txgbe_rd32_epcs(hw, TXGBE_SR_AN_MMD_CTL);
+ curr_sr_an_mmd_adv_reg2 = txgbe_rd32_epcs(hw,
+ TXGBE_SR_AN_MMD_ADV_REG2);
+ curr_vr_xs_or_pcs_mmd_digi_ctl1 =
+ txgbe_rd32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1);
+
+ /*
+ * Issue global reset to the MAC. Needs to be SW reset if link is up.
+ * If link reset is used when link is up, it might reset the PHY when
+ * mng is using it. If link is down or the flag to force full link
+ * reset is set, then perform link reset.
+ */
+ if (hw->force_full_reset) {
+ rst_delay = (rd32(hw, TXGBE_MIS_RST_ST) &
+ TXGBE_MIS_RST_ST_RST_INIT) >>
+ TXGBE_MIS_RST_ST_RST_INI_SHIFT;
+ if (hw->reset_type == TXGBE_SW_RESET) {
+ for (i = 0; i < rst_delay + 20; i++) {
+ reset_status =
+ rd32(hw, TXGBE_MIS_RST_ST);
+ if (!(reset_status &
+ TXGBE_MIS_RST_ST_DEV_RST_ST_MASK))
+ break;
+ msleep(100);
+ }
+
+			if (reset_status & TXGBE_MIS_RST_ST_DEV_RST_ST_MASK) {
+				status = TXGBE_ERR_RESET_FAILED;
+				DEBUGOUT("Global reset polling failed to complete.\n");
+				goto reset_hw_out;
+			}
+ status = txgbe_check_flash_load(hw,
+ TXGBE_SPI_ILDR_STATUS_SW_RESET);
+ if (status != 0)
+ goto reset_hw_out;
+			/* errata 7 */
+			if (txgbe_mng_present(hw) &&
+			    hw->revision_id == TXGBE_SP_MPW)
+				adapter->flags2 &=
+					~TXGBE_FLAG2_MNG_REG_ACCESS_DISABLED;
+		} else if (hw->reset_type == TXGBE_GLOBAL_RESET) {
+#ifndef _WIN32
+			msleep(100 * rst_delay + 2000);
+			pci_restore_state(adapter->pdev);
+			pci_save_state(adapter->pdev);
+			pci_wake_from_d3(adapter->pdev, false);
+#endif /* _WIN32 */
+		}
+ } else {
+ if (txgbe_mng_present(hw)) {
+ if (!(((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP) ||
+ ((hw->subsystem_device_id & TXGBE_WOL_MASK) == TXGBE_WOL_SUP))) {
+ txgbe_reset_hostif(hw);
+ }
+		} else {
+			if (hw->bus.lan_id == 0)
+				reset = TXGBE_MIS_RST_LAN0_RST;
+			else
+				reset = TXGBE_MIS_RST_LAN1_RST;
+
+ wr32(hw, TXGBE_MIS_RST,
+ reset | rd32(hw, TXGBE_MIS_RST));
+ TXGBE_WRITE_FLUSH(hw);
+ }
+ usec_delay(10);
+
+ if (hw->bus.lan_id == 0) {
+ status = txgbe_check_flash_load(hw,
+ TXGBE_SPI_ILDR_STATUS_LAN0_SW_RST);
+ } else {
+ status = txgbe_check_flash_load(hw,
+ TXGBE_SPI_ILDR_STATUS_LAN1_SW_RST);
+ }
+ if (status != 0)
+ goto reset_hw_out;
+ }
+
+ status = txgbe_reset_misc(hw);
+ if (status != 0)
+ goto reset_hw_out;
+
+	/*
+	 * Store the original link settings if they have not been
+	 * stored off yet. Otherwise restore the stored original
+	 * values since the reset operation sets back to defaults.
+	 */
+ sr_pcs_ctl = txgbe_rd32_epcs(hw, TXGBE_SR_PCS_CTL2);
+ sr_pma_mmd_ctl1 = txgbe_rd32_epcs(hw, TXGBE_SR_PMA_MMD_CTL1);
+ sr_an_mmd_ctl = txgbe_rd32_epcs(hw, TXGBE_SR_AN_MMD_CTL);
+ sr_an_mmd_adv_reg2 = txgbe_rd32_epcs(hw, TXGBE_SR_AN_MMD_ADV_REG2);
+ vr_xs_or_pcs_mmd_digi_ctl1 =
+ txgbe_rd32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1);
+
+	if (!hw->mac.orig_link_settings_stored) {
+ hw->mac.orig_sr_pcs_ctl2 = sr_pcs_ctl;
+ hw->mac.orig_sr_pma_mmd_ctl1 = sr_pma_mmd_ctl1;
+ hw->mac.orig_sr_an_mmd_ctl = sr_an_mmd_ctl;
+ hw->mac.orig_sr_an_mmd_adv_reg2 = sr_an_mmd_adv_reg2;
+ hw->mac.orig_vr_xs_or_pcs_mmd_digi_ctl1 =
+ vr_xs_or_pcs_mmd_digi_ctl1;
+ hw->mac.orig_link_settings_stored = true;
+	} else {
+		/* If MNG FW is running on a multi-speed device that
+		 * doesn't autoneg without driver support, we need to
+		 * leave the link settings in the state they were before
+		 * the MAC reset. Likewise, if we support WoL we don't
+		 * want to change that state.
+		 */
+		hw->mac.orig_sr_pcs_ctl2 = curr_sr_pcs_ctl;
+		hw->mac.orig_sr_pma_mmd_ctl1 = curr_sr_pma_mmd_ctl1;
+		hw->mac.orig_sr_an_mmd_ctl = curr_sr_an_mmd_ctl;
+		hw->mac.orig_sr_an_mmd_adv_reg2 =
+						curr_sr_an_mmd_adv_reg2;
+		hw->mac.orig_vr_xs_or_pcs_mmd_digi_ctl1 =
+						curr_vr_xs_or_pcs_mmd_digi_ctl1;
+	}
+
+	/* A temporary workaround for setting the link to SFI */
+ if (SFI_SET == 1 || adapter->ffe_set == TXGBE_BP_M_SFI) {
+ e_dev_info("Set SFI TX_EQ MAIN:%d PRE:%d POST:%d\n",
+ adapter->ffe_main, adapter->ffe_pre, adapter->ffe_post);
+ /* 5. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL0 Register Bit[13:8](TX_EQ_MAIN)
+ * = 6'd30, Bit[5:0](TX_EQ_PRE) = 6'd4
+ */
+ value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0);
+ value = (value & ~0x3F3F) | (adapter->ffe_main << 8) | adapter->ffe_pre;
+ txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value);
+ /* 6. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL1 Register Bit[6](TX_EQ_OVR_RIDE)
+ * = 1'b1, Bit[5:0](TX_EQ_POST) = 6'd36
+ */
+ value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1);
+ value = (value & ~0x7F) | adapter->ffe_post | (1 << 6);
+ txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value);
+ }
+
+ if (KR_SET == 1 || adapter->ffe_set == TXGBE_BP_M_KR) {
+ e_dev_info("Set KR TX_EQ MAIN:%d PRE:%d POST:%d\n",
+ adapter->ffe_main, adapter->ffe_pre, adapter->ffe_post);
+ value = (0x1804 & ~0x3F3F);
+ value |= adapter->ffe_main << 8 | adapter->ffe_pre;
+ txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value);
+
+		value = (0x50 & ~0x7F) | (1 << 6) | adapter->ffe_post;
+ txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value);
+ txgbe_wr32_epcs(hw, 0x18035, 0x00FF);
+ txgbe_wr32_epcs(hw, 0x18055, 0x00FF);
+ }
+
+ if (KX_SET == 1 || adapter->ffe_set == TXGBE_BP_M_KX) {
+ e_dev_info("Set KX TX_EQ MAIN:%d PRE:%d POST:%d\n",
+ adapter->ffe_main, adapter->ffe_pre, adapter->ffe_post);
+ /* 5. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL0 Register Bit[13:8](TX_EQ_MAIN)
+ * = 6'd30, Bit[5:0](TX_EQ_PRE) = 6'd4
+ */
+ value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0);
+ value = (value & ~0x3F3F) | (adapter->ffe_main << 8) | adapter->ffe_pre;
+ txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value);
+ /* 6. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL1 Register Bit[6](TX_EQ_OVR_RIDE)
+ * = 1'b1, Bit[5:0](TX_EQ_POST) = 6'd36
+ */
+ value = txgbe_rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1);
+ value = (value & ~0x7F) | adapter->ffe_post | (1 << 6);
+ txgbe_wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value);
+
+ txgbe_wr32_epcs(hw, 0x18035, 0x00FF);
+ txgbe_wr32_epcs(hw, 0x18055, 0x00FF);
+ }
+
+ /* Store the permanent mac address */
+ TCALL(hw, mac.ops.get_mac_addr, hw->mac.perm_addr);
+
+ /*
+ * Store MAC address from RAR0, clear receive address registers, and
+ * clear the multicast table. Also reset num_rar_entries to 128,
+ * since we modify this value when programming the SAN MAC address.
+ */
+ hw->mac.num_rar_entries = 128;
+ TCALL(hw, mac.ops.init_rx_addrs);
+
+ /* Store the permanent SAN mac address */
+ TCALL(hw, mac.ops.get_san_mac_addr, hw->mac.san_addr);
+
+ /* Add the SAN MAC address to the RAR only if it's a valid address */
+ if (txgbe_validate_mac_addr(hw->mac.san_addr) == 0) {
+ TCALL(hw, mac.ops.set_rar, hw->mac.num_rar_entries - 1,
+ hw->mac.san_addr, 0, TXGBE_PSR_MAC_SWC_AD_H_AV);
+
+ /* Save the SAN MAC RAR index */
+ hw->mac.san_mac_rar_index = hw->mac.num_rar_entries - 1;
+
+ /* Reserve the last RAR for the SAN MAC address */
+ hw->mac.num_rar_entries--;
+ }
+
+ /* Store the alternative WWNN/WWPN prefix */
+ TCALL(hw, mac.ops.get_wwn_prefix, &hw->mac.wwnn_prefix,
+ &hw->mac.wwpn_prefix);
+
+ pci_set_master(((struct txgbe_adapter *)hw->back)->pdev);
+
+reset_hw_out:
+ return status;
+}
+
+/**
+ * txgbe_fdir_check_cmd_complete - poll to check whether FDIRCMD is complete
+ * @hw: pointer to hardware structure
+ * @fdircmd: current value of FDIRCMD register
+ */
+STATIC s32 txgbe_fdir_check_cmd_complete(struct txgbe_hw *hw, u32 *fdircmd)
+{
+ int i;
+
+ for (i = 0; i < TXGBE_RDB_FDIR_CMD_CMD_POLL; i++) {
+ *fdircmd = rd32(hw, TXGBE_RDB_FDIR_CMD);
+ if (!(*fdircmd & TXGBE_RDB_FDIR_CMD_CMD_MASK))
+ return 0;
+ usec_delay(10);
+ }
+
+ return TXGBE_ERR_FDIR_CMD_INCOMPLETE;
+}
+
+/**
+ * txgbe_reinit_fdir_tables - Reinitialize Flow Director tables.
+ * @hw: pointer to hardware structure
+ **/
+s32 txgbe_reinit_fdir_tables(struct txgbe_hw *hw)
+{
+ s32 err;
+ int i;
+ u32 fdirctrl = rd32(hw, TXGBE_RDB_FDIR_CTL);
+	u32 fdircmd;
+
+	fdirctrl &= ~TXGBE_RDB_FDIR_CTL_INIT_DONE;
+
+ DEBUGFUNC("\n");
+
+ /*
+ * Before starting reinitialization process,
+ * FDIRCMD.CMD must be zero.
+ */
+ err = txgbe_fdir_check_cmd_complete(hw, &fdircmd);
+ if (err) {
+ DEBUGOUT("Flow Director previous command did not complete, "
+ "aborting table re-initialization.\n");
+ return err;
+ }
+
+ wr32(hw, TXGBE_RDB_FDIR_FREE, 0);
+ TXGBE_WRITE_FLUSH(hw);
+ /*
+ * sapphire adapters flow director init flow cannot be restarted,
+ * Workaround sapphire silicon errata by performing the following steps
+ * before re-writing the FDIRCTRL control register with the same value.
+ * - write 1 to bit 8 of FDIRCMD register &
+ * - write 0 to bit 8 of FDIRCMD register
+ */
+ wr32m(hw, TXGBE_RDB_FDIR_CMD,
+ TXGBE_RDB_FDIR_CMD_CLEARHT, TXGBE_RDB_FDIR_CMD_CLEARHT);
+ TXGBE_WRITE_FLUSH(hw);
+ wr32m(hw, TXGBE_RDB_FDIR_CMD,
+ TXGBE_RDB_FDIR_CMD_CLEARHT, 0);
+ TXGBE_WRITE_FLUSH(hw);
+ /*
+ * Clear FDIR Hash register to clear any leftover hashes
+ * waiting to be programmed.
+ */
+ wr32(hw, TXGBE_RDB_FDIR_HASH, 0x00);
+ TXGBE_WRITE_FLUSH(hw);
+
+ wr32(hw, TXGBE_RDB_FDIR_CTL, fdirctrl);
+ TXGBE_WRITE_FLUSH(hw);
+
+ /* Poll init-done after we write FDIRCTRL register */
+ for (i = 0; i < TXGBE_FDIR_INIT_DONE_POLL; i++) {
+ if (rd32(hw, TXGBE_RDB_FDIR_CTL) &
+ TXGBE_RDB_FDIR_CTL_INIT_DONE)
+ break;
+ msec_delay(1);
+ }
+ if (i >= TXGBE_FDIR_INIT_DONE_POLL) {
+ DEBUGOUT("Flow Director Signature poll time exceeded!\n");
+ return TXGBE_ERR_FDIR_REINIT_FAILED;
+ }
+
+ /* Clear FDIR statistics registers (read to clear) */
+ rd32(hw, TXGBE_RDB_FDIR_USE_ST);
+ rd32(hw, TXGBE_RDB_FDIR_FAIL_ST);
+ rd32(hw, TXGBE_RDB_FDIR_MATCH);
+ rd32(hw, TXGBE_RDB_FDIR_MISS);
+ rd32(hw, TXGBE_RDB_FDIR_LEN);
+
+ return 0;
+}
+
+/**
+ * txgbe_fdir_enable - Initialize Flow Director control registers
+ * @hw: pointer to hardware structure
+ * @fdirctrl: value to write to flow director control register
+ **/
+STATIC void txgbe_fdir_enable(struct txgbe_hw *hw, u32 fdirctrl)
+{
+ int i;
+
+ DEBUGFUNC("\n");
+
+ /* Prime the keys for hashing */
+ wr32(hw, TXGBE_RDB_FDIR_HKEY, TXGBE_ATR_BUCKET_HASH_KEY);
+ wr32(hw, TXGBE_RDB_FDIR_SKEY, TXGBE_ATR_SIGNATURE_HASH_KEY);
+
+ /*
+ * Poll init-done after we write the register. Estimated times:
+ * 10G: PBALLOC = 11b, timing is 60us
+ * 1G: PBALLOC = 11b, timing is 600us
+ * 100M: PBALLOC = 11b, timing is 6ms
+ *
+	 * Multiply these timings by 4 if under full Rx load
+ *
+ * So we'll poll for TXGBE_FDIR_INIT_DONE_POLL times, sleeping for
+ * 1 msec per poll time. If we're at line rate and drop to 100M, then
+ * this might not finish in our poll time, but we can live with that
+ * for now.
+ */
+ wr32(hw, TXGBE_RDB_FDIR_CTL, fdirctrl);
+ TXGBE_WRITE_FLUSH(hw);
+ for (i = 0; i < TXGBE_RDB_FDIR_INIT_DONE_POLL; i++) {
+ if (rd32(hw, TXGBE_RDB_FDIR_CTL) &
+ TXGBE_RDB_FDIR_CTL_INIT_DONE)
+ break;
+ msec_delay(1);
+ }
+
+ if (i >= TXGBE_RDB_FDIR_INIT_DONE_POLL)
+ DEBUGOUT("Flow Director poll time exceeded!\n");
+}
+
+/**
+ * txgbe_init_fdir_signature -Initialize Flow Director sig filters
+ * @hw: pointer to hardware structure
+ * @fdirctrl: value to write to flow director control register, initially
+ * contains just the value of the Rx packet buffer allocation
+ **/
+s32 txgbe_init_fdir_signature(struct txgbe_hw *hw, u32 fdirctrl)
+{
+ struct txgbe_adapter *adapter = (struct txgbe_adapter *)hw->back;
+ int i = VMDQ_P(0) / 4;
+ int j = VMDQ_P(0) % 4;
+ u32 flex = rd32m(hw, TXGBE_RDB_FDIR_FLEX_CFG(i),
+ ~((TXGBE_RDB_FDIR_FLEX_CFG_BASE_MSK |
+ TXGBE_RDB_FDIR_FLEX_CFG_MSK |
+ TXGBE_RDB_FDIR_FLEX_CFG_OFST) <<
+ (TXGBE_RDB_FDIR_FLEX_CFG_VM_SHIFT * j)));
+
+ UNREFERENCED_PARAMETER(adapter);
+
+ flex |= (TXGBE_RDB_FDIR_FLEX_CFG_BASE_MAC |
+ 0x6 << TXGBE_RDB_FDIR_FLEX_CFG_OFST_SHIFT) <<
+ (TXGBE_RDB_FDIR_FLEX_CFG_VM_SHIFT * j);
+ wr32(hw, TXGBE_RDB_FDIR_FLEX_CFG(i), flex);
+
+ /*
+ * Continue setup of fdirctrl register bits:
+ * Move the flexible bytes to use the ethertype - shift 6 words
+ * Set the maximum length per hash bucket to 0xA filters
+ * Send interrupt when 64 filters are left
+ */
+ fdirctrl |= (0xF << TXGBE_RDB_FDIR_CTL_HASH_BITS_SHIFT) |
+ (0xA << TXGBE_RDB_FDIR_CTL_MAX_LENGTH_SHIFT) |
+ (4 << TXGBE_RDB_FDIR_CTL_FULL_THRESH_SHIFT);
+
+ /* write hashes and fdirctrl register, poll for completion */
+ txgbe_fdir_enable(hw, fdirctrl);
+
+ if (hw->revision_id == TXGBE_SP_MPW) {
+ /* errata 1: disable RSC of drop ring 0 */
+ wr32m(hw, TXGBE_PX_RR_CFG(0),
+ TXGBE_PX_RR_CFG_RSC, ~TXGBE_PX_RR_CFG_RSC);
+ }
+ return 0;
+}
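+
+/*
+ * Worked example of the flex-byte placement above (sketch): with
+ * VMDQ_P(0) == 0 we get i = 0 and j = 0, so the offset field of
+ * TXGBE_RDB_FDIR_FLEX_CFG(0) is written with 0x6, i.e. the two
+ * flexible bytes are sampled 6 words (12 bytes) into the frame -
+ * the EtherType field.
+ */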
+
+/**
+ * txgbe_init_fdir_perfect - Initialize Flow Director perfect filters
+ * @hw: pointer to hardware structure
+ * @fdirctrl: value to write to flow director control register, initially
+ * contains just the value of the Rx packet buffer allocation
+ * @cloud_mode: true - cloud mode, false - other mode
+ **/
+s32 txgbe_init_fdir_perfect(struct txgbe_hw *hw, u32 fdirctrl,
+ bool cloud_mode)
+{
+ UNREFERENCED_PARAMETER(cloud_mode);
+ DEBUGFUNC("\n");
+
+ /*
+ * Continue setup of fdirctrl register bits:
+ * Turn perfect match filtering on
+ * Report hash in RSS field of Rx wb descriptor
+ * Initialize the drop queue
+ * Move the flexible bytes to use the ethertype - shift 6 words
+ * Set the maximum length per hash bucket to 0xA filters
+ * Send interrupt when 64 (0x4 * 16) filters are left
+ */
+ fdirctrl |= TXGBE_RDB_FDIR_CTL_PERFECT_MATCH |
+ (TXGBE_RDB_FDIR_DROP_QUEUE <<
+ TXGBE_RDB_FDIR_CTL_DROP_Q_SHIFT) |
+ (0xF << TXGBE_RDB_FDIR_CTL_HASH_BITS_SHIFT) |
+ (0xA << TXGBE_RDB_FDIR_CTL_MAX_LENGTH_SHIFT) |
+ (4 << TXGBE_RDB_FDIR_CTL_FULL_THRESH_SHIFT);
+
+ /* write hashes and fdirctrl register, poll for completion */
+ txgbe_fdir_enable(hw, fdirctrl);
+
+ if (hw->revision_id == TXGBE_SP_MPW) {
+ if (((struct txgbe_adapter *)hw->back)->num_rx_queues >
+ TXGBE_RDB_FDIR_DROP_QUEUE)
+ /* errata 1: disable RSC of drop ring */
+ wr32m(hw,
+ TXGBE_PX_RR_CFG(TXGBE_RDB_FDIR_DROP_QUEUE),
+ TXGBE_PX_RR_CFG_RSC, ~TXGBE_PX_RR_CFG_RSC);
+ }
+ return 0;
+}
+
+/*
+ * These defines allow us to quickly generate all of the necessary instructions
+ * in the function below by simply calling out TXGBE_COMPUTE_SIG_HASH_ITERATION
+ * for values 0 through 15
+ */
+#define TXGBE_ATR_COMMON_HASH_KEY \
+ (TXGBE_ATR_BUCKET_HASH_KEY & TXGBE_ATR_SIGNATURE_HASH_KEY)
+#define TXGBE_COMPUTE_SIG_HASH_ITERATION(_n) \
+do { \
+ u32 n = (_n); \
+ if (TXGBE_ATR_COMMON_HASH_KEY & (0x01 << n)) \
+ common_hash ^= lo_hash_dword >> n; \
+ else if (TXGBE_ATR_BUCKET_HASH_KEY & (0x01 << n)) \
+ bucket_hash ^= lo_hash_dword >> n; \
+ else if (TXGBE_ATR_SIGNATURE_HASH_KEY & (0x01 << n)) \
+ sig_hash ^= lo_hash_dword << (16 - n); \
+ if (TXGBE_ATR_COMMON_HASH_KEY & (0x01 << (n + 16))) \
+ common_hash ^= hi_hash_dword >> n; \
+ else if (TXGBE_ATR_BUCKET_HASH_KEY & (0x01 << (n + 16))) \
+ bucket_hash ^= hi_hash_dword >> n; \
+ else if (TXGBE_ATR_SIGNATURE_HASH_KEY & (0x01 << (n + 16))) \
+ sig_hash ^= hi_hash_dword << (16 - n); \
+} while (0)
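+
+/*
+ * Worked example (illustrative only): for iteration n = 0, bit 0 of each
+ * 32-bit key selects which hash accumulates the unshifted lo_hash_dword,
+ * and bit 16 does the same for hi_hash_dword. Since the keys are constant,
+ * the compiler reduces each macro expansion to at most two shift/XOR pairs,
+ * so the fully unrolled hash below costs a few dozen instructions and no
+ * branches at runtime.
+ */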
+
+/**
+ * txgbe_atr_compute_sig_hash - Compute the signature hash
+ * @input: unique input dword
+ * @common: compressed common input dword
+ *
+ * This function fully unwinds the hash loops, letting the compiler resolve
+ * the conditionals since the keys are static defines, and computes the
+ * bucket and signature hashes at once since the hashed dword stream is the
+ * same for both keys.
+ **/
+u32 txgbe_atr_compute_sig_hash(union txgbe_atr_hash_dword input,
+ union txgbe_atr_hash_dword common)
+{
+ u32 hi_hash_dword, lo_hash_dword, flow_vm_vlan;
+ u32 sig_hash = 0, bucket_hash = 0, common_hash = 0;
+
+ /* record the flow_vm_vlan bits as they are a key part to the hash */
+ flow_vm_vlan = TXGBE_NTOHL(input.dword);
+
+ /* generate common hash dword */
+ hi_hash_dword = TXGBE_NTOHL(common.dword);
+
+ /* low dword is word swapped version of common */
+ lo_hash_dword = (hi_hash_dword >> 16) | (hi_hash_dword << 16);
+
+ /* apply flow ID/VM pool/VLAN ID bits to hash words */
+ hi_hash_dword ^= flow_vm_vlan ^ (flow_vm_vlan >> 16);
+
+ /* Process bits 0 and 16 */
+ TXGBE_COMPUTE_SIG_HASH_ITERATION(0);
+
+ /*
+ * apply flow ID/VM pool/VLAN ID bits to lo hash dword, we had to
+ * delay this because bit 0 of the stream should not be processed
+ * so we do not add the VLAN until after bit 0 was processed
+ */
+ lo_hash_dword ^= flow_vm_vlan ^ (flow_vm_vlan << 16);
+
+	/* Process the remaining 30 bits of the key */
+ TXGBE_COMPUTE_SIG_HASH_ITERATION(1);
+ TXGBE_COMPUTE_SIG_HASH_ITERATION(2);
+ TXGBE_COMPUTE_SIG_HASH_ITERATION(3);
+ TXGBE_COMPUTE_SIG_HASH_ITERATION(4);
+ TXGBE_COMPUTE_SIG_HASH_ITERATION(5);
+ TXGBE_COMPUTE_SIG_HASH_ITERATION(6);
+ TXGBE_COMPUTE_SIG_HASH_ITERATION(7);
+ TXGBE_COMPUTE_SIG_HASH_ITERATION(8);
+ TXGBE_COMPUTE_SIG_HASH_ITERATION(9);
+ TXGBE_COMPUTE_SIG_HASH_ITERATION(10);
+ TXGBE_COMPUTE_SIG_HASH_ITERATION(11);
+ TXGBE_COMPUTE_SIG_HASH_ITERATION(12);
+ TXGBE_COMPUTE_SIG_HASH_ITERATION(13);
+ TXGBE_COMPUTE_SIG_HASH_ITERATION(14);
+ TXGBE_COMPUTE_SIG_HASH_ITERATION(15);
+
+ /* combine common_hash result with signature and bucket hashes */
+ bucket_hash ^= common_hash;
+ bucket_hash &= TXGBE_ATR_HASH_MASK;
+
+ sig_hash ^= common_hash << 16;
+ sig_hash &= TXGBE_ATR_HASH_MASK << 16;
+
+ /* return completed signature hash */
+ return sig_hash ^ bucket_hash;
+}
+
+/**
+ * txgbe_fdir_add_signature_filter - Adds a signature hash filter
+ * @hw: pointer to hardware structure
+ * @input: unique input dword
+ * @common: compressed common input dword
+ * @queue: queue index to direct traffic to
+ **/
+s32 txgbe_fdir_add_signature_filter(struct txgbe_hw *hw,
+ union txgbe_atr_hash_dword input,
+ union txgbe_atr_hash_dword common,
+ u8 queue)
+{
+ u32 fdirhashcmd = 0;
+ u8 flow_type;
+ u32 fdircmd;
+ s32 err;
+
+ DEBUGFUNC("\n");
+
+ /*
+ * Get the flow_type in order to program FDIRCMD properly
+	 * lowest 2 bits are FDIRCMD.L4TYPE, the third lowest bit is
+	 * FDIRCMD.IPV6, and the fifth is FDIRCMD.TUNNEL_FILTER
+ */
+ flow_type = input.formatted.flow_type;
+ switch (flow_type) {
+ case TXGBE_ATR_FLOW_TYPE_TCPV4:
+ case TXGBE_ATR_FLOW_TYPE_UDPV4:
+ case TXGBE_ATR_FLOW_TYPE_SCTPV4:
+ case TXGBE_ATR_FLOW_TYPE_TCPV6:
+ case TXGBE_ATR_FLOW_TYPE_UDPV6:
+ case TXGBE_ATR_FLOW_TYPE_SCTPV6:
+ break;
+ default:
+ DEBUGOUT(" Error on flow type input\n");
+ return TXGBE_ERR_CONFIG;
+ }
+
+ /* configure FDIRCMD register */
+ fdircmd = TXGBE_RDB_FDIR_CMD_CMD_ADD_FLOW |
+ TXGBE_RDB_FDIR_CMD_FILTER_UPDATE |
+ TXGBE_RDB_FDIR_CMD_LAST | TXGBE_RDB_FDIR_CMD_QUEUE_EN;
+ fdircmd |= (u32)flow_type << TXGBE_RDB_FDIR_CMD_FLOW_TYPE_SHIFT;
+ fdircmd |= (u32)queue << TXGBE_RDB_FDIR_CMD_RX_QUEUE_SHIFT;
+
+ fdirhashcmd |= txgbe_atr_compute_sig_hash(input, common);
+ fdirhashcmd |= 0x1 << TXGBE_RDB_FDIR_HASH_BUCKET_VALID_SHIFT;
+ wr32(hw, TXGBE_RDB_FDIR_HASH, fdirhashcmd);
+
+ wr32(hw, TXGBE_RDB_FDIR_CMD, fdircmd);
+
+ err = txgbe_fdir_check_cmd_complete(hw, &fdircmd);
+ if (err) {
+ DEBUGOUT("Flow Director command did not complete!\n");
+ return err;
+ }
+
+ DEBUGOUT2("Tx Queue=%x hash=%x\n", queue, (u32)fdirhashcmd);
+
+ return 0;
+}
+
+#define TXGBE_COMPUTE_BKT_HASH_ITERATION(_n) \
+do { \
+ u32 n = (_n); \
+ if (TXGBE_ATR_BUCKET_HASH_KEY & (0x01 << n)) \
+ bucket_hash ^= lo_hash_dword >> n; \
+ if (TXGBE_ATR_BUCKET_HASH_KEY & (0x01 << (n + 16))) \
+ bucket_hash ^= hi_hash_dword >> n; \
+} while (0)
+
+/**
+ * txgbe_atr_compute_perfect_hash - Compute the perfect filter hash
+ * @atr_input: input bitstream to compute the hash on
+ * @input_mask: mask for the input bitstream
+ *
+ * This function serves two main purposes. First it applies the input_mask
+ * to the atr_input resulting in a cleaned up atr_input data stream.
+ * Secondly it computes the hash and stores it in the bkt_hash field at
+ * the end of the input byte stream. This way it will be available for
+ * future use without needing to recompute the hash.
+ **/
+void txgbe_atr_compute_perfect_hash(union txgbe_atr_input *input,
+ union txgbe_atr_input *input_mask)
+{
+ u32 hi_hash_dword, lo_hash_dword, flow_vm_vlan;
+ u32 bucket_hash = 0;
+ u32 hi_dword = 0;
+ u32 i = 0;
+
+ /* Apply masks to input data */
+ for (i = 0; i < 11; i++)
+ input->dword_stream[i] &= input_mask->dword_stream[i];
+
+ /* record the flow_vm_vlan bits as they are a key part to the hash */
+ flow_vm_vlan = TXGBE_NTOHL(input->dword_stream[0]);
+
+ /* generate common hash dword */
+ for (i = 1; i <= 10; i++)
+ hi_dword ^= input->dword_stream[i];
+ hi_hash_dword = TXGBE_NTOHL(hi_dword);
+
+ /* low dword is word swapped version of common */
+ lo_hash_dword = (hi_hash_dword >> 16) | (hi_hash_dword << 16);
+
+ /* apply flow ID/VM pool/VLAN ID bits to hash words */
+ hi_hash_dword ^= flow_vm_vlan ^ (flow_vm_vlan >> 16);
+
+ /* Process bits 0 and 16 */
+ TXGBE_COMPUTE_BKT_HASH_ITERATION(0);
+
+ /*
+ * apply flow ID/VM pool/VLAN ID bits to lo hash dword, we had to
+ * delay this because bit 0 of the stream should not be processed
+ * so we do not add the VLAN until after bit 0 was processed
+ */
+ lo_hash_dword ^= flow_vm_vlan ^ (flow_vm_vlan << 16);
+
+	/* Process the remaining 30 bits of the key */
+ for (i = 1; i <= 15; i++)
+ TXGBE_COMPUTE_BKT_HASH_ITERATION(i);
+
+ /*
+ * Limit hash to 13 bits since max bucket count is 8K.
+ * Store result at the end of the input stream.
+ */
+ input->formatted.bkt_hash = bucket_hash & 0x1FFF;
+}
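+
+/*
+ * Worked example (illustrative only): with an 8K-entry bucket table the
+ * hardware index is bucket_hash & 0x1FFF, so two filters whose full hashes
+ * differ only above bit 12 land in the same bucket and are distinguished
+ * by the soft_id programmed in txgbe_fdir_write_perfect_filter().
+ */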
+
+/**
+ * txgbe_get_fdirtcpm - generate a TCP/UDP port mask from atr_input_masks
+ * @input_mask: mask to be bit swapped
+ *
+ * The source and destination port masks for flow director are bit swapped
+ * in that bit 15 affects bit 0, bit 14 affects bit 1, bit 13 affects
+ * bit 2, and so on. In order to generate a correctly swapped value we
+ * need to bit swap the mask, which is what this function accomplishes.
+ **/
+STATIC u32 txgbe_get_fdirtcpm(union txgbe_atr_input *input_mask)
+{
+ u32 mask = TXGBE_NTOHS(input_mask->formatted.dst_port);
+ mask <<= TXGBE_RDB_FDIR_TCP_MSK_DPORTM_SHIFT;
+ mask |= TXGBE_NTOHS(input_mask->formatted.src_port);
+ mask = ((mask & 0x55555555) << 1) | ((mask & 0xAAAAAAAA) >> 1);
+ mask = ((mask & 0x33333333) << 2) | ((mask & 0xCCCCCCCC) >> 2);
+ mask = ((mask & 0x0F0F0F0F) << 4) | ((mask & 0xF0F0F0F0) >> 4);
+ return ((mask & 0x00FF00FF) << 8) | ((mask & 0xFF00FF00) >> 8);
+}
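+
+/*
+ * Worked example (illustrative only): the swap stages reverse the bits of
+ * each 16-bit half independently, mapping bit i to bit 15 - i within its
+ * half. A src_port mask of 0x00FF (after byte-order conversion) therefore
+ * becomes 0xFF00 in the low half, while the dst_port mask is reflected
+ * the same way in bits 16-31.
+ */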
+
+/*
+ * These two macros are meant to address the fact that we have registers
+ * that are either all or in part big-endian. As a result on big-endian
+ * systems we will end up byte swapping the value to little-endian before
+ * it is byte swapped again and written to the hardware in the original
+ * big-endian format.
+ */
+#define TXGBE_STORE_AS_BE32(_value) \
+ (((u32)(_value) >> 24) | (((u32)(_value) & 0x00FF0000) >> 8) | \
+ (((u32)(_value) & 0x0000FF00) << 8) | ((u32)(_value) << 24))
+
+#define TXGBE_WRITE_REG_BE32(a, reg, value) \
+ wr32((a), (reg), TXGBE_STORE_AS_BE32(TXGBE_NTOHL(value)))
+
+#define TXGBE_STORE_AS_BE16(_value) \
+ TXGBE_NTOHS(((u16)(_value) >> 8) | ((u16)(_value) << 8))
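+
+/*
+ * Worked example (illustrative only): TXGBE_STORE_AS_BE32(0x12345678)
+ * evaluates to 0x78563412, i.e. a plain byte reversal; combined with the
+ * TXGBE_NTOHL() in TXGBE_WRITE_REG_BE32() this leaves network-order input
+ * in the byte layout the FDIR address registers expect regardless of host
+ * endianness.
+ */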
+
+s32 txgbe_fdir_set_input_mask(struct txgbe_hw *hw,
+ union txgbe_atr_input *input_mask,
+ bool cloud_mode)
+{
+	/* IPv6 masking is not supported, so no IPv6 bits are set in fdirm */
+ u32 fdirm = 0;
+ u32 fdirtcpm;
+ u32 flex = 0;
+ int i, j;
+ struct txgbe_adapter *adapter = (struct txgbe_adapter *)hw->back;
+
+ UNREFERENCED_PARAMETER(cloud_mode);
+ UNREFERENCED_PARAMETER(adapter);
+
+ DEBUGFUNC("\n");
+
+ /*
+ * Program the relevant mask registers. If src/dst_port or src/dst_addr
+ * are zero, then assume a full mask for that field. Also assume that
+ * a VLAN of 0 is unspecified, so mask that out as well. L4type
+ * cannot be masked out in this implementation.
+ *
+ * This also assumes IPv4 only. IPv6 masking isn't supported at this
+ * point in time.
+ */
+
+ /* verify bucket hash is cleared on hash generation */
+ if (input_mask->formatted.bkt_hash)
+ DEBUGOUT(" bucket hash should always be 0 in mask\n");
+
+ /* Program FDIRM and verify partial masks */
+ switch (input_mask->formatted.vm_pool & 0x7F) {
+ case 0x0:
+		fdirm |= TXGBE_RDB_FDIR_OTHER_MSK_POOL;
+		/* fall through */
+	case 0x7F:
+ break;
+ default:
+ DEBUGOUT(" Error on vm pool mask\n");
+ return TXGBE_ERR_CONFIG;
+ }
+
+ switch (input_mask->formatted.flow_type & TXGBE_ATR_L4TYPE_MASK) {
+ case 0x0:
+ fdirm |= TXGBE_RDB_FDIR_OTHER_MSK_L4P;
+ if (input_mask->formatted.dst_port ||
+ input_mask->formatted.src_port) {
+ DEBUGOUT(" Error on src/dst port mask\n");
+ return TXGBE_ERR_CONFIG;
+		}
+		/* fall through */
+	case TXGBE_ATR_L4TYPE_MASK:
+ break;
+ default:
+ DEBUGOUT(" Error on flow type mask\n");
+ return TXGBE_ERR_CONFIG;
+ }
+
+	/* Program FDIRM with the field masks computed above */
+ wr32(hw, TXGBE_RDB_FDIR_OTHER_MSK, fdirm);
+
+ i = VMDQ_P(0) / 4;
+ j = VMDQ_P(0) % 4;
+ flex = rd32m(hw, TXGBE_RDB_FDIR_FLEX_CFG(i),
+ ~((TXGBE_RDB_FDIR_FLEX_CFG_BASE_MSK |
+ TXGBE_RDB_FDIR_FLEX_CFG_MSK |
+ TXGBE_RDB_FDIR_FLEX_CFG_OFST) <<
+ (TXGBE_RDB_FDIR_FLEX_CFG_VM_SHIFT * j)));
+ flex |= (TXGBE_RDB_FDIR_FLEX_CFG_BASE_MAC |
+ 0x6 << TXGBE_RDB_FDIR_FLEX_CFG_OFST_SHIFT) <<
+ (TXGBE_RDB_FDIR_FLEX_CFG_VM_SHIFT * j);
+
+ switch (input_mask->formatted.flex_bytes & 0xFFFF) {
+ case 0x0000:
+		/* Mask Flex Bytes */
+		flex |= TXGBE_RDB_FDIR_FLEX_CFG_MSK <<
+			(TXGBE_RDB_FDIR_FLEX_CFG_VM_SHIFT * j);
+		/* fall through */
+	case 0xFFFF:
+ break;
+ default:
+ DEBUGOUT(" Error on flexible byte mask\n");
+ return TXGBE_ERR_CONFIG;
+ }
+ wr32(hw, TXGBE_RDB_FDIR_FLEX_CFG(i), flex);
+
+ /* store the TCP/UDP port masks, bit reversed from port
+ * layout */
+ fdirtcpm = txgbe_get_fdirtcpm(input_mask);
+
+ /* write both the same so that UDP and TCP use the same mask */
+ wr32(hw, TXGBE_RDB_FDIR_TCP_MSK, ~fdirtcpm);
+ wr32(hw, TXGBE_RDB_FDIR_UDP_MSK, ~fdirtcpm);
+ wr32(hw, TXGBE_RDB_FDIR_SCTP_MSK, ~fdirtcpm);
+
+	/* store source and destination IP masks (little-endian) */
+ wr32(hw, TXGBE_RDB_FDIR_SA4_MSK,
+ TXGBE_NTOHL(~input_mask->formatted.src_ip[0]));
+ wr32(hw, TXGBE_RDB_FDIR_DA4_MSK,
+ TXGBE_NTOHL(~input_mask->formatted.dst_ip[0]));
+ return 0;
+}
+
+s32 txgbe_fdir_write_perfect_filter(struct txgbe_hw *hw,
+ union txgbe_atr_input *input,
+ u16 soft_id, u8 queue,
+ bool cloud_mode)
+{
+ u32 fdirport, fdirvlan, fdirhash, fdircmd;
+ s32 err;
+
+ DEBUGFUNC("\n");
+ if (!cloud_mode) {
+ /* currently IPv6 is not supported, must be programmed with 0 */
+ wr32(hw, TXGBE_RDB_FDIR_IP6(2),
+ TXGBE_NTOHL(input->formatted.src_ip[0]));
+ wr32(hw, TXGBE_RDB_FDIR_IP6(1),
+ TXGBE_NTOHL(input->formatted.src_ip[1]));
+ wr32(hw, TXGBE_RDB_FDIR_IP6(0),
+ TXGBE_NTOHL(input->formatted.src_ip[2]));
+
+ /* record the source address (little-endian) */
+ wr32(hw, TXGBE_RDB_FDIR_SA,
+ TXGBE_NTOHL(input->formatted.src_ip[0]));
+
+ /* record the first 32 bits of the destination address
+ * (little-endian) */
+ wr32(hw, TXGBE_RDB_FDIR_DA,
+ TXGBE_NTOHL(input->formatted.dst_ip[0]));
+
+	/* record source and destination port (little-endian) */
+ fdirport = TXGBE_NTOHS(input->formatted.dst_port);
+ fdirport <<= TXGBE_RDB_FDIR_PORT_DESTINATION_SHIFT;
+ fdirport |= TXGBE_NTOHS(input->formatted.src_port);
+ wr32(hw, TXGBE_RDB_FDIR_PORT, fdirport);
+ }
+
+ /* record packet type and flex_bytes(little-endian) */
+ fdirvlan = TXGBE_NTOHS(input->formatted.flex_bytes);
+ fdirvlan <<= TXGBE_RDB_FDIR_FLEX_FLEX_SHIFT;
+
+ fdirvlan |= TXGBE_NTOHS(input->formatted.vlan_id);
+ wr32(hw, TXGBE_RDB_FDIR_FLEX, fdirvlan);
+
+ /* configure FDIRHASH register */
+ fdirhash = input->formatted.bkt_hash |
+ 0x1 << TXGBE_RDB_FDIR_HASH_BUCKET_VALID_SHIFT;
+ fdirhash |= soft_id << TXGBE_RDB_FDIR_HASH_SIG_SW_INDEX_SHIFT;
+ wr32(hw, TXGBE_RDB_FDIR_HASH, fdirhash);
+
+ /*
+ * flush all previous writes to make certain registers are
+ * programmed prior to issuing the command
+ */
+ TXGBE_WRITE_FLUSH(hw);
+
+ /* configure FDIRCMD register */
+ fdircmd = TXGBE_RDB_FDIR_CMD_CMD_ADD_FLOW |
+ TXGBE_RDB_FDIR_CMD_FILTER_UPDATE |
+ TXGBE_RDB_FDIR_CMD_LAST | TXGBE_RDB_FDIR_CMD_QUEUE_EN;
+ if (queue == TXGBE_RDB_FDIR_DROP_QUEUE)
+ fdircmd |= TXGBE_RDB_FDIR_CMD_DROP;
+ fdircmd |= input->formatted.flow_type <<
+ TXGBE_RDB_FDIR_CMD_FLOW_TYPE_SHIFT;
+ fdircmd |= (u32)queue << TXGBE_RDB_FDIR_CMD_RX_QUEUE_SHIFT;
+ fdircmd |= (u32)input->formatted.vm_pool <<
+ TXGBE_RDB_FDIR_CMD_VT_POOL_SHIFT;
+
+ wr32(hw, TXGBE_RDB_FDIR_CMD, fdircmd);
+ err = txgbe_fdir_check_cmd_complete(hw, &fdircmd);
+ if (err) {
+ DEBUGOUT("Flow Director command did not complete!\n");
+ return err;
+ }
+
+ return 0;
+}
+
+s32 txgbe_fdir_erase_perfect_filter(struct txgbe_hw *hw,
+ union txgbe_atr_input *input,
+ u16 soft_id)
+{
+ u32 fdirhash;
+ u32 fdircmd;
+ s32 err;
+
+ /* configure FDIRHASH register */
+ fdirhash = input->formatted.bkt_hash;
+ fdirhash |= soft_id << TXGBE_RDB_FDIR_HASH_SIG_SW_INDEX_SHIFT;
+ wr32(hw, TXGBE_RDB_FDIR_HASH, fdirhash);
+
+ /* flush hash to HW */
+ TXGBE_WRITE_FLUSH(hw);
+
+ /* Query if filter is present */
+ wr32(hw, TXGBE_RDB_FDIR_CMD,
+ TXGBE_RDB_FDIR_CMD_CMD_QUERY_REM_FILT);
+
+ err = txgbe_fdir_check_cmd_complete(hw, &fdircmd);
+ if (err) {
+ DEBUGOUT("Flow Director command did not complete!\n");
+ return err;
+ }
+
+ /* if filter exists in hardware then remove it */
+ if (fdircmd & TXGBE_RDB_FDIR_CMD_FILTER_VALID) {
+ wr32(hw, TXGBE_RDB_FDIR_HASH, fdirhash);
+ TXGBE_WRITE_FLUSH(hw);
+ wr32(hw, TXGBE_RDB_FDIR_CMD,
+ TXGBE_RDB_FDIR_CMD_CMD_REMOVE_FLOW);
+ }
+
+ return 0;
+}
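+
+/*
+ * Usage sketch (assumed flow, not a verbatim call site): a perfect filter
+ * lifetime pairs the two helpers above, keyed by the same soft_id:
+ *
+ *	txgbe_atr_compute_perfect_hash(&input, &mask);
+ *	txgbe_fdir_write_perfect_filter(hw, &input, soft_id, queue, false);
+ *	...
+ *	txgbe_fdir_erase_perfect_filter(hw, &input, soft_id);
+ *
+ * erase re-programs FDIRHASH with the same bucket hash and soft_id,
+ * queries the hardware, and only issues REMOVE_FLOW if the filter is
+ * still valid.
+ */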
+
+
+/**
+ * txgbe_start_hw - Prepare hardware for Tx/Rx
+ * @hw: pointer to hardware structure
+ *
+ * Starts the hardware: sets the media type, clears the VLAN filter table
+ * and statistics counters, sets up flow control, and clears the per-queue
+ * Tx rate limiters.
+ **/
+s32 txgbe_start_hw(struct txgbe_hw *hw)
+{
+ int ret_val = 0;
+ u32 i;
+
+ DEBUGFUNC("\n");
+
+ /* Set the media type */
+ hw->phy.media_type = TCALL(hw, mac.ops.get_media_type);
+
+ /* PHY ops initialization must be done in reset_hw() */
+
+ /* Clear the VLAN filter table */
+ TCALL(hw, mac.ops.clear_vfta);
+
+ /* Clear statistics registers */
+ TCALL(hw, mac.ops.clear_hw_cntrs);
+
+ TXGBE_WRITE_FLUSH(hw);
+
+ /* Setup flow control */
+ ret_val = TCALL(hw, mac.ops.setup_fc);
+
+ /* Clear the rate limiters */
+ for (i = 0; i < hw->mac.max_tx_queues; i++) {
+ wr32(hw, TXGBE_TDM_RP_IDX, i);
+ wr32(hw, TXGBE_TDM_RP_RATE, 0);
+ }
+ TXGBE_WRITE_FLUSH(hw);
+
+ /* Clear adapter stopped flag */
+ hw->adapter_stopped = false;
+
+ /* We need to run link autotry after the driver loads */
+ hw->mac.autotry_restart = true;
+
+ return ret_val;
+}
+
+/**
+ * txgbe_identify_phy - Get physical layer module
+ * @hw: pointer to hardware structure
+ *
+ * Determines the physical layer module found on the current adapter.
+ * If PHY already detected, maintains current PHY type in hw struct,
+ * otherwise executes the PHY detection routine.
+ **/
+s32 txgbe_identify_phy(struct txgbe_hw *hw)
+{
+ /* Detect PHY if not unknown - returns success if already detected. */
+ s32 status = TXGBE_ERR_PHY_ADDR_INVALID;
+ enum txgbe_media_type media_type;
+
+ DEBUGFUNC("\n");
+
+	if (!hw->phy.phy_semaphore_mask)
+		hw->phy.phy_semaphore_mask = TXGBE_MNG_SWFW_SYNC_SW_PHY;
+
+ media_type = TCALL(hw, mac.ops.get_media_type);
+ if (media_type == txgbe_media_type_copper) {
+ status = txgbe_init_external_phy(hw);
+		if (status != 0)
+			return status;
+ txgbe_get_phy_id(hw);
+ hw->phy.type = txgbe_get_phy_type_from_id(hw);
+ status = 0;
+ } else if (media_type == txgbe_media_type_fiber) {
+ status = txgbe_identify_module(hw);
+ } else {
+ hw->phy.type = txgbe_phy_none;
+ status = 0;
+ }
+
+ /* Return error if SFP module has been detected but is not supported */
+ if (hw->phy.type == txgbe_phy_sfp_unsupported)
+ return TXGBE_ERR_SFP_NOT_SUPPORTED;
+
+ return status;
+}
+
+
+/**
+ * txgbe_enable_rx_dma - Enable the Rx DMA unit on sapphire
+ * @hw: pointer to hardware structure
+ * @regval: register value to write to RXCTRL
+ *
+ * Enables the Rx DMA unit for sapphire
+ **/
+s32 txgbe_enable_rx_dma(struct txgbe_hw *hw, u32 regval)
+{
+ DEBUGFUNC("\n");
+
+ /*
+ * Workaround for sapphire silicon errata when enabling the Rx datapath.
+ * If traffic is incoming before we enable the Rx unit, it could hang
+ * the Rx DMA unit. Therefore, make sure the security engine is
+ * completely disabled prior to enabling the Rx unit.
+ */
+
+ TCALL(hw, mac.ops.disable_sec_rx_path);
+
+ if (regval & TXGBE_RDB_PB_CTL_RXEN)
+ TCALL(hw, mac.ops.enable_rx);
+ else
+ TCALL(hw, mac.ops.disable_rx);
+
+ TCALL(hw, mac.ops.enable_sec_rx_path);
+
+ return 0;
+}
+
+/**
+ * txgbe_init_flash_params - Initialize flash params
+ * @hw: pointer to hardware structure
+ *
+ * Initializes the flash parameters txgbe_flash_info within the
+ * txgbe_hw struct in order to set up flash access.
+ **/
+s32 txgbe_init_flash_params(struct txgbe_hw *hw)
+{
+ struct txgbe_flash_info *flash = &hw->flash;
+ u32 eec;
+
+ DEBUGFUNC("\n");
+
+	eec = 0x1000000; /* 16 MB flash, in bytes */
+ flash->semaphore_delay = 10;
+ flash->dword_size = (eec >> 2);
+ flash->address_bits = 24;
+ DEBUGOUT3("FLASH params: size = %d, address bits: %d\n",
+ flash->dword_size,
+ flash->address_bits);
+
+ return 0;
+}
+
+/**
+ * txgbe_read_flash_buffer - Read FLASH dword(s) using
+ * fastest available method
+ *
+ * @hw: pointer to hardware structure
+ * @offset: offset of dword in EEPROM to read
+ * @dwords: number of dwords
+ * @data: dword(s) read from the EEPROM
+ *
+ * Retrieves 32 bit dword(s) read from EEPROM
+ **/
+s32 txgbe_read_flash_buffer(struct txgbe_hw *hw, u32 offset,
+ u32 dwords, u32 *data)
+{
+ s32 status = 0;
+ u32 i;
+
+ DEBUGFUNC("\n");
+
+ TCALL(hw, eeprom.ops.init_params);
+
+ if (!dwords || offset + dwords >= hw->flash.dword_size) {
+ status = TXGBE_ERR_INVALID_ARGUMENT;
+ ERROR_REPORT1(TXGBE_ERROR_ARGUMENT, "Invalid FLASH arguments");
+ return status;
+ }
+
+	for (i = 0; i < dwords; i++) {
+		/* issue the read command, then poll for completion */
+		wr32(hw, TXGBE_SPI_CMD,
+			TXGBE_SPI_CMD_ADDR(offset + i) |
+			TXGBE_SPI_CMD_CMD(0x0));
+
+		status = po32m(hw, TXGBE_SPI_STATUS,
+			TXGBE_SPI_STATUS_OPDONE, TXGBE_SPI_STATUS_OPDONE,
+			TXGBE_SPI_TIMEOUT, 0);
+		if (status) {
+			DEBUGOUT("FLASH read timed out\n");
+			break;
+		}
+		/* capture the dword returned for this offset */
+		data[i] = rd32(hw, TXGBE_SPI_DATA);
+	}
+
+ return status;
+}
+
+/**
+ * txgbe_write_flash_buffer - Write FLASH dword(s) using
+ * fastest available method
+ *
+ * @hw: pointer to hardware structure
+ * @offset: offset of dword in EEPROM to write
+ * @dwords: number of dwords
+ * @data: dword(s) to write to the EEPROM
+ *
+ **/
+s32 txgbe_write_flash_buffer(struct txgbe_hw *hw, u32 offset,
+ u32 dwords, u32 *data)
+{
+ s32 status = 0;
+ u32 i;
+
+ DEBUGFUNC("\n");
+
+ TCALL(hw, eeprom.ops.init_params);
+
+ if (!dwords || offset + dwords >= hw->flash.dword_size) {
+ status = TXGBE_ERR_INVALID_ARGUMENT;
+ ERROR_REPORT1(TXGBE_ERROR_ARGUMENT, "Invalid FLASH arguments");
+ return status;
+ }
+
+	for (i = 0; i < dwords; i++) {
+		/* load the data word, then issue the write command */
+		wr32(hw, TXGBE_SPI_DATA, data[i]);
+		wr32(hw, TXGBE_SPI_CMD,
+			TXGBE_SPI_CMD_ADDR(offset + i) |
+			TXGBE_SPI_CMD_CMD(0x1));
+
+		status = po32m(hw, TXGBE_SPI_STATUS,
+			TXGBE_SPI_STATUS_OPDONE, TXGBE_SPI_STATUS_OPDONE,
+			TXGBE_SPI_TIMEOUT, 0);
+		if (status != 0) {
+			DEBUGOUT("FLASH write timed out\n");
+			break;
+		}
+	}
+
+ return status;
+}
+
+/**
+ * txgbe_init_eeprom_params - Initialize EEPROM params
+ * @hw: pointer to hardware structure
+ *
+ * Initializes the EEPROM parameters txgbe_eeprom_info within the
+ * txgbe_hw struct in order to set up EEPROM access.
+ **/
+s32 txgbe_init_eeprom_params(struct txgbe_hw *hw)
+{
+ struct txgbe_eeprom_info *eeprom = &hw->eeprom;
+ u16 eeprom_size;
+ s32 status = 0;
+ u16 data;
+
+ DEBUGFUNC("\n");
+
+ if (eeprom->type == txgbe_eeprom_uninitialized) {
+ eeprom->semaphore_delay = 10;
+ eeprom->type = txgbe_eeprom_none;
+
+ if (!(rd32(hw, TXGBE_SPI_STATUS) &
+ TXGBE_SPI_STATUS_FLASH_BYPASS)) {
+ eeprom->type = txgbe_flash;
+
+ eeprom_size = 4096;
+ eeprom->word_size = eeprom_size >> 1;
+
+ DEBUGOUT2("Eeprom params: type = %d, size = %d\n",
+ eeprom->type, eeprom->word_size);
+ }
+ }
+
+ status = TCALL(hw, eeprom.ops.read, TXGBE_SW_REGION_PTR,
+ &data);
+ if (status) {
+ DEBUGOUT("NVM Read Error\n");
+ return status;
+ }
+ eeprom->sw_region_offset = data >> 1;
+
+ return status;
+}
+
+/**
+ * txgbe_read_ee_hostif_data - Read EEPROM word using a host interface cmd,
+ * assuming that the semaphore is already obtained.
+ * @hw: pointer to hardware structure
+ * @offset: offset of word in the EEPROM to read
+ * @data: word read from the EEPROM
+ *
+ * Reads a 16 bit word from the EEPROM using the hostif.
+ **/
+s32 txgbe_read_ee_hostif_data(struct txgbe_hw *hw, u16 offset,
+ u16 *data)
+{
+ s32 status;
+ struct txgbe_hic_read_shadow_ram buffer;
+
+ DEBUGFUNC("\n");
+ buffer.hdr.req.cmd = FW_READ_SHADOW_RAM_CMD;
+ buffer.hdr.req.buf_lenh = 0;
+ buffer.hdr.req.buf_lenl = FW_READ_SHADOW_RAM_LEN;
+ buffer.hdr.req.checksum = FW_DEFAULT_CHECKSUM;
+
+ /* convert offset from words to bytes */
+ buffer.address = TXGBE_CPU_TO_BE32(offset * 2);
+ /* one word */
+ buffer.length = TXGBE_CPU_TO_BE16(sizeof(u16));
+
+ status = txgbe_host_interface_command(hw, (u32 *)&buffer,
+ sizeof(buffer),
+ TXGBE_HI_COMMAND_TIMEOUT, false);
+
+	if (status)
+		return status;
+
+	if (!txgbe_check_mng_access(hw))
+		return TXGBE_ERR_MNG_ACCESS_FAILED;
+
+	*data = (u16)rd32a(hw, TXGBE_MNG_MBOX, FW_NVM_DATA_OFFSET);
+
+ return 0;
+}
+
+/**
+ * txgbe_read_ee_hostif - Read EEPROM word using a host interface cmd
+ * @hw: pointer to hardware structure
+ * @offset: offset of word in the EEPROM to read
+ * @data: word read from the EEPROM
+ *
+ * Reads a 16 bit word from the EEPROM using the hostif.
+ **/
+s32 txgbe_read_ee_hostif(struct txgbe_hw *hw, u16 offset,
+ u16 *data)
+{
+ s32 status = 0;
+
+ DEBUGFUNC("\n");
+
+ if (TCALL(hw, mac.ops.acquire_swfw_sync,
+ TXGBE_MNG_SWFW_SYNC_SW_FLASH) == 0) {
+ status = txgbe_read_ee_hostif_data(hw, offset, data);
+ TCALL(hw, mac.ops.release_swfw_sync,
+ TXGBE_MNG_SWFW_SYNC_SW_FLASH);
+ } else {
+ status = TXGBE_ERR_SWFW_SYNC;
+ }
+
+ return status;
+}
+
+/**
+ * txgbe_read_ee_hostif_buffer- Read EEPROM word(s) using hostif
+ * @hw: pointer to hardware structure
+ * @offset: offset of word in the EEPROM to read
+ * @words: number of words
+ * @data: word(s) read from the EEPROM
+ *
+ * Reads a 16 bit word(s) from the EEPROM using the hostif.
+ **/
+s32 txgbe_read_ee_hostif_buffer(struct txgbe_hw *hw,
+ u16 offset, u16 words, u16 *data)
+{
+ struct txgbe_hic_read_shadow_ram buffer;
+ u32 current_word = 0;
+ u16 words_to_read;
+ s32 status;
+ u32 i;
+ u32 value = 0;
+
+ DEBUGFUNC("\n");
+
+ /* Take semaphore for the entire operation. */
+ status = TCALL(hw, mac.ops.acquire_swfw_sync,
+ TXGBE_MNG_SWFW_SYNC_SW_FLASH);
+ if (status) {
+ DEBUGOUT("EEPROM read buffer - semaphore failed\n");
+ return status;
+ }
+ while (words) {
+ if (words > FW_MAX_READ_BUFFER_SIZE / 2)
+ words_to_read = FW_MAX_READ_BUFFER_SIZE / 2;
+ else
+ words_to_read = words;
+
+ buffer.hdr.req.cmd = FW_READ_SHADOW_RAM_CMD;
+ buffer.hdr.req.buf_lenh = 0;
+ buffer.hdr.req.buf_lenl = FW_READ_SHADOW_RAM_LEN;
+ buffer.hdr.req.checksum = FW_DEFAULT_CHECKSUM;
+
+ /* convert offset from words to bytes */
+ buffer.address = TXGBE_CPU_TO_BE32((offset + current_word) * 2);
+ buffer.length = TXGBE_CPU_TO_BE16(words_to_read * 2);
+
+ status = txgbe_host_interface_command(hw, (u32 *)&buffer,
+ sizeof(buffer),
+ TXGBE_HI_COMMAND_TIMEOUT,
+ false);
+
+ if (status) {
+ DEBUGOUT("Host interface command failed\n");
+ goto out;
+ }
+
+ for (i = 0; i < words_to_read; i++) {
+ u32 reg = TXGBE_MNG_MBOX + (FW_NVM_DATA_OFFSET << 2) +
+ 2 * i;
+			if (txgbe_check_mng_access(hw)) {
+				value = rd32(hw, reg);
+			} else {
+				status = TXGBE_ERR_MNG_ACCESS_FAILED;
+				/* release the semaphore before returning */
+				goto out;
+			}
+ data[current_word] = (u16)(value & 0xffff);
+ current_word++;
+ i++;
+ if (i < words_to_read) {
+ value >>= 16;
+ data[current_word] = (u16)(value & 0xffff);
+ current_word++;
+ }
+ }
+ words -= words_to_read;
+ }
+
+out:
+ TCALL(hw, mac.ops.release_swfw_sync,
+ TXGBE_MNG_SWFW_SYNC_SW_FLASH);
+ return status;
+}
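+
+/*
+ * Note on the unpacking loop above: each 32-bit mailbox register holds two
+ * 16-bit EEPROM words (low word first), which is why the loop advances i a
+ * second time after storing the low half. For example, a 4-word read at
+ * offset 0 consumes two registers: reg0 -> words 0,1 and reg1 -> words 2,3.
+ */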
+
+/**
+ * txgbe_write_ee_hostif_data - Write EEPROM word using hostif
+ * @hw: pointer to hardware structure
+ * @offset: offset of word in the EEPROM to write
+ * @data: word to write to the EEPROM
+ *
+ * Write a 16 bit word to the EEPROM using the hostif.
+ **/
+s32 txgbe_write_ee_hostif_data(struct txgbe_hw *hw, u16 offset,
+ u16 data)
+{
+ s32 status;
+ struct txgbe_hic_write_shadow_ram buffer;
+
+ DEBUGFUNC("\n");
+
+ buffer.hdr.req.cmd = FW_WRITE_SHADOW_RAM_CMD;
+ buffer.hdr.req.buf_lenh = 0;
+ buffer.hdr.req.buf_lenl = FW_WRITE_SHADOW_RAM_LEN;
+ buffer.hdr.req.checksum = FW_DEFAULT_CHECKSUM;
+
+ /* one word */
+ buffer.length = TXGBE_CPU_TO_BE16(sizeof(u16));
+ buffer.data = data;
+ buffer.address = TXGBE_CPU_TO_BE32(offset * 2);
+
+ status = txgbe_host_interface_command(hw, (u32 *)&buffer,
+ sizeof(buffer),
+ TXGBE_HI_COMMAND_TIMEOUT, false);
+
+ return status;
+}
+
+/**
+ * txgbe_write_ee_hostif - Write EEPROM word using hostif
+ * @hw: pointer to hardware structure
+ * @offset: offset of word in the EEPROM to write
+ * @data: word to write to the EEPROM
+ *
+ * Write a 16 bit word to the EEPROM using the hostif.
+ **/
+s32 txgbe_write_ee_hostif(struct txgbe_hw *hw, u16 offset,
+ u16 data)
+{
+ s32 status = 0;
+
+ DEBUGFUNC("\n");
+
+ if (TCALL(hw, mac.ops.acquire_swfw_sync,
+ TXGBE_MNG_SWFW_SYNC_SW_FLASH) == 0) {
+ status = txgbe_write_ee_hostif_data(hw, offset, data);
+ TCALL(hw, mac.ops.release_swfw_sync,
+ TXGBE_MNG_SWFW_SYNC_SW_FLASH);
+ } else {
+ DEBUGOUT("write ee hostif failed to get semaphore");
+ status = TXGBE_ERR_SWFW_SYNC;
+ }
+
+ return status;
+}
+
+/**
+ * txgbe_write_ee_hostif_buffer - Write EEPROM word(s) using hostif
+ * @hw: pointer to hardware structure
+ * @offset: offset of word in the EEPROM to write
+ * @words: number of words
+ * @data: word(s) write to the EEPROM
+ *
+ * Write a 16 bit word(s) to the EEPROM using the hostif.
+ **/
+s32 txgbe_write_ee_hostif_buffer(struct txgbe_hw *hw,
+ u16 offset, u16 words, u16 *data)
+{
+ s32 status = 0;
+ u16 i = 0;
+
+ DEBUGFUNC("\n");
+
+ /* Take semaphore for the entire operation. */
+ status = TCALL(hw, mac.ops.acquire_swfw_sync,
+ TXGBE_MNG_SWFW_SYNC_SW_FLASH);
+ if (status != 0) {
+ DEBUGOUT("EEPROM write buffer - semaphore failed\n");
+ goto out;
+ }
+
+ for (i = 0; i < words; i++) {
+ status = txgbe_write_ee_hostif_data(hw, offset + i,
+ data[i]);
+
+ if (status != 0) {
+ DEBUGOUT("Eeprom buffered write failed\n");
+ break;
+ }
+ }
+
+ TCALL(hw, mac.ops.release_swfw_sync, TXGBE_MNG_SWFW_SYNC_SW_FLASH);
+out:
+
+ return status;
+}
+
+/**
+ * txgbe_calc_eeprom_checksum - Calculates and returns the checksum
+ * @hw: pointer to hardware structure
+ *
+ * Returns a negative error code on error, or the 16-bit checksum
+ **/
+s32 txgbe_calc_eeprom_checksum(struct txgbe_hw *hw)
+{
+	u16 *eeprom_ptrs = NULL;
+	s32 status;
+	u16 checksum = 0;
+	u16 i;
+
+	DEBUGFUNC("\n");
+
+	TCALL(hw, eeprom.ops.init_params);
+
+	eeprom_ptrs = (u16 *)vmalloc(TXGBE_EEPROM_LAST_WORD *
+			sizeof(u16));
+	if (!eeprom_ptrs)
+		return TXGBE_ERR_NO_SPACE;
+
+	/* Read pointer area */
+	status = txgbe_read_ee_hostif_buffer(hw, 0,
+					TXGBE_EEPROM_LAST_WORD,
+					eeprom_ptrs);
+	if (status) {
+		DEBUGOUT("Failed to read EEPROM image\n");
+		vfree(eeprom_ptrs);
+		return status;
+	}
+
+	for (i = 0; i < TXGBE_EEPROM_LAST_WORD; i++)
+		if (i != hw->eeprom.sw_region_offset + TXGBE_EEPROM_CHECKSUM)
+			checksum += eeprom_ptrs[i];
+
+	checksum = (u16)TXGBE_EEPROM_SUM - checksum;
+	vfree(eeprom_ptrs);
+
+ return (s32)checksum;
+}
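+
+/*
+ * Worked example (illustrative only): if the words outside the checksum
+ * slot sum to 0x1234 (mod 2^16), the stored checksum is
+ * TXGBE_EEPROM_SUM - 0x1234, so that summing every word including the
+ * checksum yields the constant TXGBE_EEPROM_SUM. This is the invariant
+ * txgbe_validate_eeprom_checksum() relies on when it compares the stored
+ * and recomputed values.
+ */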
+
+/**
+ * txgbe_update_eeprom_checksum - Updates the EEPROM checksum and flash
+ * @hw: pointer to hardware structure
+ *
+ * After writing EEPROM to shadow RAM using EEWR register, software calculates
+ * checksum and updates the EEPROM and instructs the hardware to update
+ * the flash.
+ **/
+s32 txgbe_update_eeprom_checksum(struct txgbe_hw *hw)
+{
+ s32 status;
+ u16 checksum = 0;
+
+ DEBUGFUNC("\n");
+
+ /* Read the first word from the EEPROM. If this times out or fails, do
+ * not continue or we could be in for a very long wait while every
+ * EEPROM read fails
+ */
+ status = txgbe_read_ee_hostif(hw, 0, &checksum);
+ if (status) {
+ DEBUGOUT("EEPROM read failed\n");
+ return status;
+ }
+
+ status = txgbe_calc_eeprom_checksum(hw);
+ if (status < 0)
+ return status;
+
+ checksum = (u16)(status & 0xffff);
+
+	return txgbe_write_ee_hostif(hw, TXGBE_EEPROM_CHECKSUM,
+				     checksum);
+}
+
+/**
+ * txgbe_validate_eeprom_checksum - Validate EEPROM checksum
+ * @hw: pointer to hardware structure
+ * @checksum_val: calculated checksum
+ *
+ * Performs checksum calculation and validates the EEPROM checksum. If the
+ * caller does not need checksum_val, the value can be NULL.
+ **/
+s32 txgbe_validate_eeprom_checksum(struct txgbe_hw *hw,
+ u16 *checksum_val)
+{
+ s32 status;
+ u16 checksum;
+ u16 read_checksum = 0;
+
+ DEBUGFUNC("\n");
+
+ /* Read the first word from the EEPROM. If this times out or fails, do
+ * not continue or we could be in for a very long wait while every
+ * EEPROM read fails
+ */
+ status = TCALL(hw, eeprom.ops.read, 0, &checksum);
+ if (status) {
+ DEBUGOUT("EEPROM read failed\n");
+ return status;
+ }
+
+ status = TCALL(hw, eeprom.ops.calc_checksum);
+ if (status < 0)
+ return status;
+
+ checksum = (u16)(status & 0xffff);
+
+ status = txgbe_read_ee_hostif(hw, hw->eeprom.sw_region_offset +
+ TXGBE_EEPROM_CHECKSUM,
+ &read_checksum);
+ if (status)
+ return status;
+
+ /* Verify read checksum from EEPROM is the same as
+ * calculated checksum
+ */
+ if (read_checksum != checksum) {
+ status = TXGBE_ERR_EEPROM_CHECKSUM;
+ ERROR_REPORT1(TXGBE_ERROR_INVALID_STATE,
+ "Invalid EEPROM checksum\n");
+ }
+
+ /* If the user cares, return the calculated checksum */
+ if (checksum_val)
+ *checksum_val = checksum;
+
+ return status;
+}
+
+/**
+ * txgbe_update_flash - Instruct HW to copy EEPROM to Flash device
+ * @hw: pointer to hardware structure
+ *
+ * Issue a shadow RAM dump to FW to copy EEPROM from shadow RAM to the flash.
+ **/
+s32 txgbe_update_flash(struct txgbe_hw *hw)
+{
+ s32 status = 0;
+ union txgbe_hic_hdr2 buffer;
+
+ DEBUGFUNC("\n");
+
+ buffer.req.cmd = FW_SHADOW_RAM_DUMP_CMD;
+ buffer.req.buf_lenh = 0;
+ buffer.req.buf_lenl = FW_SHADOW_RAM_DUMP_LEN;
+ buffer.req.checksum = FW_DEFAULT_CHECKSUM;
+
+ status = txgbe_host_interface_command(hw, (u32 *)&buffer,
+ sizeof(buffer),
+ TXGBE_HI_COMMAND_TIMEOUT, false);
+
+ return status;
+}
+
+
+/**
+ * txgbe_check_mac_link - Determine link and speed status
+ * @hw: pointer to hardware structure
+ * @speed: pointer to link speed
+ * @link_up: true when link is up
+ * @link_up_wait_to_complete: bool used to wait for link up or not
+ *
+ * Reads the links register to determine if link is up and the current speed
+ **/
+s32 txgbe_check_mac_link(struct txgbe_hw *hw, u32 *speed,
+ bool *link_up, bool link_up_wait_to_complete)
+{
+ u32 links_reg = 0;
+ u32 i;
+ u16 value;
+
+ DEBUGFUNC("\n");
+
+ if (link_up_wait_to_complete) {
+ for (i = 0; i < TXGBE_LINK_UP_TIME; i++) {
+ if (TCALL(hw, mac.ops.get_media_type) == txgbe_media_type_copper &&
+ ((hw->subsystem_id & 0xF0) != TXGBE_ID_SFI_XAUI)) {
+ /* read ext phy link status */
+ txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 0x03, 0x8008, &value);
+ if (value & 0x400) {
+ *link_up = true;
+ } else {
+ *link_up = false;
+ }
+ } else {
+ *link_up = true;
+ }
+ if (*link_up) {
+ links_reg = rd32(hw,
+ TXGBE_CFG_PORT_ST);
+ if (links_reg & TXGBE_CFG_PORT_ST_LINK_UP) {
+ *link_up = true;
+ break;
+ } else {
+ *link_up = false;
+ }
+ }
+ msleep(100);
+ }
+ } else {
+ if (TCALL(hw, mac.ops.get_media_type) == txgbe_media_type_copper &&
+ ((hw->subsystem_id & 0xF0) != TXGBE_ID_SFI_XAUI)) {
+ /* read ext phy link status */
+ txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 0x03, 0x8008, &value);
+ if (value & 0x400) {
+ *link_up = true;
+ } else {
+ *link_up = false;
+ }
+ } else {
+ *link_up = true;
+ }
+ if (*link_up) {
+ links_reg = rd32(hw, TXGBE_CFG_PORT_ST);
+ if (links_reg & TXGBE_CFG_PORT_ST_LINK_UP) {
+ *link_up = true;
+ } else {
+ *link_up = false;
+ }
+ }
+ }
+
+ if (*link_up) {
+ if (TCALL(hw, mac.ops.get_media_type) == txgbe_media_type_copper &&
+ ((hw->subsystem_id & 0xF0) != TXGBE_ID_SFI_XAUI)) {
+ if ((value & 0xc000) == 0xc000) {
+ *speed = TXGBE_LINK_SPEED_10GB_FULL;
+ } else if ((value & 0xc000) == 0x8000) {
+ *speed = TXGBE_LINK_SPEED_1GB_FULL;
+ } else if ((value & 0xc000) == 0x4000) {
+ *speed = TXGBE_LINK_SPEED_100_FULL;
+ } else if ((value & 0xc000) == 0x0000) {
+ *speed = TXGBE_LINK_SPEED_10_FULL;
+ }
+ } else {
+ if ((links_reg & TXGBE_CFG_PORT_ST_LINK_10G) ==
+ TXGBE_CFG_PORT_ST_LINK_10G) {
+ *speed = TXGBE_LINK_SPEED_10GB_FULL;
+			} else if ((links_reg & TXGBE_CFG_PORT_ST_LINK_1G) ==
+					TXGBE_CFG_PORT_ST_LINK_1G) {
+				*speed = TXGBE_LINK_SPEED_1GB_FULL;
+			} else if ((links_reg & TXGBE_CFG_PORT_ST_LINK_100M) ==
+					TXGBE_CFG_PORT_ST_LINK_100M) {
+				*speed = TXGBE_LINK_SPEED_100_FULL;
+			} else {
+				*speed = TXGBE_LINK_SPEED_10_FULL;
+			}
+		}
+	} else {
+		*speed = TXGBE_LINK_SPEED_UNKNOWN;
+	}
+
+ return 0;
+}
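+
+/*
+ * Usage sketch (assumed call site, for documentation only): callers would
+ * typically poll link state as
+ *
+ *	u32 speed;
+ *	bool link_up;
+ *
+ *	txgbe_check_mac_link(hw, &speed, &link_up, false);
+ *
+ * where a false link_up_wait_to_complete returns the instantaneous state
+ * and true retries for up to TXGBE_LINK_UP_TIME iterations at 100 ms each.
+ */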
+
+/**
+ * txgbe_setup_eee - Enable/disable EEE support
+ * @hw: pointer to the HW structure
+ * @enable_eee: boolean flag to enable EEE
+ *
+ * Enable/disable EEE based on enable_eee flag.
+ * Auto-negotiation must be started after BASE-T EEE bits in PHY register 7.3C
+ * are modified.
+ *
+ **/
+s32 txgbe_setup_eee(struct txgbe_hw *hw, bool enable_eee)
+{
+	/* EEE configuration is not implemented yet; accept and ignore */
+ UNREFERENCED_PARAMETER(hw);
+ UNREFERENCED_PARAMETER(enable_eee);
+ DEBUGFUNC("\n");
+
+ return 0;
+}
diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe_hw.h b/drivers/net/ethernet/netswift/txgbe/txgbe_hw.h
new file mode 100644
index 0000000000000..97ce62a2cd26a
--- /dev/null
+++ b/drivers/net/ethernet/netswift/txgbe/txgbe_hw.h
@@ -0,0 +1,264 @@
+/*
+ * WangXun 10 Gigabit PCI Express Linux driver
+ * Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ */
+
+#ifndef _TXGBE_HW_H_
+#define _TXGBE_HW_H_
+
+#define TXGBE_EMC_INTERNAL_DATA 0x00
+#define TXGBE_EMC_INTERNAL_THERM_LIMIT 0x20
+#define TXGBE_EMC_DIODE1_DATA 0x01
+#define TXGBE_EMC_DIODE1_THERM_LIMIT 0x19
+#define TXGBE_EMC_DIODE2_DATA 0x23
+#define TXGBE_EMC_DIODE2_THERM_LIMIT 0x1A
+#define TXGBE_EMC_DIODE3_DATA 0x2A
+#define TXGBE_EMC_DIODE3_THERM_LIMIT 0x30
+
+/**
+ * Packet Type decoding
+ **/
+/* txgbe_dec_ptype.mac: outer mac */
+enum txgbe_dec_ptype_mac {
+ TXGBE_DEC_PTYPE_MAC_IP = 0,
+ TXGBE_DEC_PTYPE_MAC_L2 = 2,
+ TXGBE_DEC_PTYPE_MAC_FCOE = 3,
+};
+
+/* txgbe_dec_ptype.[e]ip: outer&encaped ip */
+#define TXGBE_DEC_PTYPE_IP_FRAG (0x4)
+enum txgbe_dec_ptype_ip {
+ TXGBE_DEC_PTYPE_IP_NONE = 0,
+ TXGBE_DEC_PTYPE_IP_IPV4 = 1,
+ TXGBE_DEC_PTYPE_IP_IPV6 = 2,
+ TXGBE_DEC_PTYPE_IP_FGV4 =
+ (TXGBE_DEC_PTYPE_IP_FRAG | TXGBE_DEC_PTYPE_IP_IPV4),
+ TXGBE_DEC_PTYPE_IP_FGV6 =
+ (TXGBE_DEC_PTYPE_IP_FRAG | TXGBE_DEC_PTYPE_IP_IPV6),
+};
+
+/* txgbe_dec_ptype.etype: encaped type */
+enum txgbe_dec_ptype_etype {
+ TXGBE_DEC_PTYPE_ETYPE_NONE = 0,
+ TXGBE_DEC_PTYPE_ETYPE_IPIP = 1, /* IP+IP */
+ TXGBE_DEC_PTYPE_ETYPE_IG = 2, /* IP+GRE */
+ TXGBE_DEC_PTYPE_ETYPE_IGM = 3, /* IP+GRE+MAC */
+ TXGBE_DEC_PTYPE_ETYPE_IGMV = 4, /* IP+GRE+MAC+VLAN */
+};
+
+/* txgbe_dec_ptype.proto: payload proto */
+enum txgbe_dec_ptype_prot {
+ TXGBE_DEC_PTYPE_PROT_NONE = 0,
+ TXGBE_DEC_PTYPE_PROT_UDP = 1,
+ TXGBE_DEC_PTYPE_PROT_TCP = 2,
+ TXGBE_DEC_PTYPE_PROT_SCTP = 3,
+ TXGBE_DEC_PTYPE_PROT_ICMP = 4,
+ TXGBE_DEC_PTYPE_PROT_TS = 5, /* time sync */
+};
+
+/* txgbe_dec_ptype.layer: payload layer */
+enum txgbe_dec_ptype_layer {
+ TXGBE_DEC_PTYPE_LAYER_NONE = 0,
+ TXGBE_DEC_PTYPE_LAYER_PAY2 = 1,
+ TXGBE_DEC_PTYPE_LAYER_PAY3 = 2,
+ TXGBE_DEC_PTYPE_LAYER_PAY4 = 3,
+};
+
+struct txgbe_dec_ptype {
+ u32 ptype:8;
+ u32 known:1;
+ u32 mac:2; /* outer mac */
+	u32 ip:3; /* outer ip */
+ u32 etype:3; /* encaped type */
+ u32 eip:3; /* encaped ip */
+ u32 prot:4; /* payload proto */
+ u32 layer:3; /* payload layer */
+};
+typedef struct txgbe_dec_ptype txgbe_dptype;
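+
+/*
+ * Illustrative decode (assumed encoding, for documentation only): a plain
+ * TCP-over-IPv4 packet would populate the struct roughly as
+ *
+ *	{ .known = 1, .mac = TXGBE_DEC_PTYPE_MAC_IP,
+ *	  .ip = TXGBE_DEC_PTYPE_IP_IPV4, .etype = TXGBE_DEC_PTYPE_ETYPE_NONE,
+ *	  .prot = TXGBE_DEC_PTYPE_PROT_TCP,
+ *	  .layer = TXGBE_DEC_PTYPE_LAYER_PAY4 }
+ *
+ * with .ptype carrying the raw 8-bit hardware packet type.
+ */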
+
+
+void txgbe_dcb_get_rtrup2tc(struct txgbe_hw *hw, u8 *map);
+u16 txgbe_get_pcie_msix_count(struct txgbe_hw *hw);
+s32 txgbe_init_hw(struct txgbe_hw *hw);
+s32 txgbe_start_hw(struct txgbe_hw *hw);
+s32 txgbe_clear_hw_cntrs(struct txgbe_hw *hw);
+s32 txgbe_read_pba_string(struct txgbe_hw *hw, u8 *pba_num,
+ u32 pba_num_size);
+s32 txgbe_get_mac_addr(struct txgbe_hw *hw, u8 *mac_addr);
+s32 txgbe_get_bus_info(struct txgbe_hw *hw);
+void txgbe_set_pci_config_data(struct txgbe_hw *hw, u16 link_status);
+void txgbe_set_lan_id_multi_port_pcie(struct txgbe_hw *hw);
+s32 txgbe_stop_adapter(struct txgbe_hw *hw);
+
+s32 txgbe_led_on(struct txgbe_hw *hw, u32 index);
+s32 txgbe_led_off(struct txgbe_hw *hw, u32 index);
+
+s32 txgbe_set_rar(struct txgbe_hw *hw, u32 index, u8 *addr, u64 pools,
+ u32 enable_addr);
+s32 txgbe_clear_rar(struct txgbe_hw *hw, u32 index);
+s32 txgbe_init_rx_addrs(struct txgbe_hw *hw);
+s32 txgbe_update_mc_addr_list(struct txgbe_hw *hw, u8 *mc_addr_list,
+ u32 mc_addr_count,
+ txgbe_mc_addr_itr func, bool clear);
+s32 txgbe_update_uc_addr_list(struct txgbe_hw *hw, u8 *addr_list,
+ u32 addr_count, txgbe_mc_addr_itr func);
+s32 txgbe_enable_mc(struct txgbe_hw *hw);
+s32 txgbe_disable_mc(struct txgbe_hw *hw);
+s32 txgbe_disable_sec_rx_path(struct txgbe_hw *hw);
+s32 txgbe_enable_sec_rx_path(struct txgbe_hw *hw);
+
+s32 txgbe_fc_enable(struct txgbe_hw *hw);
+bool txgbe_device_supports_autoneg_fc(struct txgbe_hw *hw);
+void txgbe_fc_autoneg(struct txgbe_hw *hw);
+s32 txgbe_setup_fc(struct txgbe_hw *hw);
+
+s32 txgbe_validate_mac_addr(u8 *mac_addr);
+s32 txgbe_acquire_swfw_sync(struct txgbe_hw *hw, u32 mask);
+void txgbe_release_swfw_sync(struct txgbe_hw *hw, u32 mask);
+s32 txgbe_disable_pcie_master(struct txgbe_hw *hw);
+
+
+s32 txgbe_get_san_mac_addr(struct txgbe_hw *hw, u8 *san_mac_addr);
+s32 txgbe_set_san_mac_addr(struct txgbe_hw *hw, u8 *san_mac_addr);
+
+s32 txgbe_set_vmdq(struct txgbe_hw *hw, u32 rar, u32 vmdq);
+s32 txgbe_set_vmdq_san_mac(struct txgbe_hw *hw, u32 vmdq);
+s32 txgbe_clear_vmdq(struct txgbe_hw *hw, u32 rar, u32 vmdq);
+s32 txgbe_insert_mac_addr(struct txgbe_hw *hw, u8 *addr, u32 vmdq);
+s32 txgbe_init_uta_tables(struct txgbe_hw *hw);
+s32 txgbe_set_vfta(struct txgbe_hw *hw, u32 vlan,
+ u32 vind, bool vlan_on);
+s32 txgbe_set_vlvf(struct txgbe_hw *hw, u32 vlan, u32 vind,
+ bool vlan_on, bool *vfta_changed);
+s32 txgbe_clear_vfta(struct txgbe_hw *hw);
+s32 txgbe_find_vlvf_slot(struct txgbe_hw *hw, u32 vlan);
+
+s32 txgbe_get_wwn_prefix(struct txgbe_hw *hw, u16 *wwnn_prefix,
+ u16 *wwpn_prefix);
+
+void txgbe_set_mac_anti_spoofing(struct txgbe_hw *hw, bool enable, int pf);
+void txgbe_set_vlan_anti_spoofing(struct txgbe_hw *hw, bool enable, int vf);
+void txgbe_set_ethertype_anti_spoofing(struct txgbe_hw *hw,
+ bool enable, int vf);
+s32 txgbe_get_device_caps(struct txgbe_hw *hw, u16 *device_caps);
+void txgbe_set_rxpba(struct txgbe_hw *hw, int num_pb, u32 headroom,
+ int strategy);
+s32 txgbe_set_fw_drv_ver(struct txgbe_hw *hw, u8 maj, u8 min,
+ u8 build, u8 ver);
+s32 txgbe_reset_hostif(struct txgbe_hw *hw);
+u8 txgbe_calculate_checksum(u8 *buffer, u32 length);
+s32 txgbe_host_interface_command(struct txgbe_hw *hw, u32 *buffer,
+ u32 length, u32 timeout, bool return_data);
+
+void txgbe_clear_tx_pending(struct txgbe_hw *hw);
+void txgbe_stop_mac_link_on_d3(struct txgbe_hw *hw);
+bool txgbe_mng_present(struct txgbe_hw *hw);
+bool txgbe_check_mng_access(struct txgbe_hw *hw);
+
+s32 txgbe_get_thermal_sensor_data(struct txgbe_hw *hw);
+s32 txgbe_init_thermal_sensor_thresh(struct txgbe_hw *hw);
+void txgbe_enable_rx(struct txgbe_hw *hw);
+void txgbe_disable_rx(struct txgbe_hw *hw);
+s32 txgbe_setup_mac_link_multispeed_fiber(struct txgbe_hw *hw,
+ u32 speed,
+ bool autoneg_wait_to_complete);
+int txgbe_check_flash_load(struct txgbe_hw *hw, u32 check_bit);
+
+/* @txgbe_api.h */
+s32 txgbe_reinit_fdir_tables(struct txgbe_hw *hw);
+s32 txgbe_init_fdir_signature(struct txgbe_hw *hw, u32 fdirctrl);
+s32 txgbe_init_fdir_perfect(struct txgbe_hw *hw, u32 fdirctrl,
+ bool cloud_mode);
+s32 txgbe_fdir_add_signature_filter(struct txgbe_hw *hw,
+ union txgbe_atr_hash_dword input,
+ union txgbe_atr_hash_dword common,
+ u8 queue);
+s32 txgbe_fdir_set_input_mask(struct txgbe_hw *hw,
+ union txgbe_atr_input *input_mask, bool cloud_mode);
+s32 txgbe_fdir_write_perfect_filter(struct txgbe_hw *hw,
+ union txgbe_atr_input *input,
+ u16 soft_id, u8 queue, bool cloud_mode);
+s32 txgbe_fdir_erase_perfect_filter(struct txgbe_hw *hw,
+ union txgbe_atr_input *input,
+ u16 soft_id);
+s32 txgbe_fdir_add_perfect_filter(struct txgbe_hw *hw,
+ union txgbe_atr_input *input,
+ union txgbe_atr_input *mask,
+ u16 soft_id,
+ u8 queue,
+ bool cloud_mode);
+void txgbe_atr_compute_perfect_hash(union txgbe_atr_input *input,
+ union txgbe_atr_input *mask);
+u32 txgbe_atr_compute_sig_hash(union txgbe_atr_hash_dword input,
+ union txgbe_atr_hash_dword common);
+
+s32 txgbe_get_link_capabilities(struct txgbe_hw *hw,
+ u32 *speed, bool *autoneg);
+enum txgbe_media_type txgbe_get_media_type(struct txgbe_hw *hw);
+void txgbe_disable_tx_laser_multispeed_fiber(struct txgbe_hw *hw);
+void txgbe_enable_tx_laser_multispeed_fiber(struct txgbe_hw *hw);
+void txgbe_flap_tx_laser_multispeed_fiber(struct txgbe_hw *hw);
+void txgbe_set_hard_rate_select_speed(struct txgbe_hw *hw,
+ u32 speed);
+s32 txgbe_setup_mac_link(struct txgbe_hw *hw, u32 speed,
+ bool autoneg_wait_to_complete);
+void txgbe_init_mac_link_ops(struct txgbe_hw *hw);
+s32 txgbe_reset_hw(struct txgbe_hw *hw);
+s32 txgbe_identify_phy(struct txgbe_hw *hw);
+s32 txgbe_init_phy_ops(struct txgbe_hw *hw);
+s32 txgbe_enable_rx_dma(struct txgbe_hw *hw, u32 regval);
+s32 txgbe_init_ops(struct txgbe_hw *hw);
+s32 txgbe_setup_eee(struct txgbe_hw *hw, bool enable_eee);
+
+s32 txgbe_init_flash_params(struct txgbe_hw *hw);
+s32 txgbe_read_flash_buffer(struct txgbe_hw *hw, u32 offset,
+ u32 dwords, u32 *data);
+s32 txgbe_write_flash_buffer(struct txgbe_hw *hw, u32 offset,
+ u32 dwords, u32 *data);
+
+s32 txgbe_read_eeprom(struct txgbe_hw *hw,
+ u16 offset, u16 *data);
+s32 txgbe_read_eeprom_buffer(struct txgbe_hw *hw, u16 offset,
+ u16 words, u16 *data);
+s32 txgbe_init_eeprom_params(struct txgbe_hw *hw);
+s32 txgbe_update_eeprom_checksum(struct txgbe_hw *hw);
+s32 txgbe_calc_eeprom_checksum(struct txgbe_hw *hw);
+s32 txgbe_validate_eeprom_checksum(struct txgbe_hw *hw,
+ u16 *checksum_val);
+s32 txgbe_update_flash(struct txgbe_hw *hw);
+s32 txgbe_write_ee_hostif_buffer(struct txgbe_hw *hw,
+ u16 offset, u16 words, u16 *data);
+s32 txgbe_write_ee_hostif(struct txgbe_hw *hw, u16 offset,
+ u16 data);
+s32 txgbe_read_ee_hostif_buffer(struct txgbe_hw *hw,
+ u16 offset, u16 words, u16 *data);
+s32 txgbe_read_ee_hostif(struct txgbe_hw *hw, u16 offset, u16 *data);
+u32 txgbe_rd32_epcs(struct txgbe_hw *hw, u32 addr);
+void txgbe_wr32_epcs(struct txgbe_hw *hw, u32 addr, u32 data);
+void txgbe_wr32_ephy(struct txgbe_hw *hw, u32 addr, u32 data);
+u32 rd32_ephy(struct txgbe_hw *hw, u32 addr);
+
+s32 txgbe_upgrade_flash_hostif(struct txgbe_hw *hw, u32 region,
+ const u8 *data, u32 size);
+
+s32 txgbe_set_link_to_kr(struct txgbe_hw *hw, bool autoneg);
+s32 txgbe_set_link_to_kx4(struct txgbe_hw *hw, bool autoneg);
+
+s32 txgbe_set_link_to_kx(struct txgbe_hw *hw,
+ u32 speed,
+ bool autoneg);
+
+
+#endif /* _TXGBE_HW_H_ */
diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe_lib.c b/drivers/net/ethernet/netswift/txgbe/txgbe_lib.c
new file mode 100644
index 0000000000000..bb402e45557eb
--- /dev/null
+++ b/drivers/net/ethernet/netswift/txgbe/txgbe_lib.c
@@ -0,0 +1,959 @@
+/*
+ * WangXun 10 Gigabit PCI Express Linux driver
+ * Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * based on ixgbe_lib.c, Copyright(c) 1999 - 2017 Intel Corporation.
+ * Contact Information:
+ * Linux NICS <linux.nics(a)intel.com>
+ * e1000-devel Mailing List <e1000-devel(a)lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+
+#include "txgbe.h"
+
+/**
+ * txgbe_cache_ring_dcb_vmdq - Descriptor ring to register mapping for VMDq
+ * @adapter: board private structure to initialize
+ *
+ * Cache the descriptor ring offsets for VMDq to the assigned rings. It
+ * will also try to cache the proper offsets if RSS/FCoE are enabled along
+ * with VMDq.
+ *
+ **/
+static bool txgbe_cache_ring_dcb_vmdq(struct txgbe_adapter *adapter)
+{
+ struct txgbe_ring_feature *vmdq = &adapter->ring_feature[RING_F_VMDQ];
+ int i;
+ u16 reg_idx;
+ u8 tcs = netdev_get_num_tc(adapter->netdev);
+
+ /* verify we have DCB enabled before proceeding */
+ if (tcs <= 1)
+ return false;
+
+ /* verify we have VMDq enabled before proceeding */
+ if (!(adapter->flags & TXGBE_FLAG_VMDQ_ENABLED))
+ return false;
+
+ /* start at VMDq register offset for SR-IOV enabled setups */
+ reg_idx = vmdq->offset * __ALIGN_MASK(1, ~vmdq->mask);
+ for (i = 0; i < adapter->num_rx_queues; i++, reg_idx++) {
+ /* If we are greater than indices move to next pool */
+ if ((reg_idx & ~vmdq->mask) >= tcs)
+ reg_idx = __ALIGN_MASK(reg_idx, ~vmdq->mask);
+ adapter->rx_ring[i]->reg_idx = reg_idx;
+ }
+
+ reg_idx = vmdq->offset * __ALIGN_MASK(1, ~vmdq->mask);
+ for (i = 0; i < adapter->num_tx_queues; i++, reg_idx++) {
+ /* If we are greater than indices move to next pool */
+ if ((reg_idx & ~vmdq->mask) >= tcs)
+ reg_idx = __ALIGN_MASK(reg_idx, ~vmdq->mask);
+ adapter->tx_ring[i]->reg_idx = reg_idx;
+ }
+
+ return true;
+}
+
+/* txgbe_get_first_reg_idx - Return first register index associated with ring */
+static void txgbe_get_first_reg_idx(struct txgbe_adapter *adapter, u8 tc,
+ u16 *tx, u16 *rx)
+{
+ struct net_device *dev = adapter->netdev;
+ u8 num_tcs = netdev_get_num_tc(dev);
+
+ *tx = 0;
+ *rx = 0;
+
+ if (num_tcs > 4) {
+ /*
+ * TCs : TC0/1 TC2/3 TC4-7
+ * TxQs/TC: 32 16 8
+ * RxQs/TC: 16 16 16
+ */
+ *rx = tc << 4;
+ if (tc < 3)
+ *tx = tc << 5; /* 0, 32, 64 */
+ else if (tc < 5)
+ *tx = (tc + 2) << 4; /* 80, 96 */
+ else
+ *tx = (tc + 8) << 3; /* 104, 112, 120 */
+ } else {
+ /*
+ * TCs : TC0 TC1 TC2/3
+ * TxQs/TC: 64 32 16
+ * RxQs/TC: 32 32 32
+ */
+ *rx = tc << 5;
+ if (tc < 2)
+ *tx = tc << 6; /* 0, 64 */
+ else
+ *tx = (tc + 4) << 4; /* 96, 112 */
+ }
+}
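+
+/*
+ * Worked example (illustrative only): with 8 TCs configured, tc = 2 yields
+ * rx = 2 << 4 = 32 and tx = 2 << 5 = 64, while tc = 5 yields rx = 80 and
+ * tx = (5 + 8) << 3 = 104, matching the per-TC queue budgets listed in
+ * the comments above.
+ */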
+
+/**
+ * txgbe_cache_ring_dcb - Descriptor ring to register mapping for DCB
+ * @adapter: board private structure to initialize
+ *
+ * Cache the descriptor ring offsets for DCB to the assigned rings.
+ *
+ **/
+static bool txgbe_cache_ring_dcb(struct txgbe_adapter *adapter)
+{
+ int tc, offset, rss_i, i;
+ u16 tx_idx, rx_idx;
+ struct net_device *dev = adapter->netdev;
+ u8 num_tcs = netdev_get_num_tc(dev);
+
+ if (num_tcs <= 1)
+ return false;
+
+ rss_i = adapter->ring_feature[RING_F_RSS].indices;
+
+ for (tc = 0, offset = 0; tc < num_tcs; tc++, offset += rss_i) {
+ txgbe_get_first_reg_idx(adapter, (u8)tc, &tx_idx, &rx_idx);
+ for (i = 0; i < rss_i; i++, tx_idx++, rx_idx++) {
+ adapter->tx_ring[offset + i]->reg_idx = tx_idx;
+ adapter->rx_ring[offset + i]->reg_idx = rx_idx;
+ adapter->tx_ring[offset + i]->dcb_tc = (u8)tc;
+ adapter->rx_ring[offset + i]->dcb_tc = (u8)tc;
+ }
+ }
+
+ return true;
+}
+
+/**
+ * txgbe_cache_ring_vmdq - Descriptor ring to register mapping for VMDq
+ * @adapter: board private structure to initialize
+ *
+ * Cache the descriptor ring offsets for VMDq to the assigned rings. It
+ * will also try to cache the proper offsets if RSS/FCoE/SRIOV are enabled along
+ * with VMDq.
+ *
+ **/
+static bool txgbe_cache_ring_vmdq(struct txgbe_adapter *adapter)
+{
+ struct txgbe_ring_feature *vmdq = &adapter->ring_feature[RING_F_VMDQ];
+ struct txgbe_ring_feature *rss = &adapter->ring_feature[RING_F_RSS];
+ int i;
+ u16 reg_idx;
+
+ /* only proceed if VMDq is enabled */
+ if (!(adapter->flags & TXGBE_FLAG_VMDQ_ENABLED))
+ return false;
+
+ /* start at VMDq register offset for SR-IOV enabled setups */
+ reg_idx = vmdq->offset * __ALIGN_MASK(1, ~vmdq->mask);
+ for (i = 0; i < adapter->num_rx_queues; i++, reg_idx++) {
+ /* If we are greater than indices move to next pool */
+ if ((reg_idx & ~vmdq->mask) >= rss->indices)
+ reg_idx = __ALIGN_MASK(reg_idx, ~vmdq->mask);
+ adapter->rx_ring[i]->reg_idx = reg_idx;
+ }
+
+ reg_idx = vmdq->offset * __ALIGN_MASK(1, ~vmdq->mask);
+ for (i = 0; i < adapter->num_tx_queues; i++, reg_idx++) {
+ /* If we are greater than indices move to next pool */
+ if ((reg_idx & rss->mask) >= rss->indices)
+ reg_idx = __ALIGN_MASK(reg_idx, ~vmdq->mask);
+ adapter->tx_ring[i]->reg_idx = reg_idx;
+ }
+
+ return true;
+}
+
+/**
+ * txgbe_cache_ring_rss - Descriptor ring to register mapping for RSS
+ * @adapter: board private structure to initialize
+ *
+ * Cache the descriptor ring offsets for RSS, ATR, FCoE, and SR-IOV.
+ *
+ **/
+static bool txgbe_cache_ring_rss(struct txgbe_adapter *adapter)
+{
+ u16 i;
+
+ for (i = 0; i < adapter->num_rx_queues; i++)
+ adapter->rx_ring[i]->reg_idx = i;
+
+ for (i = 0; i < adapter->num_tx_queues; i++)
+ adapter->tx_ring[i]->reg_idx = i;
+
+ return true;
+}
+
+/**
+ * txgbe_cache_ring_register - Descriptor ring to register mapping
+ * @adapter: board private structure to initialize
+ *
+ * Once we know the feature-set enabled for the device, we'll cache
+ * the register offset the descriptor ring is assigned to.
+ *
+ * Note, the order of the various feature calls is important. It must start
+ * with the "most" features enabled at the same time, then trickle down to
+ * the least number of features turned on at once.
+ **/
+static void txgbe_cache_ring_register(struct txgbe_adapter *adapter)
+{
+ if (txgbe_cache_ring_dcb_vmdq(adapter))
+ return;
+
+ if (txgbe_cache_ring_dcb(adapter))
+ return;
+
+ if (txgbe_cache_ring_vmdq(adapter))
+ return;
+
+ txgbe_cache_ring_rss(adapter);
+}
+
+#define TXGBE_RSS_64Q_MASK 0x3F
+#define TXGBE_RSS_16Q_MASK 0xF
+#define TXGBE_RSS_8Q_MASK 0x7
+#define TXGBE_RSS_4Q_MASK 0x3
+#define TXGBE_RSS_2Q_MASK 0x1
+#define TXGBE_RSS_DISABLED_MASK 0x0
+
+/**
+ * txgbe_set_dcb_vmdq_queues: Allocate queues for VMDq devices w/ DCB
+ * @adapter: board private structure to initialize
+ *
+ * When VMDq (Virtual Machine Devices queue) is enabled, allocate queues
+ * and VM pools where appropriate. Also assign queues based on DCB
+ * priorities and map accordingly.
+ *
+ **/
+static bool txgbe_set_dcb_vmdq_queues(struct txgbe_adapter *adapter)
+{
+ u16 i;
+ u16 vmdq_i = adapter->ring_feature[RING_F_VMDQ].limit;
+ u16 vmdq_m = 0;
+ u8 tcs = netdev_get_num_tc(adapter->netdev);
+
+ /* verify we have DCB enabled before proceeding */
+ if (tcs <= 1)
+ return false;
+
+ /* verify we have VMDq enabled before proceeding */
+ if (!(adapter->flags & TXGBE_FLAG_VMDQ_ENABLED))
+ return false;
+
+ /* Add starting offset to total pool count */
+ vmdq_i += adapter->ring_feature[RING_F_VMDQ].offset;
+
+ /* 16 pools w/ 8 TC per pool */
+ if (tcs > 4) {
+ vmdq_i = min_t(u16, vmdq_i, 16);
+ vmdq_m = TXGBE_VMDQ_8Q_MASK;
+ /* 32 pools w/ 4 TC per pool */
+ } else {
+ vmdq_i = min_t(u16, vmdq_i, 32);
+ vmdq_m = TXGBE_VMDQ_4Q_MASK;
+ }
+
+ /* remove the starting offset from the pool count */
+ vmdq_i -= adapter->ring_feature[RING_F_VMDQ].offset;
+
+ /* save features for later use */
+ adapter->ring_feature[RING_F_VMDQ].indices = vmdq_i;
+ adapter->ring_feature[RING_F_VMDQ].mask = vmdq_m;
+
+ /*
+ * We do not support DCB, VMDq, and RSS all simultaneously
+ * so we will disable RSS since it is the lowest priority
+ */
+ adapter->ring_feature[RING_F_RSS].indices = 1;
+ adapter->ring_feature[RING_F_RSS].mask = TXGBE_RSS_DISABLED_MASK;
+
+ adapter->queues_per_pool = tcs;
+
+ adapter->num_tx_queues = vmdq_i * tcs;
+ adapter->num_rx_queues = vmdq_i * tcs;
+
+ /* disable ATR as it is not supported when VMDq is enabled */
+ adapter->flags &= ~TXGBE_FLAG_FDIR_HASH_CAPABLE;
+
+ /* configure TC to queue mapping */
+ for (i = 0; i < tcs; i++)
+ netdev_set_tc_queue(adapter->netdev, (u8)i, 1, i);
+
+ return true;
+}
+
+/**
+ * txgbe_set_dcb_queues: Allocate queues for a DCB-enabled device
+ * @adapter: board private structure to initialize
+ *
+ * When DCB (Data Center Bridging) is enabled, allocate queues for
+ * each traffic class. If multiqueue isn't available, then abort DCB
+ * initialization.
+ *
+ * This function handles all combinations of DCB and RSS.
+ *
+ **/
+static bool txgbe_set_dcb_queues(struct txgbe_adapter *adapter)
+{
+ struct net_device *dev = adapter->netdev;
+ struct txgbe_ring_feature *f;
+ u16 rss_i, rss_m, i;
+ u16 tcs;
+
+ /* Map queue offset and counts onto allocated tx queues */
+ tcs = netdev_get_num_tc(dev);
+
+ if (tcs <= 1)
+ return false;
+
+ /* determine the upper limit for our current DCB mode */
+ rss_i = dev->num_tx_queues / tcs;
+
+ if (tcs > 4) {
+ /* 8 TC w/ 8 queues per TC */
+ rss_i = min_t(u16, rss_i, 8);
+ rss_m = TXGBE_RSS_8Q_MASK;
+ } else {
+ /* 4 TC w/ 16 queues per TC */
+ rss_i = min_t(u16, rss_i, 16);
+ rss_m = TXGBE_RSS_16Q_MASK;
+ }
+
+ /* set RSS mask and indices */
+ f = &adapter->ring_feature[RING_F_RSS];
+ rss_i = min_t(u16, rss_i, f->limit);
+ f->indices = rss_i;
+ f->mask = rss_m;
+
+ /* disable ATR as it is not supported when DCB is enabled */
+ adapter->flags &= ~TXGBE_FLAG_FDIR_HASH_CAPABLE;
+
+ for (i = 0; i < tcs; i++)
+ netdev_set_tc_queue(dev, (u8)i, rss_i, rss_i * i);
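+	/* after this loop TC i owns the contiguous queue range
+	 * [i * rss_i, (i + 1) * rss_i)
+	 */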
+
+ adapter->num_tx_queues = rss_i * tcs;
+ adapter->num_rx_queues = rss_i * tcs;
+
+ return true;
+}
+
+/**
+ * txgbe_set_vmdq_queues: Allocate queues for VMDq devices
+ * @adapter: board private structure to initialize
+ *
+ * When VMDq (Virtual Machine Devices queue) is enabled, allocate queues
+ * and VM pools where appropriate. If RSS is available, then also try and
+ * enable RSS and map accordingly.
+ *
+ **/
+static bool txgbe_set_vmdq_queues(struct txgbe_adapter *adapter)
+{
+ u16 vmdq_i = adapter->ring_feature[RING_F_VMDQ].limit;
+ u16 vmdq_m = 0;
+ u16 rss_i = adapter->ring_feature[RING_F_RSS].limit;
+ u16 rss_m = TXGBE_RSS_DISABLED_MASK;
+
+ /* only proceed if VMDq is enabled */
+ if (!(adapter->flags & TXGBE_FLAG_VMDQ_ENABLED))
+ return false;
+ /* Add starting offset to total pool count */
+ vmdq_i += adapter->ring_feature[RING_F_VMDQ].offset;
+
+ /* double check we are limited to maximum pools */
+ vmdq_i = min_t(u16, TXGBE_MAX_VMDQ_INDICES, vmdq_i);
+
+	/* 64 pool mode with 2 queues per pool, or
+	 * 16/32/64 pool mode with 1 queue per pool
+	 */
+ if ((vmdq_i > 32) || (rss_i < 4)) {
+ vmdq_m = TXGBE_VMDQ_2Q_MASK;
+ rss_m = TXGBE_RSS_2Q_MASK;
+ rss_i = min_t(u16, rss_i, 2);
+ /* 32 pool mode with 4 queues per pool */
+ } else {
+ vmdq_m = TXGBE_VMDQ_4Q_MASK;
+ rss_m = TXGBE_RSS_4Q_MASK;
+ rss_i = 4;
+ }
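+
+	/* e.g. 64 pools * 2 queues = 128, or 32 pools * 4 queues = 128 */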
+
+ /* remove the starting offset from the pool count */
+ vmdq_i -= adapter->ring_feature[RING_F_VMDQ].offset;
+
+ /* save features for later use */
+ adapter->ring_feature[RING_F_VMDQ].indices = vmdq_i;
+ adapter->ring_feature[RING_F_VMDQ].mask = vmdq_m;
+
+ /* limit RSS based on user input and save for later use */
+ adapter->ring_feature[RING_F_RSS].indices = rss_i;
+ adapter->ring_feature[RING_F_RSS].mask = rss_m;
+
+ adapter->queues_per_pool = rss_i;
+
+ adapter->num_rx_queues = vmdq_i * rss_i;
+ adapter->num_tx_queues = vmdq_i * rss_i;
+
+ /* disable ATR as it is not supported when VMDq is enabled */
+ adapter->flags &= ~TXGBE_FLAG_FDIR_HASH_CAPABLE;
+
+ return true;
+}
+
+/**
+ * txgbe_set_rss_queues: Allocate queues for RSS
+ * @adapter: board private structure to initialize
+ *
+ * This is our "base" multiqueue mode. RSS (Receive Side Scaling) will try
+ * to allocate one Rx queue per CPU, and if available, one Tx queue per CPU.
+ *
+ **/
+static bool txgbe_set_rss_queues(struct txgbe_adapter *adapter)
+{
+ struct txgbe_ring_feature *f;
+ u16 rss_i;
+
+	/* set mask for 64 queue limit of RSS */
+ f = &adapter->ring_feature[RING_F_RSS];
+ rss_i = f->limit;
+
+ f->indices = rss_i;
+ f->mask = TXGBE_RSS_64Q_MASK;
+
+ /* disable ATR by default, it will be configured below */
+ adapter->flags &= ~TXGBE_FLAG_FDIR_HASH_CAPABLE;
+
+ /*
+ * Use Flow Director in addition to RSS to ensure the best
+ * distribution of flows across cores, even when an FDIR flow
+ * isn't matched.
+ */
+ if (rss_i > 1 && adapter->atr_sample_rate) {
+ f = &adapter->ring_feature[RING_F_FDIR];
+
+ rss_i = f->indices = f->limit;
+
+ if (!(adapter->flags & TXGBE_FLAG_FDIR_PERFECT_CAPABLE))
+ adapter->flags |= TXGBE_FLAG_FDIR_HASH_CAPABLE;
+ }
+
+ adapter->num_rx_queues = rss_i;
+ adapter->num_tx_queues = rss_i;
+
+ return true;
+}
+
+/**
+ * txgbe_set_num_queues: Allocate queues for device, feature dependent
+ * @adapter: board private structure to initialize
+ *
+ * This is the top level queue allocation routine. The order here is very
+ * important, starting with the largest number of features turned on at once,
+ * and ending with the smallest set of features. This way large combinations
+ * can be allocated if they're turned on, and smaller combinations are the
+ * fallthrough conditions.
+ *
+ **/
+static void txgbe_set_num_queues(struct txgbe_adapter *adapter)
+{
+ /* Start with base case */
+ adapter->num_rx_queues = 1;
+ adapter->num_tx_queues = 1;
+ adapter->queues_per_pool = 1;
+
+ if (txgbe_set_dcb_vmdq_queues(adapter))
+ return;
+
+ if (txgbe_set_dcb_queues(adapter))
+ return;
+
+ if (txgbe_set_vmdq_queues(adapter))
+ return;
+
+ txgbe_set_rss_queues(adapter);
+}
+
+/**
+ * txgbe_acquire_msix_vectors - acquire MSI-X vectors
+ * @adapter: board private structure
+ *
+ * Attempts to acquire a suitable range of MSI-X vector interrupts. Will
+ * return a negative error code if unable to acquire MSI-X vectors for any
+ * reason.
+ */
+static int txgbe_acquire_msix_vectors(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ int i, vectors, vector_threshold;
+
+ if (!(adapter->flags & TXGBE_FLAG_MSIX_CAPABLE))
+ return -EOPNOTSUPP;
+
+ /* We start by asking for one vector per queue pair */
+ vectors = max(adapter->num_rx_queues, adapter->num_tx_queues);
+
+ /* It is easy to be greedy for MSI-X vectors. However, it really
+ * doesn't do much good if we have a lot more vectors than CPUs. We'll
+ * be somewhat conservative and only ask for (roughly) the same number
+ * of vectors as there are CPUs.
+ */
+ vectors = min_t(int, vectors, num_online_cpus());
+
+ /* Some vectors are necessary for non-queue interrupts */
+ vectors += NON_Q_VECTORS;
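+
+	/* e.g. 24 queue pairs on an 8-CPU system ask for
+	 * min(24, 8) + NON_Q_VECTORS vectors at this point
+	 */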
+
+ /* Hardware can only support a maximum of hw.mac->max_msix_vectors.
+ * With features such as RSS and VMDq, we can easily surpass the
+ * number of Rx and Tx descriptor queues supported by our device.
+ * Thus, we cap the maximum in the rare cases where the CPU count also
+ * exceeds our vector limit
+ */
+ vectors = min_t(int, vectors, hw->mac.max_msix_vectors);
+
+ /* We want a minimum of two MSI-X vectors for (1) a TxQ[0] + RxQ[0]
+ * handler, and (2) an Other (Link Status Change, etc.) handler.
+ */
+ vector_threshold = MIN_MSIX_COUNT;
+
+ adapter->msix_entries = kcalloc(vectors,
+ sizeof(struct msix_entry),
+ GFP_KERNEL);
+ if (!adapter->msix_entries)
+ return -ENOMEM;
+
+ for (i = 0; i < vectors; i++)
+ adapter->msix_entries[i].entry = i;
+
+ vectors = pci_enable_msix_range(adapter->pdev, adapter->msix_entries,
+ vector_threshold, vectors);
+ if (vectors < 0) {
+		/* A negative count of allocated vectors indicates an error in
+		 * acquiring within the specified range of MSI-X vectors
+		 */
+ e_dev_warn("Failed to allocate MSI-X interrupts. Err: %d\n",
+ vectors);
+
+ adapter->flags &= ~TXGBE_FLAG_MSIX_ENABLED;
+ kfree(adapter->msix_entries);
+ adapter->msix_entries = NULL;
+
+ return vectors;
+ }
+
+ /* we successfully allocated some number of vectors within our
+ * requested range.
+ */
+ adapter->flags |= TXGBE_FLAG_MSIX_ENABLED;
+
+ /* Adjust for only the vectors we'll use, which is minimum
+ * of max_q_vectors, or the number of vectors we were allocated.
+ */
+ vectors -= NON_Q_VECTORS;
+ adapter->num_q_vectors = min_t(int, vectors, adapter->max_q_vectors);
+
+ return 0;
+}
+
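+/**
+ * txgbe_add_ring - push a ring onto a q_vector ring container list
+ * @ring: ring to add
+ * @head: Tx or Rx ring container the ring is added to
+ *
+ * Rings within a container form a singly linked list; the newly
+ * added ring becomes the head of the list.
+ **/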
+static void txgbe_add_ring(struct txgbe_ring *ring,
+ struct txgbe_ring_container *head)
+{
+ ring->next = head->ring;
+ head->ring = ring;
+ head->count++;
+}
+
+/**
+ * txgbe_alloc_q_vector - Allocate memory for a single interrupt vector
+ * @adapter: board private structure to initialize
+ * @v_count: q_vectors allocated on adapter, used for ring interleaving
+ * @v_idx: index of vector in adapter struct
+ * @txr_count: total number of Tx rings to allocate
+ * @txr_idx: index of first Tx ring to allocate
+ * @rxr_count: total number of Rx rings to allocate
+ * @rxr_idx: index of first Rx ring to allocate
+ *
+ * We allocate one q_vector. If allocation fails we return -ENOMEM.
+ **/
+static int txgbe_alloc_q_vector(struct txgbe_adapter *adapter,
+ unsigned int v_count, unsigned int v_idx,
+ unsigned int txr_count, unsigned int txr_idx,
+ unsigned int rxr_count, unsigned int rxr_idx)
+{
+ struct txgbe_q_vector *q_vector;
+ struct txgbe_ring *ring;
+ int node = -1;
+ int cpu = -1;
+ u8 tcs = netdev_get_num_tc(adapter->netdev);
+ int ring_count, size;
+
+ /* note this will allocate space for the ring structure as well! */
+ ring_count = txr_count + rxr_count;
+ size = sizeof(struct txgbe_q_vector) +
+ (sizeof(struct txgbe_ring) * ring_count);
+
+ /* customize cpu for Flow Director mapping */
+ if ((tcs <= 1) && !(adapter->flags & TXGBE_FLAG_VMDQ_ENABLED)) {
+ u16 rss_i = adapter->ring_feature[RING_F_RSS].indices;
+ if (rss_i > 1 && adapter->atr_sample_rate) {
+ if (cpu_online(v_idx)) {
+ cpu = v_idx;
+ node = cpu_to_node(cpu);
+ }
+ }
+ }
+
+ /* allocate q_vector and rings */
+ q_vector = kzalloc_node(size, GFP_KERNEL, node);
+ if (!q_vector)
+ q_vector = kzalloc(size, GFP_KERNEL);
+ if (!q_vector)
+ return -ENOMEM;
+
+ /* setup affinity mask and node */
+ if (cpu != -1)
+ cpumask_set_cpu(cpu, &q_vector->affinity_mask);
+ q_vector->numa_node = node;
+
+ /* initialize CPU for DCA */
+ q_vector->cpu = -1;
+
+ /* initialize NAPI */
+ netif_napi_add(adapter->netdev, &q_vector->napi,
+ txgbe_poll, 64);
+
+ /* tie q_vector and adapter together */
+ adapter->q_vector[v_idx] = q_vector;
+ q_vector->adapter = adapter;
+ q_vector->v_idx = v_idx;
+
+ /* initialize work limits */
+ q_vector->tx.work_limit = adapter->tx_work_limit;
+ q_vector->rx.work_limit = adapter->rx_work_limit;
+
+ /* initialize pointer to rings */
+ ring = q_vector->ring;
+
+	/* initialize ITR */
+ if (txr_count && !rxr_count) {
+ /* tx only vector */
+ if (adapter->tx_itr_setting == 1)
+ q_vector->itr = TXGBE_12K_ITR;
+ else
+ q_vector->itr = adapter->tx_itr_setting;
+ } else {
+ /* rx or rx/tx vector */
+ if (adapter->rx_itr_setting == 1)
+ q_vector->itr = TXGBE_20K_ITR;
+ else
+ q_vector->itr = adapter->rx_itr_setting;
+ }
+
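+	/* rings are interleaved across vectors: with v_count vectors in
+	 * total, vector v_idx owns ring indices v_idx, v_idx + v_count,
+	 * v_idx + 2 * v_count, and so on
+	 */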
+ while (txr_count) {
+ /* assign generic ring traits */
+ ring->dev = pci_dev_to_dev(adapter->pdev);
+ ring->netdev = adapter->netdev;
+
+ /* configure backlink on ring */
+ ring->q_vector = q_vector;
+
+ /* update q_vector Tx values */
+ txgbe_add_ring(ring, &q_vector->tx);
+
+ /* apply Tx specific ring traits */
+ ring->count = adapter->tx_ring_count;
+ if (adapter->num_vmdqs > 1)
+ ring->queue_index =
+ txr_idx % adapter->queues_per_pool;
+ else
+ ring->queue_index = txr_idx;
+
+ /* assign ring to adapter */
+ adapter->tx_ring[txr_idx] = ring;
+
+ /* update count and index */
+ txr_count--;
+ txr_idx += v_count;
+
+ /* push pointer to next ring */
+ ring++;
+ }
+
+ while (rxr_count) {
+ /* assign generic ring traits */
+ ring->dev = pci_dev_to_dev(adapter->pdev);
+ ring->netdev = adapter->netdev;
+
+ /* configure backlink on ring */
+ ring->q_vector = q_vector;
+
+ /* update q_vector Rx values */
+ txgbe_add_ring(ring, &q_vector->rx);
+
+ /* apply Rx specific ring traits */
+ ring->count = adapter->rx_ring_count;
+ if (adapter->num_vmdqs > 1)
+ ring->queue_index =
+ rxr_idx % adapter->queues_per_pool;
+ else
+ ring->queue_index = rxr_idx;
+
+ /* assign ring to adapter */
+ adapter->rx_ring[rxr_idx] = ring;
+
+ /* update count and index */
+ rxr_count--;
+ rxr_idx += v_count;
+
+ /* push pointer to next ring */
+ ring++;
+ }
+
+ return 0;
+}
+
+/**
+ * txgbe_free_q_vector - Free memory allocated for specific interrupt vector
+ * @adapter: board private structure to initialize
+ * @v_idx: Index of vector to be freed
+ *
+ * This function frees the memory allocated to the q_vector. In addition if
+ * NAPI is enabled it will delete any references to the NAPI struct prior
+ * to freeing the q_vector.
+ **/
+static void txgbe_free_q_vector(struct txgbe_adapter *adapter, int v_idx)
+{
+ struct txgbe_q_vector *q_vector = adapter->q_vector[v_idx];
+ struct txgbe_ring *ring;
+
+ txgbe_for_each_ring(ring, q_vector->tx)
+ adapter->tx_ring[ring->queue_index] = NULL;
+
+ txgbe_for_each_ring(ring, q_vector->rx)
+ adapter->rx_ring[ring->queue_index] = NULL;
+
+ adapter->q_vector[v_idx] = NULL;
+ netif_napi_del(&q_vector->napi);
+ kfree_rcu(q_vector, rcu);
+}
+
+/**
+ * txgbe_alloc_q_vectors - Allocate memory for interrupt vectors
+ * @adapter: board private structure to initialize
+ *
+ * We allocate one q_vector per queue interrupt. If allocation fails we
+ * return -ENOMEM.
+ **/
+static int txgbe_alloc_q_vectors(struct txgbe_adapter *adapter)
+{
+ unsigned int q_vectors = adapter->num_q_vectors;
+ unsigned int rxr_remaining = adapter->num_rx_queues;
+ unsigned int txr_remaining = adapter->num_tx_queues;
+ unsigned int rxr_idx = 0, txr_idx = 0, v_idx = 0;
+ int err;
+
+ if (q_vectors >= (rxr_remaining + txr_remaining)) {
+ for (; rxr_remaining; v_idx++) {
+ err = txgbe_alloc_q_vector(adapter, q_vectors, v_idx,
+ 0, 0, 1, rxr_idx);
+ if (err)
+ goto err_out;
+
+ /* update counts and index */
+ rxr_remaining--;
+ rxr_idx++;
+ }
+ }
+
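+	/* spread the remaining rings as evenly as possible over the
+	 * vectors that are left; DIV_ROUND_UP gives earlier vectors the
+	 * larger share when the split is uneven
+	 */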
+ for (; v_idx < q_vectors; v_idx++) {
+ int rqpv = DIV_ROUND_UP(rxr_remaining, q_vectors - v_idx);
+ int tqpv = DIV_ROUND_UP(txr_remaining, q_vectors - v_idx);
+ err = txgbe_alloc_q_vector(adapter, q_vectors, v_idx,
+ tqpv, txr_idx,
+ rqpv, rxr_idx);
+
+ if (err)
+ goto err_out;
+
+ /* update counts and index */
+ rxr_remaining -= rqpv;
+ txr_remaining -= tqpv;
+ rxr_idx++;
+ txr_idx++;
+ }
+
+ return 0;
+
+err_out:
+ adapter->num_tx_queues = 0;
+ adapter->num_rx_queues = 0;
+ adapter->num_q_vectors = 0;
+
+ while (v_idx--)
+ txgbe_free_q_vector(adapter, v_idx);
+
+ return -ENOMEM;
+}
+
+/**
+ * txgbe_free_q_vectors - Free memory allocated for interrupt vectors
+ * @adapter: board private structure to initialize
+ *
+ * This function frees the memory allocated to the q_vectors. In addition if
+ * NAPI is enabled it will delete any references to the NAPI struct prior
+ * to freeing the q_vector.
+ **/
+static void txgbe_free_q_vectors(struct txgbe_adapter *adapter)
+{
+ int v_idx = adapter->num_q_vectors;
+
+ adapter->num_tx_queues = 0;
+ adapter->num_rx_queues = 0;
+ adapter->num_q_vectors = 0;
+
+ while (v_idx--)
+ txgbe_free_q_vector(adapter, v_idx);
+}
+
+void txgbe_reset_interrupt_capability(struct txgbe_adapter *adapter)
+{
+ if (adapter->flags & TXGBE_FLAG_MSIX_ENABLED) {
+ adapter->flags &= ~TXGBE_FLAG_MSIX_ENABLED;
+ pci_disable_msix(adapter->pdev);
+ kfree(adapter->msix_entries);
+ adapter->msix_entries = NULL;
+ } else if (adapter->flags & TXGBE_FLAG_MSI_ENABLED) {
+ adapter->flags &= ~TXGBE_FLAG_MSI_ENABLED;
+ pci_disable_msi(adapter->pdev);
+ }
+}
+
+/**
+ * txgbe_set_interrupt_capability - set MSI-X or MSI if supported
+ * @adapter: board private structure to initialize
+ *
+ * Attempt to configure the interrupts using the best available
+ * capabilities of the hardware and the kernel.
+ **/
+void txgbe_set_interrupt_capability(struct txgbe_adapter *adapter)
+{
+ int err;
+
+ /* We will try to get MSI-X interrupts first */
+ if (!txgbe_acquire_msix_vectors(adapter))
+ return;
+
+ /* At this point, we do not have MSI-X capabilities. We need to
+ * reconfigure or disable various features which require MSI-X
+ * capability.
+ */
+
+ /* Disable DCB unless we only have a single traffic class */
+ if (netdev_get_num_tc(adapter->netdev) > 1) {
+ e_dev_warn("Number of DCB TCs exceeds number of available "
+ "queues. Disabling DCB support.\n");
+ netdev_reset_tc(adapter->netdev);
+ }
+
+ /* Disable VMDq support */
+	e_dev_warn("Disabling VMDq support\n");
+ adapter->flags &= ~TXGBE_FLAG_VMDQ_ENABLED;
+
+ /* Disable RSS */
+ e_dev_warn("Disabling RSS support\n");
+ adapter->ring_feature[RING_F_RSS].limit = 1;
+
+ /* recalculate number of queues now that many features have been
+ * changed or disabled.
+ */
+ txgbe_set_num_queues(adapter);
+ adapter->num_q_vectors = 1;
+
+ if (!(adapter->flags & TXGBE_FLAG_MSI_CAPABLE))
+ return;
+
+ err = pci_enable_msi(adapter->pdev);
+ if (err)
+ e_dev_warn("Failed to allocate MSI interrupt, falling back to "
+ "legacy. Error: %d\n",
+ err);
+ else
+ adapter->flags |= TXGBE_FLAG_MSI_ENABLED;
+}
+
+/**
+ * txgbe_init_interrupt_scheme - Determine proper interrupt scheme
+ * @adapter: board private structure to initialize
+ *
+ * We determine which interrupt scheme to use based on...
+ * - Kernel support (MSI, MSI-X)
+ * - which can be user-defined (via MODULE_PARAM)
+ * - Hardware queue count (num_*_queues)
+ * - defined by miscellaneous hardware support/features (RSS, etc.)
+ **/
+int txgbe_init_interrupt_scheme(struct txgbe_adapter *adapter)
+{
+ int err;
+
+ /* Number of supported queues */
+ txgbe_set_num_queues(adapter);
+
+ /* Set interrupt mode */
+ txgbe_set_interrupt_capability(adapter);
+
+ /* Allocate memory for queues */
+ err = txgbe_alloc_q_vectors(adapter);
+ if (err) {
+ e_err(probe, "Unable to allocate memory for queue vectors\n");
+ txgbe_reset_interrupt_capability(adapter);
+ return err;
+ }
+
+ txgbe_cache_ring_register(adapter);
+
+ set_bit(__TXGBE_DOWN, &adapter->state);
+
+ return 0;
+}
+
+/**
+ * txgbe_clear_interrupt_scheme - Clear the current interrupt scheme settings
+ * @adapter: board private structure to clear interrupt scheme on
+ *
+ * We go through and clear interrupt specific resources and reset the structure
+ * to pre-load conditions
+ **/
+void txgbe_clear_interrupt_scheme(struct txgbe_adapter *adapter)
+{
+ txgbe_free_q_vectors(adapter);
+ txgbe_reset_interrupt_capability(adapter);
+}
+
+void txgbe_tx_ctxtdesc(struct txgbe_ring *tx_ring, u32 vlan_macip_lens,
+ u32 fcoe_sof_eof, u32 type_tucmd, u32 mss_l4len_idx)
+{
+ struct txgbe_tx_context_desc *context_desc;
+ u16 i = tx_ring->next_to_use;
+
+ context_desc = TXGBE_TX_CTXTDESC(tx_ring, i);
+
+ i++;
+ tx_ring->next_to_use = (i < tx_ring->count) ? i : 0;
+
+ /* set bits to identify this as an advanced context descriptor */
+ type_tucmd |= TXGBE_TXD_DTYP_CTXT;
+ context_desc->vlan_macip_lens = cpu_to_le32(vlan_macip_lens);
+ context_desc->seqnum_seed = cpu_to_le32(fcoe_sof_eof);
+ context_desc->type_tucmd_mlhl = cpu_to_le32(type_tucmd);
+ context_desc->mss_l4len_idx = cpu_to_le32(mss_l4len_idx);
+}
diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe_main.c b/drivers/net/ethernet/netswift/txgbe/txgbe_main.c
new file mode 100644
index 0000000000000..a4d8cc260134b
--- /dev/null
+++ b/drivers/net/ethernet/netswift/txgbe/txgbe_main.c
@@ -0,0 +1,8045 @@
+/*
+ * WangXun 10 Gigabit PCI Express Linux driver
+ * Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * based on ixgbe_main.c, Copyright(c) 1999 - 2017 Intel Corporation.
+ * Contact Information:
+ * Linux NICS <linux.nics@intel.com>
+ * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ *
+ * Copyright (c) 2006 - 2007 Myricom, Inc. for some LRO specific code
+ */
+
+#include <linux/types.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/netdevice.h>
+#include <linux/vmalloc.h>
+#include <linux/highmem.h>
+#include <linux/string.h>
+#include <linux/in.h>
+#include <linux/ip.h>
+#include <linux/tcp.h>
+#include <linux/pkt_sched.h>
+#include <linux/ipv6.h>
+#include <net/checksum.h>
+#include <net/ip6_checksum.h>
+#include <linux/if_macvlan.h>
+#include <linux/ethtool.h>
+#include <linux/if_bridge.h>
+#include <net/vxlan.h>
+
+#include "txgbe.h"
+#include "txgbe_hw.h"
+#include "txgbe_phy.h"
+#include "txgbe_bp.h"
+
+char txgbe_driver_name[32] = TXGBE_NAME;
+static const char txgbe_driver_string[] =
+ "WangXun 10 Gigabit PCI Express Network Driver";
+
+#define DRV_HW_PERF
+
+#define FPGA
+
+#define DRIVERIOV
+
+#define BYPASS_TAG
+
+#define RELEASE_TAG
+
+#define DRV_VERSION __stringify(1.1.17oe)
+
+const char txgbe_driver_version[32] = DRV_VERSION;
+static const char txgbe_copyright[] =
+	"Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd";
+static const char txgbe_overheat_msg[] =
+ "Network adapter has been stopped because it has over heated. "
+ "If the problem persists, restart the computer, or "
+ "power off the system and replace the adapter";
+static const char txgbe_underheat_msg[] =
+ "Network adapter has been started again since the temperature "
+ "has been back to normal state";
+
+/* txgbe_pci_tbl - PCI Device ID Table
+ *
+ * Wildcard entries (PCI_ANY_ID) should come last
+ * Last entry must be all 0s
+ *
+ * { Vendor ID, Device ID, SubVendor ID, SubDevice ID,
+ * Class, Class Mask, private data (not used) }
+ */
+static const struct pci_device_id txgbe_pci_tbl[] = {
+ { PCI_VDEVICE(TRUSTNETIC, TXGBE_DEV_ID_SP1000), 0},
+ { PCI_VDEVICE(TRUSTNETIC, TXGBE_DEV_ID_WX1820), 0},
+ /* required last entry */
+ { .device = 0 }
+};
+MODULE_DEVICE_TABLE(pci, txgbe_pci_tbl);
+
+MODULE_AUTHOR("Beijing WangXun Technology Co., Ltd, <linux.nic@trustnetic.com>");
+MODULE_DESCRIPTION("WangXun(R) 10 Gigabit PCI Express Network Driver");
+MODULE_LICENSE("GPL");
+MODULE_VERSION(DRV_VERSION);
+
+#define DEFAULT_DEBUG_LEVEL_SHIFT 3
+
+static struct workqueue_struct *txgbe_wq;
+
+static bool txgbe_is_sfp(struct txgbe_hw *hw);
+static bool txgbe_check_cfg_remove(struct txgbe_hw *hw, struct pci_dev *pdev);
+static void txgbe_clean_rx_ring(struct txgbe_ring *rx_ring);
+static void txgbe_clean_tx_ring(struct txgbe_ring *tx_ring);
+static void txgbe_napi_enable_all(struct txgbe_adapter *adapter);
+static void txgbe_napi_disable_all(struct txgbe_adapter *adapter);
+
+extern txgbe_dptype txgbe_ptype_lookup[256];
+
+static inline txgbe_dptype txgbe_decode_ptype(const u8 ptype)
+{
+ return txgbe_ptype_lookup[ptype];
+}
+
+static inline txgbe_dptype
+decode_rx_desc_ptype(const union txgbe_rx_desc *rx_desc)
+{
+ return txgbe_decode_ptype(TXGBE_RXD_PKTTYPE(rx_desc));
+}
+
+static void txgbe_check_minimum_link(struct txgbe_adapter *adapter,
+ int expected_gts)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ struct pci_dev *pdev;
+
+ /* Some devices are not connected over PCIe and thus do not negotiate
+ * speed. These devices do not have valid bus info, and thus any report
+ * we generate may not be correct.
+ */
+ if (hw->bus.type == txgbe_bus_type_internal)
+ return;
+
+ pdev = adapter->pdev;
+ pcie_print_link_status(pdev);
+}
+
+/**
+ * txgbe_enumerate_functions - Get the number of ports this device has
+ * @adapter: adapter structure
+ *
+ * This function enumerates the physical functions co-located on a single slot,
+ * in order to determine how many ports a device has. This is most useful in
+ * determining the required GT/s of PCIe bandwidth necessary for optimal
+ * performance.
+ **/
+static inline int txgbe_enumerate_functions(struct txgbe_adapter *adapter)
+{
+ struct pci_dev *entry, *pdev = adapter->pdev;
+ int physfns = 0;
+
+ list_for_each_entry(entry, &pdev->bus->devices, bus_list) {
+ /* When the devices on the bus don't all match our device ID,
+ * we can't reliably determine the correct number of
+ * functions. This can occur if a function has been direct
+ * attached to a virtual machine using VT-d, for example. In
+ * this case, simply return -1 to indicate this.
+ */
+ if ((entry->vendor != pdev->vendor) ||
+ (entry->device != pdev->device))
+ return -1;
+
+ physfns++;
+ }
+
+ return physfns;
+}
+
+void txgbe_service_event_schedule(struct txgbe_adapter *adapter)
+{
+ if (!test_bit(__TXGBE_DOWN, &adapter->state) &&
+ !test_bit(__TXGBE_REMOVING, &adapter->state) &&
+ !test_and_set_bit(__TXGBE_SERVICE_SCHED, &adapter->state))
+ queue_work(txgbe_wq, &adapter->service_task);
+}
+
+static void txgbe_service_event_complete(struct txgbe_adapter *adapter)
+{
+ BUG_ON(!test_bit(__TXGBE_SERVICE_SCHED, &adapter->state));
+
+ /* flush memory to make sure state is correct before next watchdog */
+ smp_mb__before_atomic();
+ clear_bit(__TXGBE_SERVICE_SCHED, &adapter->state);
+}
+
+static void txgbe_remove_adapter(struct txgbe_hw *hw)
+{
+ struct txgbe_adapter *adapter = hw->back;
+
+ if (!hw->hw_addr)
+ return;
+ hw->hw_addr = NULL;
+ e_dev_err("Adapter removed\n");
+ if (test_bit(__TXGBE_SERVICE_INITED, &adapter->state))
+ txgbe_service_event_schedule(adapter);
+}
+
+static void txgbe_check_remove(struct txgbe_hw *hw, u32 reg)
+{
+ u32 value;
+
+ /* The following check not only optimizes a bit by not
+ * performing a read on the status register when the
+ * register just read was a status register read that
+ * returned TXGBE_FAILED_READ_REG. It also blocks any
+ * potential recursion.
+ */
+ if (reg == TXGBE_CFG_PORT_ST) {
+ txgbe_remove_adapter(hw);
+ return;
+ }
+ value = rd32(hw, TXGBE_CFG_PORT_ST);
+ if (value == TXGBE_FAILED_READ_REG)
+ txgbe_remove_adapter(hw);
+}
+
+static u32 txgbe_validate_register_read(struct txgbe_hw *hw, u32 reg, bool quiet)
+{
+ int i;
+ u32 value;
+ u8 __iomem *reg_addr;
+ struct txgbe_adapter *adapter = hw->back;
+
+ reg_addr = READ_ONCE(hw->hw_addr);
+ if (TXGBE_REMOVED(reg_addr))
+ return TXGBE_FAILED_READ_REG;
+ for (i = 0; i < TXGBE_DEAD_READ_RETRIES; ++i) {
+ value = txgbe_rd32(reg_addr + reg);
+ if (value != TXGBE_DEAD_READ_REG)
+ break;
+ }
+ if (quiet)
+ return value;
+ if (value == TXGBE_DEAD_READ_REG)
+ e_err(drv, "%s: register %x read unchanged\n", __func__, reg);
+ else
+ e_warn(hw, "%s: register %x read recovered after %d retries\n",
+ __func__, reg, i + 1);
+ return value;
+}
+
+/**
+ * txgbe_read_reg - Read from device register
+ * @hw: hw specific details
+ * @reg: offset of register to read
+ *
+ * Returns : value read or TXGBE_FAILED_READ_REG if removed
+ *
+ * This function is used to read device registers. It checks for device
+ * removal by confirming any read that returns all ones by checking the
+ * status register value for all ones. This function avoids reading from
+ * the hardware if a removal was previously detected in which case it
+ * returns TXGBE_FAILED_READ_REG (all ones).
+ */
+u32 txgbe_read_reg(struct txgbe_hw *hw, u32 reg, bool quiet)
+{
+ u32 value;
+ u8 __iomem *reg_addr;
+
+ reg_addr = READ_ONCE(hw->hw_addr);
+ if (TXGBE_REMOVED(reg_addr))
+ return TXGBE_FAILED_READ_REG;
+ value = txgbe_rd32(reg_addr + reg);
+ if (unlikely(value == TXGBE_FAILED_READ_REG))
+ txgbe_check_remove(hw, reg);
+ if (unlikely(value == TXGBE_DEAD_READ_REG))
+ value = txgbe_validate_register_read(hw, reg, quiet);
+ return value;
+}
+
+static void txgbe_release_hw_control(struct txgbe_adapter *adapter)
+{
+ /* Let firmware take over control of hw */
+ wr32m(&adapter->hw, TXGBE_CFG_PORT_CTL,
+ TXGBE_CFG_PORT_CTL_DRV_LOAD, 0);
+}
+
+static void txgbe_get_hw_control(struct txgbe_adapter *adapter)
+{
+ /* Let firmware know the driver has taken over */
+ wr32m(&adapter->hw, TXGBE_CFG_PORT_CTL,
+ TXGBE_CFG_PORT_CTL_DRV_LOAD, TXGBE_CFG_PORT_CTL_DRV_LOAD);
+}
+
+/**
+ * txgbe_set_ivar - set the IVAR registers, mapping interrupt causes to vectors
+ * @adapter: pointer to adapter struct
+ * @direction: 0 for Rx, 1 for Tx, -1 for other causes
+ * @queue: queue to map the corresponding interrupt to
+ * @msix_vector: the vector to map to the corresponding queue
+ *
+ **/
+static void txgbe_set_ivar(struct txgbe_adapter *adapter, s8 direction,
+ u16 queue, u16 msix_vector)
+{
+ u32 ivar, index;
+ struct txgbe_hw *hw = &adapter->hw;
+
+ if (direction == -1) {
+ /* other causes */
+ msix_vector |= TXGBE_PX_IVAR_ALLOC_VAL;
+ index = 0;
+ ivar = rd32(&adapter->hw, TXGBE_PX_MISC_IVAR);
+ ivar &= ~(0xFF << index);
+ ivar |= (msix_vector << index);
+ wr32(&adapter->hw, TXGBE_PX_MISC_IVAR, ivar);
+ } else {
+ /* tx or rx causes */
+ msix_vector |= TXGBE_PX_IVAR_ALLOC_VAL;
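+		/* each 32-bit IVAR register holds four 8-bit entries; a
+		 * queue pair shares one register, and the byte lane is
+		 * selected by (queue & 1) and the Rx/Tx direction
+		 */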
+ index = ((16 * (queue & 1)) + (8 * direction));
+ ivar = rd32(hw, TXGBE_PX_IVAR(queue >> 1));
+ ivar &= ~(0xFF << index);
+ ivar |= (msix_vector << index);
+ wr32(hw, TXGBE_PX_IVAR(queue >> 1), ivar);
+ }
+}
+
+void txgbe_unmap_and_free_tx_resource(struct txgbe_ring *ring,
+ struct txgbe_tx_buffer *tx_buffer)
+{
+ if (tx_buffer->skb) {
+ dev_kfree_skb_any(tx_buffer->skb);
+ if (dma_unmap_len(tx_buffer, len))
+ dma_unmap_single(ring->dev,
+ dma_unmap_addr(tx_buffer, dma),
+ dma_unmap_len(tx_buffer, len),
+ DMA_TO_DEVICE);
+ } else if (dma_unmap_len(tx_buffer, len)) {
+ dma_unmap_page(ring->dev,
+ dma_unmap_addr(tx_buffer, dma),
+ dma_unmap_len(tx_buffer, len),
+ DMA_TO_DEVICE);
+ }
+ tx_buffer->next_to_watch = NULL;
+ tx_buffer->skb = NULL;
+ dma_unmap_len_set(tx_buffer, len, 0);
+ /* tx_buffer must be completely set up in the transmit path */
+}
+
+static void txgbe_update_xoff_received(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ struct txgbe_hw_stats *hwstats = &adapter->stats;
+ u32 xoff[8] = {0};
+ int tc;
+ int i;
+
+ /* update stats for each tc, only valid with PFC enabled */
+ for (i = 0; i < MAX_TX_PACKET_BUFFERS; i++) {
+ u32 pxoffrxc;
+		wr32m(hw, TXGBE_MMC_CONTROL, TXGBE_MMC_CONTROL_UP, i << 16);
+ pxoffrxc = rd32(hw, TXGBE_MAC_PXOFFRXC);
+ hwstats->pxoffrxc[i] += pxoffrxc;
+ /* Get the TC for given UP */
+ tc = netdev_get_prio_tc_map(adapter->netdev, i);
+ xoff[tc] += pxoffrxc;
+ }
+
+ /* disarm tx queues that have received xoff frames */
+ for (i = 0; i < adapter->num_tx_queues; i++) {
+ struct txgbe_ring *tx_ring = adapter->tx_ring[i];
+
+ tc = tx_ring->dcb_tc;
+ if ((tc <= 7) && (xoff[tc]))
+ clear_bit(__TXGBE_HANG_CHECK_ARMED, &tx_ring->state);
+ }
+}
+
+static u64 txgbe_get_tx_completed(struct txgbe_ring *ring)
+{
+ return ring->stats.packets;
+}
+
+static u64 txgbe_get_tx_pending(struct txgbe_ring *ring)
+{
+ struct txgbe_adapter *adapter;
+ struct txgbe_hw *hw;
+ u32 head, tail;
+
+ if (ring->accel)
+ adapter = ring->accel->adapter;
+ else
+ adapter = ring->q_vector->adapter;
+
+ hw = &adapter->hw;
+ head = rd32(hw, TXGBE_PX_TR_RP(ring->reg_idx));
+ tail = rd32(hw, TXGBE_PX_TR_WP(ring->reg_idx));
+
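+	/* pending descriptors = tail - head, adjusted for the case where
+	 * the write pointer has wrapped around the ring
+	 */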
+ return ((head <= tail) ? tail : tail + ring->count) - head;
+}
+
+static inline bool txgbe_check_tx_hang(struct txgbe_ring *tx_ring)
+{
+ u64 tx_done = txgbe_get_tx_completed(tx_ring);
+ u64 tx_done_old = tx_ring->tx_stats.tx_done_old;
+ u64 tx_pending = txgbe_get_tx_pending(tx_ring);
+
+ clear_check_for_tx_hang(tx_ring);
+
+ /*
+ * Check for a hung queue, but be thorough. This verifies
+ * that a transmit has been completed since the previous
+ * check AND there is at least one packet pending. The
+ * ARMED bit is set to indicate a potential hang. The
+ * bit is cleared if a pause frame is received to remove
+ * false hang detection due to PFC or 802.3x frames. By
+ * requiring this to fail twice we avoid races with
+ * pfc clearing the ARMED bit and conditions where we
+ * run the check_tx_hang logic with a transmit completion
+ * pending but without time to complete it yet.
+ */
+ if (tx_done_old == tx_done && tx_pending)
+ /* make sure it is true for two checks in a row */
+ return test_and_set_bit(__TXGBE_HANG_CHECK_ARMED,
+ &tx_ring->state);
+ /* update completed stats and continue */
+ tx_ring->tx_stats.tx_done_old = tx_done;
+ /* reset the countdown */
+ clear_bit(__TXGBE_HANG_CHECK_ARMED, &tx_ring->state);
+
+ return false;
+}
+
+/**
+ * txgbe_tx_timeout - Respond to a Tx Hang
+ * @netdev: network interface device structure
+ **/
+static void txgbe_tx_timeout(struct net_device *netdev)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+ struct txgbe_hw *hw = &adapter->hw;
+ bool real_tx_hang = false;
+ int i;
+ u16 value = 0;
+ u32 value2 = 0, value3 = 0;
+ u32 head, tail;
+
+ for (i = 0; i < adapter->num_tx_queues; i++) {
+ struct txgbe_ring *tx_ring = adapter->tx_ring[i];
+ if (check_for_tx_hang(tx_ring) && txgbe_check_tx_hang(tx_ring))
+ real_tx_hang = true;
+ }
+
+ pci_read_config_word(adapter->pdev, PCI_VENDOR_ID, &value);
+ ERROR_REPORT1(TXGBE_ERROR_POLLING, "pci vendor id is 0x%x\n", value);
+
+ pci_read_config_word(adapter->pdev, PCI_COMMAND, &value);
+ ERROR_REPORT1(TXGBE_ERROR_POLLING, "pci command reg is 0x%x.\n", value);
+
+ for (i = 0; i < adapter->num_tx_queues; i++) {
+ head = rd32(&adapter->hw, TXGBE_PX_TR_RP(adapter->tx_ring[i]->reg_idx));
+ tail = rd32(&adapter->hw, TXGBE_PX_TR_WP(adapter->tx_ring[i]->reg_idx));
+
+ ERROR_REPORT1(TXGBE_ERROR_POLLING,
+ "tx ring %d next_to_use is %d, next_to_clean is %d\n",
+ i, adapter->tx_ring[i]->next_to_use, adapter->tx_ring[i]->next_to_clean);
+ ERROR_REPORT1(TXGBE_ERROR_POLLING,
+ "tx ring %d hw rp is 0x%x, wp is 0x%x\n", i, head, tail);
+ }
+
+ value2 = rd32(&adapter->hw, TXGBE_PX_IMS(0));
+ value3 = rd32(&adapter->hw, TXGBE_PX_IMS(1));
+ ERROR_REPORT1(TXGBE_ERROR_POLLING,
+ "PX_IMS0 value is 0x%08x, PX_IMS1 value is 0x%08x\n", value2, value3);
+
+ if (value2 || value3) {
+ ERROR_REPORT1(TXGBE_ERROR_POLLING, "clear interrupt mask.\n");
+ wr32(&adapter->hw, TXGBE_PX_ICS(0), value2);
+ wr32(&adapter->hw, TXGBE_PX_IMC(0), value2);
+ wr32(&adapter->hw, TXGBE_PX_ICS(1), value3);
+ wr32(&adapter->hw, TXGBE_PX_IMC(1), value3);
+ }
+
+ if (adapter->hw.bus.lan_id == 0) {
+ ERROR_REPORT1(TXGBE_ERROR_POLLING, "tx timeout. do pcie recovery.\n");
+ adapter->flags2 |= TXGBE_FLAG2_PCIE_NEED_RECOVER;
+ txgbe_service_event_schedule(adapter);
+	} else {
+		wr32(&adapter->hw, TXGBE_MIS_PF_SM, 1);
+	}
+}
+
+#define TX_WAKE_THRESHOLD (DESC_NEEDED * 2)
+
+/**
+ * txgbe_clean_tx_irq - Reclaim resources after transmit completes
+ * @q_vector: structure containing interrupt and ring information
+ * @tx_ring: tx ring to clean
+ **/
+static bool txgbe_clean_tx_irq(struct txgbe_q_vector *q_vector,
+ struct txgbe_ring *tx_ring)
+{
+ struct txgbe_adapter *adapter = q_vector->adapter;
+ struct txgbe_tx_buffer *tx_buffer;
+ union txgbe_tx_desc *tx_desc;
+ unsigned int total_bytes = 0, total_packets = 0;
+ unsigned int budget = q_vector->tx.work_limit;
+ unsigned int i = tx_ring->next_to_clean;
+
+ if (test_bit(__TXGBE_DOWN, &adapter->state))
+ return true;
+
+ tx_buffer = &tx_ring->tx_buffer_info[i];
+ tx_desc = TXGBE_TX_DESC(tx_ring, i);
+ i -= tx_ring->count;
+
+ do {
+ union txgbe_tx_desc *eop_desc = tx_buffer->next_to_watch;
+
+ /* if next_to_watch is not set then there is no work pending */
+ if (!eop_desc)
+ break;
+
+ /* prevent any other reads prior to eop_desc */
+ read_barrier_depends();
+
+ /* if DD is not set pending work has not been completed */
+ if (!(eop_desc->wb.status & cpu_to_le32(TXGBE_TXD_STAT_DD)))
+ break;
+
+ /* clear next_to_watch to prevent false hangs */
+ tx_buffer->next_to_watch = NULL;
+
+ /* update the statistics for this packet */
+ total_bytes += tx_buffer->bytecount;
+ total_packets += tx_buffer->gso_segs;
+
+ /* free the skb */
+ dev_consume_skb_any(tx_buffer->skb);
+
+ /* unmap skb header data */
+ dma_unmap_single(tx_ring->dev,
+ dma_unmap_addr(tx_buffer, dma),
+ dma_unmap_len(tx_buffer, len),
+ DMA_TO_DEVICE);
+
+ /* clear tx_buffer data */
+ tx_buffer->skb = NULL;
+ dma_unmap_len_set(tx_buffer, len, 0);
+
+ /* unmap remaining buffers */
+ while (tx_desc != eop_desc) {
+ tx_buffer++;
+ tx_desc++;
+ i++;
+ if (unlikely(!i)) {
+ i -= tx_ring->count;
+ tx_buffer = tx_ring->tx_buffer_info;
+ tx_desc = TXGBE_TX_DESC(tx_ring, 0);
+ }
+
+ /* unmap any remaining paged data */
+ if (dma_unmap_len(tx_buffer, len)) {
+ dma_unmap_page(tx_ring->dev,
+ dma_unmap_addr(tx_buffer, dma),
+ dma_unmap_len(tx_buffer, len),
+ DMA_TO_DEVICE);
+ dma_unmap_len_set(tx_buffer, len, 0);
+ }
+ }
+
+ /* move us one more past the eop_desc for start of next pkt */
+ tx_buffer++;
+ tx_desc++;
+ i++;
+ if (unlikely(!i)) {
+ i -= tx_ring->count;
+ tx_buffer = tx_ring->tx_buffer_info;
+ tx_desc = TXGBE_TX_DESC(tx_ring, 0);
+ }
+
+ /* issue prefetch for next Tx descriptor */
+ prefetch(tx_desc);
+
+ /* update budget accounting */
+ budget--;
+ } while (likely(budget));
+
+ i += tx_ring->count;
+ tx_ring->next_to_clean = i;
+ u64_stats_update_begin(&tx_ring->syncp);
+ tx_ring->stats.bytes += total_bytes;
+ tx_ring->stats.packets += total_packets;
+ u64_stats_update_end(&tx_ring->syncp);
+ q_vector->tx.total_bytes += total_bytes;
+ q_vector->tx.total_packets += total_packets;
+
+ if (check_for_tx_hang(tx_ring) && txgbe_check_tx_hang(tx_ring)) {
+ /* schedule immediate reset if we believe we hung */
+ struct txgbe_hw *hw = &adapter->hw;
+ u16 value = 0;
+
+ e_err(drv, "Detected Tx Unit Hang\n"
+ " Tx Queue <%d>\n"
+ " TDH, TDT <%x>, <%x>\n"
+ " next_to_use <%x>\n"
+ " next_to_clean <%x>\n"
+ "tx_buffer_info[next_to_clean]\n"
+ " time_stamp <%lx>\n"
+ " jiffies <%lx>\n",
+ tx_ring->queue_index,
+ rd32(hw, TXGBE_PX_TR_RP(tx_ring->reg_idx)),
+ rd32(hw, TXGBE_PX_TR_WP(tx_ring->reg_idx)),
+ tx_ring->next_to_use, i,
+ tx_ring->tx_buffer_info[i].time_stamp, jiffies);
+
+ pci_read_config_word(adapter->pdev, PCI_VENDOR_ID, &value);
+ if (value == TXGBE_FAILED_READ_CFG_WORD) {
+ e_info(hw, "pcie link has been lost.\n");
+ }
+
+ netif_stop_subqueue(tx_ring->netdev, tx_ring->queue_index);
+
+ e_info(probe,
+ "tx hang %d detected on queue %d, resetting adapter\n",
+ adapter->tx_timeout_count + 1, tx_ring->queue_index);
+
+ /* schedule immediate reset if we believe we hung */
+ e_info(hw, "real tx hang. do pcie recovery.\n");
+ adapter->flags2 |= TXGBE_FLAG2_PCIE_NEED_RECOVER;
+ txgbe_service_event_schedule(adapter);
+
+ /* the adapter is about to reset, no point in enabling stuff */
+ return true;
+ }
+
+ netdev_tx_completed_queue(txring_txq(tx_ring),
+ total_packets, total_bytes);
+
+ if (unlikely(total_packets && netif_carrier_ok(tx_ring->netdev) &&
+ (txgbe_desc_unused(tx_ring) >= TX_WAKE_THRESHOLD))) {
+ /* Make sure that anybody stopping the queue after this
+ * sees the new next_to_clean.
+ */
+ smp_mb();
+
+ if (__netif_subqueue_stopped(tx_ring->netdev,
+ tx_ring->queue_index)
+ && !test_bit(__TXGBE_DOWN, &adapter->state)) {
+ netif_wake_subqueue(tx_ring->netdev,
+ tx_ring->queue_index);
+ ++tx_ring->tx_stats.restart_queue;
+ }
+ }
+
+ return !!budget;
+}
+
+#define TXGBE_RSS_L4_TYPES_MASK \
+ ((1ul << TXGBE_RXD_RSSTYPE_IPV4_TCP) | \
+ (1ul << TXGBE_RXD_RSSTYPE_IPV4_UDP) | \
+ (1ul << TXGBE_RXD_RSSTYPE_IPV4_SCTP) | \
+ (1ul << TXGBE_RXD_RSSTYPE_IPV6_TCP) | \
+ (1ul << TXGBE_RXD_RSSTYPE_IPV6_UDP) | \
+ (1ul << TXGBE_RXD_RSSTYPE_IPV6_SCTP))
+
+static inline void txgbe_rx_hash(struct txgbe_ring *ring,
+ union txgbe_rx_desc *rx_desc,
+ struct sk_buff *skb)
+{
+ u16 rss_type;
+
+ if (!(ring->netdev->features & NETIF_F_RXHASH))
+ return;
+
+ rss_type = le16_to_cpu(rx_desc->wb.lower.lo_dword.hs_rss.pkt_info) &
+ TXGBE_RXD_RSSTYPE_MASK;
+
+ if (!rss_type)
+ return;
+
+ skb_set_hash(skb, le32_to_cpu(rx_desc->wb.lower.hi_dword.rss),
+ (TXGBE_RSS_L4_TYPES_MASK & (1ul << rss_type)) ?
+ PKT_HASH_TYPE_L4 : PKT_HASH_TYPE_L3);
+}
+
+/**
+ * txgbe_rx_checksum - indicate in skb if hw indicated a good cksum
+ * @ring: structure containing ring specific data
+ * @rx_desc: current Rx descriptor being processed
+ * @skb: skb currently being received and modified
+ **/
+static inline void txgbe_rx_checksum(struct txgbe_ring *ring,
+ union txgbe_rx_desc *rx_desc,
+ struct sk_buff *skb)
+{
+ txgbe_dptype dptype = decode_rx_desc_ptype(rx_desc);
+
+ skb->ip_summed = CHECKSUM_NONE;
+
+ skb_checksum_none_assert(skb);
+
+ /* Rx csum disabled */
+ if (!(ring->netdev->features & NETIF_F_RXCSUM))
+ return;
+
+ /* if IPv4 header checksum error */
+ if ((txgbe_test_staterr(rx_desc, TXGBE_RXD_STAT_IPCS) &&
+ txgbe_test_staterr(rx_desc, TXGBE_RXD_ERR_IPE)) ||
+ (txgbe_test_staterr(rx_desc, TXGBE_RXD_STAT_OUTERIPCS) &&
+ txgbe_test_staterr(rx_desc, TXGBE_RXD_ERR_OUTERIPER))) {
+ ring->rx_stats.csum_err++;
+ return;
+ }
+
+ /* L4 checksum offload flag must set for the below code to work */
+ if (!txgbe_test_staterr(rx_desc, TXGBE_RXD_STAT_L4CS))
+ return;
+
+	/* likely incorrect csum if IPv6 Dest Header found */
+ if (dptype.prot != TXGBE_DEC_PTYPE_PROT_SCTP && TXGBE_RXD_IPV6EX(rx_desc))
+ return;
+
+ /* if L4 checksum error */
+ if (txgbe_test_staterr(rx_desc, TXGBE_RXD_ERR_TCPE)) {
+ ring->rx_stats.csum_err++;
+ return;
+ }
+ /* If there is an outer header present that might contain a checksum
+ * we need to bump the checksum level by 1 to reflect the fact that
+ * we are indicating we validated the inner checksum.
+ */
+ if (dptype.etype >= TXGBE_DEC_PTYPE_ETYPE_IG) {
+ skb->csum_level = 1;
+		/* FIXME: can skb->csum_level and skb->encapsulation both be set? */
+ skb->encapsulation = 1;
+ }
+
+ /* It must be a TCP or UDP or SCTP packet with a valid checksum */
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+ ring->rx_stats.csum_good_cnt++;
+}
+
+static bool txgbe_alloc_mapped_skb(struct txgbe_ring *rx_ring,
+ struct txgbe_rx_buffer *bi)
+{
+ struct sk_buff *skb = bi->skb;
+ dma_addr_t dma = bi->dma;
+
+ if (unlikely(dma))
+ return true;
+
+ if (likely(!skb)) {
+ skb = netdev_alloc_skb_ip_align(rx_ring->netdev,
+ rx_ring->rx_buf_len);
+ if (unlikely(!skb)) {
+ rx_ring->rx_stats.alloc_rx_buff_failed++;
+ return false;
+ }
+
+ bi->skb = skb;
+
+ }
+
+ dma = dma_map_single(rx_ring->dev, skb->data,
+ rx_ring->rx_buf_len, DMA_FROM_DEVICE);
+
+ /*
+ * if mapping failed free memory back to system since
+ * there isn't much point in holding memory we can't use
+ */
+ if (dma_mapping_error(rx_ring->dev, dma)) {
+ dev_kfree_skb_any(skb);
+ bi->skb = NULL;
+
+ rx_ring->rx_stats.alloc_rx_buff_failed++;
+ return false;
+ }
+
+ bi->dma = dma;
+ return true;
+}
+
+static bool txgbe_alloc_mapped_page(struct txgbe_ring *rx_ring,
+ struct txgbe_rx_buffer *bi)
+{
+ struct page *page = bi->page;
+ dma_addr_t dma;
+
+ /* since we are recycling buffers we should seldom need to alloc */
+ if (likely(page))
+ return true;
+
+ /* alloc new page for storage */
+ page = dev_alloc_pages(txgbe_rx_pg_order(rx_ring));
+ if (unlikely(!page)) {
+ rx_ring->rx_stats.alloc_rx_page_failed++;
+ return false;
+ }
+
+ /* map page for use */
+ dma = dma_map_page(rx_ring->dev, page, 0,
+ txgbe_rx_pg_size(rx_ring), DMA_FROM_DEVICE);
+
+ /*
+ * if mapping failed free memory back to system since
+ * there isn't much point in holding memory we can't use
+ */
+ if (dma_mapping_error(rx_ring->dev, dma)) {
+ __free_pages(page, txgbe_rx_pg_order(rx_ring));
+
+ rx_ring->rx_stats.alloc_rx_page_failed++;
+ return false;
+ }
+
+ bi->page_dma = dma;
+ bi->page = page;
+ bi->page_offset = 0;
+
+ return true;
+}
+
+/**
+ * txgbe_alloc_rx_buffers - Replace used receive buffers
+ * @rx_ring: ring to place buffers on
+ * @cleaned_count: number of buffers to replace
+ **/
+void txgbe_alloc_rx_buffers(struct txgbe_ring *rx_ring, u16 cleaned_count)
+{
+ union txgbe_rx_desc *rx_desc;
+ struct txgbe_rx_buffer *bi;
+ u16 i = rx_ring->next_to_use;
+
+ /* nothing to do */
+ if (!cleaned_count)
+ return;
+
+ rx_desc = TXGBE_RX_DESC(rx_ring, i);
+ bi = &rx_ring->rx_buffer_info[i];
+ i -= rx_ring->count;
+
+ do {
+ if (ring_is_hs_enabled(rx_ring)) {
+ if (!txgbe_alloc_mapped_skb(rx_ring, bi))
+ break;
+ rx_desc->read.hdr_addr = cpu_to_le64(bi->dma);
+ }
+
+ if (!txgbe_alloc_mapped_page(rx_ring, bi))
+ break;
+ rx_desc->read.pkt_addr =
+ cpu_to_le64(bi->page_dma + bi->page_offset);
+
+ rx_desc++;
+ bi++;
+ i++;
+ if (unlikely(!i)) {
+ rx_desc = TXGBE_RX_DESC(rx_ring, 0);
+ bi = rx_ring->rx_buffer_info;
+ i -= rx_ring->count;
+ }
+
+ /* clear the status bits for the next_to_use descriptor */
+ rx_desc->wb.upper.status_error = 0;
+
+ cleaned_count--;
+ } while (cleaned_count);
+
+ i += rx_ring->count;
+
+ if (rx_ring->next_to_use != i) {
+ rx_ring->next_to_use = i;
+ /* update next to alloc since we have filled the ring */
+ rx_ring->next_to_alloc = i;
+
+ /* Force memory writes to complete before letting h/w
+ * know there are new descriptors to fetch. (Only
+ * applicable for weak-ordered memory model archs,
+ * such as IA-64).
+ */
+ wmb();
+ writel(i, rx_ring->tail);
+ }
+}
+
+static inline u16 txgbe_get_hlen(struct txgbe_ring *rx_ring,
+ union txgbe_rx_desc *rx_desc)
+{
+ __le16 hdr_info = rx_desc->wb.lower.lo_dword.hs_rss.hdr_info;
+ u16 hlen = le16_to_cpu(hdr_info) & TXGBE_RXD_HDRBUFLEN_MASK;
+
+ UNREFERENCED_PARAMETER(rx_ring);
+
+ if (hlen > (TXGBE_RX_HDR_SIZE << TXGBE_RXD_HDRBUFLEN_SHIFT))
+ hlen = 0;
+ else
+ hlen >>= TXGBE_RXD_HDRBUFLEN_SHIFT;
+
+ return hlen;
+}
+
+static void txgbe_set_rsc_gso_size(struct txgbe_ring __maybe_unused *ring,
+ struct sk_buff *skb)
+{
+ u16 hdr_len = eth_get_headlen(skb->data, skb_headlen(skb));
+
+ /* set gso_size to avoid messing up TCP MSS */
+ skb_shinfo(skb)->gso_size = DIV_ROUND_UP((skb->len - hdr_len),
+ TXGBE_CB(skb)->append_cnt);
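+	/* e.g. skb->len - hdr_len = 9000 with append_cnt = 6 gives
+	 * gso_size = DIV_ROUND_UP(9000, 6) = 1500
+	 */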
+ skb_shinfo(skb)->gso_type = SKB_GSO_TCPV4;
+}
+
+static void txgbe_update_rsc_stats(struct txgbe_ring *rx_ring,
+ struct sk_buff *skb)
+{
+ /* if append_cnt is 0 then frame is not RSC */
+ if (!TXGBE_CB(skb)->append_cnt)
+ return;
+
+ rx_ring->rx_stats.rsc_count += TXGBE_CB(skb)->append_cnt;
+ rx_ring->rx_stats.rsc_flush++;
+
+ txgbe_set_rsc_gso_size(rx_ring, skb);
+
+ /* gso_size is computed using append_cnt so always clear it last */
+ TXGBE_CB(skb)->append_cnt = 0;
+}
+
+static void txgbe_rx_vlan(struct txgbe_ring *ring,
+ union txgbe_rx_desc *rx_desc,
+ struct sk_buff *skb)
+{
+ u8 idx = 0;
+ u16 ethertype;
+
+ if ((ring->netdev->features & NETIF_F_HW_VLAN_CTAG_RX) &&
+ txgbe_test_staterr(rx_desc, TXGBE_RXD_STAT_VP)) {
+ idx = (le16_to_cpu(rx_desc->wb.lower.lo_dword.hs_rss.pkt_info) &
+ TXGBE_RXD_TPID_MASK) >> TXGBE_RXD_TPID_SHIFT;
+ ethertype = ring->q_vector->adapter->hw.tpid[idx];
+ __vlan_hwaccel_put_tag(skb,
+ htons(ethertype),
+ le16_to_cpu(rx_desc->wb.upper.vlan));
+ }
+}
+
+/**
+ * txgbe_process_skb_fields - Populate skb header fields from Rx descriptor
+ * @rx_ring: rx descriptor ring packet is being transacted on
+ * @rx_desc: pointer to the EOP Rx descriptor
+ * @skb: pointer to current skb being populated
+ *
+ * This function checks the ring, descriptor, and packet information in
+ * order to populate the hash, checksum, VLAN, timestamp, protocol, and
+ * other fields within the skb.
+ **/
+static void txgbe_process_skb_fields(struct txgbe_ring *rx_ring,
+ union txgbe_rx_desc *rx_desc,
+ struct sk_buff *skb)
+{
+ u32 flags = rx_ring->q_vector->adapter->flags;
+
+ txgbe_update_rsc_stats(rx_ring, skb);
+ txgbe_rx_hash(rx_ring, rx_desc, skb);
+ txgbe_rx_checksum(rx_ring, rx_desc, skb);
+
+ if (unlikely(flags & TXGBE_FLAG_RX_HWTSTAMP_ENABLED) &&
+ unlikely(txgbe_test_staterr(rx_desc, TXGBE_RXD_STAT_TS))) {
+ txgbe_ptp_rx_hwtstamp(rx_ring->q_vector->adapter, skb);
+ rx_ring->last_rx_timestamp = jiffies;
+ }
+
+ txgbe_rx_vlan(rx_ring, rx_desc, skb);
+
+ skb_record_rx_queue(skb, rx_ring->queue_index);
+
+ skb->protocol = eth_type_trans(skb, rx_ring->netdev);
+}
+
+static void txgbe_rx_skb(struct txgbe_q_vector *q_vector,
+ struct sk_buff *skb)
+{
+ napi_gro_receive(&q_vector->napi, skb);
+}
+
+/**
+ * txgbe_is_non_eop - process handling of non-EOP buffers
+ * @rx_ring: Rx ring being processed
+ * @rx_desc: Rx descriptor for current buffer
+ * @skb: Current socket buffer containing buffer in progress
+ *
+ * This function updates next to clean. If the buffer is an EOP buffer
+ * this function exits returning false, otherwise it will place the
+ * sk_buff in the next buffer to be chained and return true indicating
+ * that this is in fact a non-EOP buffer.
+ **/
+static bool txgbe_is_non_eop(struct txgbe_ring *rx_ring,
+ union txgbe_rx_desc *rx_desc,
+ struct sk_buff *skb)
+{
+ struct txgbe_rx_buffer *rx_buffer =
+ &rx_ring->rx_buffer_info[rx_ring->next_to_clean];
+ u32 ntc = rx_ring->next_to_clean + 1;
+
+ /* fetch, update, and store next to clean */
+ ntc = (ntc < rx_ring->count) ? ntc : 0;
+ rx_ring->next_to_clean = ntc;
+
+ prefetch(TXGBE_RX_DESC(rx_ring, ntc));
+
+ /* update RSC append count if present */
+ if (ring_is_rsc_enabled(rx_ring)) {
+ __le32 rsc_enabled = rx_desc->wb.lower.lo_dword.data &
+ cpu_to_le32(TXGBE_RXD_RSCCNT_MASK);
+
+ if (unlikely(rsc_enabled)) {
+ u32 rsc_cnt = le32_to_cpu(rsc_enabled);
+
+ rsc_cnt >>= TXGBE_RXD_RSCCNT_SHIFT;
+ TXGBE_CB(skb)->append_cnt += rsc_cnt - 1;
+
+ /* update ntc based on RSC value */
+ ntc = le32_to_cpu(rx_desc->wb.upper.status_error);
+ ntc &= TXGBE_RXD_NEXTP_MASK;
+ ntc >>= TXGBE_RXD_NEXTP_SHIFT;
+ }
+ }
+
+ /* if we are the last buffer then there is nothing else to do */
+ if (likely(txgbe_test_staterr(rx_desc, TXGBE_RXD_STAT_EOP)))
+ return false;
+
+ /* place skb in next buffer to be received */
+ if (ring_is_hs_enabled(rx_ring)) {
+ rx_buffer->skb = rx_ring->rx_buffer_info[ntc].skb;
+ rx_buffer->dma = rx_ring->rx_buffer_info[ntc].dma;
+ rx_ring->rx_buffer_info[ntc].dma = 0;
+ }
+ rx_ring->rx_buffer_info[ntc].skb = skb;
+ rx_ring->rx_stats.non_eop_descs++;
+
+ return true;
+}
+
+/**
+ * txgbe_pull_tail - txgbe specific version of skb_pull_tail
+ * @skb: pointer to current skb being adjusted
+ *
+ * This function is an txgbe specific version of __pskb_pull_tail. The
+ * main difference between this version and the original function is that
+ * this function can make several assumptions about the state of things
+ * that allow for significant optimizations versus the standard function.
+ * As a result we can do things like drop a frag and maintain an accurate
+ * truesize for the skb.
+ */
+static void txgbe_pull_tail(struct sk_buff *skb)
+{
+ skb_frag_t *frag = &skb_shinfo(skb)->frags[0];
+ unsigned char *va;
+ unsigned int pull_len;
+
+ /*
+ * it is valid to use page_address instead of kmap since we are
+ * working with pages allocated out of the lomem pool per
+ * alloc_page(GFP_ATOMIC)
+ */
+ va = skb_frag_address(frag);
+
+ /*
+ * we need the header to contain the greater of either ETH_HLEN or
+ * 60 bytes if the skb->len is less than 60 for skb_pad.
+ */
+ pull_len = eth_get_headlen(va, TXGBE_RX_HDR_SIZE);
+
+ /* align pull length to size of long to optimize memcpy performance */
+ skb_copy_to_linear_data(skb, va, ALIGN(pull_len, sizeof(long)));
+
+ /* update all of the pointers */
+ skb_frag_size_sub(frag, pull_len);
+ frag->page_offset += pull_len;
+ skb->data_len -= pull_len;
+ skb->tail += pull_len;
+}
+
+/**
+ * txgbe_dma_sync_frag - perform DMA sync for first frag of SKB
+ * @rx_ring: rx descriptor ring packet is being transacted on
+ * @skb: pointer to current skb being updated
+ *
+ * This function provides a basic DMA sync up for the first fragment of an
+ * skb. The reason for doing this is that the first fragment cannot be
+ * unmapped until we have reached the end of packet descriptor for a buffer
+ * chain.
+ */
+static void txgbe_dma_sync_frag(struct txgbe_ring *rx_ring,
+ struct sk_buff *skb)
+{
+ if (ring_uses_build_skb(rx_ring)) {
+ unsigned long offset = (unsigned long)(skb->data) & ~PAGE_MASK;
+ dma_sync_single_range_for_cpu(rx_ring->dev,
+ TXGBE_CB(skb)->dma,
+ offset,
+ skb_headlen(skb),
+ DMA_FROM_DEVICE);
+ } else {
+ skb_frag_t *frag = &skb_shinfo(skb)->frags[0];
+ dma_sync_single_range_for_cpu(rx_ring->dev,
+ TXGBE_CB(skb)->dma,
+ frag->page_offset,
+ skb_frag_size(frag),
+ DMA_FROM_DEVICE);
+ }
+
+ /* If the page was released, just unmap it. */
+ if (unlikely(TXGBE_CB(skb)->page_released)) {
+ dma_unmap_page_attrs(rx_ring->dev, TXGBE_CB(skb)->dma,
+ txgbe_rx_pg_size(rx_ring),
+ DMA_FROM_DEVICE,
+ TXGBE_RX_DMA_ATTR);
+ }
+}
+
+/**
+ * txgbe_cleanup_headers - Correct corrupted or empty headers
+ * @rx_ring: rx descriptor ring packet is being transacted on
+ * @rx_desc: pointer to the EOP Rx descriptor
+ * @skb: pointer to current skb being fixed
+ *
+ * Check for corrupted packet headers caused by senders on the local L2
+ * embedded NIC switch not setting up their Tx Descriptors right. These
+ * should be very rare.
+ *
+ * Also address the case where we are pulling data in on pages only
+ * and as such no data is present in the skb header.
+ *
+ * In addition if skb is not at least 60 bytes we need to pad it so that
+ * it is large enough to qualify as a valid Ethernet frame.
+ *
+ * Returns true if an error was encountered and skb was freed.
+ **/
+static bool txgbe_cleanup_headers(struct txgbe_ring *rx_ring,
+ union txgbe_rx_desc *rx_desc,
+ struct sk_buff *skb)
+{
+ struct net_device *netdev = rx_ring->netdev;
+
+ /* verify that the packet does not have any known errors */
+ if (unlikely(txgbe_test_staterr(rx_desc,
+ TXGBE_RXD_ERR_FRAME_ERR_MASK) &&
+ !(netdev->features & NETIF_F_RXALL))) {
+ dev_kfree_skb_any(skb);
+ return true;
+ }
+
+ /* place header in linear portion of buffer */
+ if (skb_is_nonlinear(skb) && !skb_headlen(skb))
+ txgbe_pull_tail(skb);
+
+ /* if eth_skb_pad returns an error the skb was freed */
+ if (eth_skb_pad(skb))
+ return true;
+
+ return false;
+}
+
+/**
+ * txgbe_reuse_rx_page - page flip buffer and store it back on the ring
+ * @rx_ring: rx descriptor ring to store buffers on
+ * @old_buff: donor buffer to have page reused
+ *
+ * Synchronizes page for reuse by the adapter
+ **/
+static void txgbe_reuse_rx_page(struct txgbe_ring *rx_ring,
+ struct txgbe_rx_buffer *old_buff)
+{
+ struct txgbe_rx_buffer *new_buff;
+ u16 nta = rx_ring->next_to_alloc;
+
+ new_buff = &rx_ring->rx_buffer_info[nta];
+
+ /* update, and store next to alloc */
+ nta++;
+ rx_ring->next_to_alloc = (nta < rx_ring->count) ? nta : 0;
+
+ /* transfer page from old buffer to new buffer */
+ new_buff->page_dma = old_buff->page_dma;
+ new_buff->page = old_buff->page;
+ new_buff->page_offset = old_buff->page_offset;
+
+ /* sync the buffer for use by the device */
+ dma_sync_single_range_for_device(rx_ring->dev, new_buff->page_dma,
+ new_buff->page_offset,
+ txgbe_rx_bufsz(rx_ring),
+ DMA_FROM_DEVICE);
+}
+
+static inline bool txgbe_page_is_reserved(struct page *page)
+{
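+	/* pages on a remote NUMA node, or drawn from the pfmemalloc
+	 * emergency reserves, must not be recycled back onto the ring
+	 */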
+ return (page_to_nid(page) != numa_mem_id()) || page_is_pfmemalloc(page);
+}
+
+/**
+ * txgbe_add_rx_frag - Add contents of Rx buffer to sk_buff
+ * @rx_ring: rx descriptor ring to transact packets on
+ * @rx_buffer: buffer containing page to add
+ * @rx_desc: descriptor containing length of buffer written by hardware
+ * @skb: sk_buff to place the data into
+ *
+ * This function will add the data contained in rx_buffer->page to the skb.
+ * This is done either through a direct copy if the data in the buffer is
+ * less than the skb header size, otherwise it will just attach the page as
+ * a frag to the skb.
+ *
+ * The function will then update the page offset if necessary and return
+ * true if the buffer can be reused by the adapter.
+ **/
+static bool txgbe_add_rx_frag(struct txgbe_ring *rx_ring,
+ struct txgbe_rx_buffer *rx_buffer,
+ union txgbe_rx_desc *rx_desc,
+ struct sk_buff *skb)
+{
+ struct page *page = rx_buffer->page;
+ unsigned int size = le16_to_cpu(rx_desc->wb.upper.length);
+#if (PAGE_SIZE < 8192)
+ unsigned int truesize = txgbe_rx_bufsz(rx_ring);
+#else
+ unsigned int truesize = ALIGN(size, L1_CACHE_BYTES);
+ unsigned int last_offset = txgbe_rx_pg_size(rx_ring) -
+ txgbe_rx_bufsz(rx_ring);
+#endif
+
+ if ((size <= TXGBE_RX_HDR_SIZE) && !skb_is_nonlinear(skb) &&
+ !ring_is_hs_enabled(rx_ring)) {
+ unsigned char *va = page_address(page) + rx_buffer->page_offset;
+
+ memcpy(__skb_put(skb, size), va, ALIGN(size, sizeof(long)));
+
+ /* page is not reserved, we can reuse buffer as-is */
+ if (likely(!txgbe_page_is_reserved(page)))
+ return true;
+
+ /* this page cannot be reused so discard it */
+ __free_pages(page, txgbe_rx_pg_order(rx_ring));
+ return false;
+ }
+
+ skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
+ rx_buffer->page_offset, size, truesize);
+
+ /* avoid re-using remote pages */
+ if (unlikely(txgbe_page_is_reserved(page)))
+ return false;
+
+#if (PAGE_SIZE < 8192)
+ /* if we are only owner of page we can reuse it */
+ if (unlikely(page_count(page) != 1))
+ return false;
+
+ /* flip page offset to other buffer */
+ rx_buffer->page_offset ^= truesize;
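+	/* the XOR above toggles between the two halves of the page, e.g.
+	 * 2K buffers in a 4K page alternate between offsets 0 and 2048
+	 */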
+#else
+ /* move offset up to the next cache line */
+ rx_buffer->page_offset += truesize;
+
+ if (rx_buffer->page_offset > last_offset)
+ return false;
+#endif
+
+ /* Even if we own the page, we are not allowed to use atomic_set()
+ * This would break get_page_unless_zero() users.
+ */
+ page_ref_inc(page);
+
+ return true;
+}
+
+static struct sk_buff *txgbe_fetch_rx_buffer(struct txgbe_ring *rx_ring,
+ union txgbe_rx_desc *rx_desc)
+{
+ struct txgbe_rx_buffer *rx_buffer;
+ struct sk_buff *skb;
+ struct page *page;
+
+ rx_buffer = &rx_ring->rx_buffer_info[rx_ring->next_to_clean];
+ page = rx_buffer->page;
+ prefetchw(page);
+
+ skb = rx_buffer->skb;
+
+ if (likely(!skb)) {
+ void *page_addr = page_address(page) +
+ rx_buffer->page_offset;
+
+ /* prefetch first cache line of first page */
+ prefetch(page_addr);
+#if L1_CACHE_BYTES < 128
+ prefetch(page_addr + L1_CACHE_BYTES);
+#endif
+
+ /* allocate a skb to store the frags */
+ skb = netdev_alloc_skb_ip_align(rx_ring->netdev,
+ TXGBE_RX_HDR_SIZE);
+ if (unlikely(!skb)) {
+ rx_ring->rx_stats.alloc_rx_buff_failed++;
+ return NULL;
+ }
+
+ /*
+ * we will be copying header into skb->data in
+ * pskb_may_pull so it is in our interest to prefetch
+ * it now to avoid a possible cache miss
+ */
+ prefetchw(skb->data);
+
+ /*
+ * Delay unmapping of the first packet. It carries the
+ * header information, HW may still access the header
+ * after the writeback. Only unmap it when EOP is
+ * reached
+ */
+ if (likely(txgbe_test_staterr(rx_desc, TXGBE_RXD_STAT_EOP)))
+ goto dma_sync;
+
+ TXGBE_CB(skb)->dma = rx_buffer->page_dma;
+ } else {
+ if (txgbe_test_staterr(rx_desc, TXGBE_RXD_STAT_EOP))
+ txgbe_dma_sync_frag(rx_ring, skb);
+
+dma_sync:
+ /* we are reusing so sync this buffer for CPU use */
+ dma_sync_single_range_for_cpu(rx_ring->dev,
+ rx_buffer->page_dma,
+ rx_buffer->page_offset,
+ txgbe_rx_bufsz(rx_ring),
+ DMA_FROM_DEVICE);
+
+ rx_buffer->skb = NULL;
+ }
+
+ /* pull page into skb */
+ if (txgbe_add_rx_frag(rx_ring, rx_buffer, rx_desc, skb)) {
+ /* hand second half of page back to the ring */
+ txgbe_reuse_rx_page(rx_ring, rx_buffer);
+ } else if (TXGBE_CB(skb)->dma == rx_buffer->page_dma) {
+ /* the page has been released from the ring */
+ TXGBE_CB(skb)->page_released = true;
+ } else {
+ /* we are not reusing the buffer so unmap it */
+ dma_unmap_page(rx_ring->dev, rx_buffer->page_dma,
+ txgbe_rx_pg_size(rx_ring),
+ DMA_FROM_DEVICE);
+ }
+
+ /* clear contents of buffer_info */
+ rx_buffer->page = NULL;
+
+ return skb;
+}
+
+static struct sk_buff *txgbe_fetch_rx_buffer_hs(struct txgbe_ring *rx_ring,
+ union txgbe_rx_desc *rx_desc)
+{
+ struct txgbe_rx_buffer *rx_buffer;
+ struct sk_buff *skb;
+ struct page *page;
+ int hdr_len = 0;
+
+ rx_buffer = &rx_ring->rx_buffer_info[rx_ring->next_to_clean];
+ page = rx_buffer->page;
+ prefetchw(page);
+
+ skb = rx_buffer->skb;
+ rx_buffer->skb = NULL;
+ prefetchw(skb->data);
+
+ if (!skb_is_nonlinear(skb)) {
+ hdr_len = txgbe_get_hlen(rx_ring, rx_desc);
+ if (hdr_len > 0) {
+ __skb_put(skb, hdr_len);
+ TXGBE_CB(skb)->dma_released = true;
+ TXGBE_CB(skb)->dma = rx_buffer->dma;
+ rx_buffer->dma = 0;
+ } else {
+ dma_unmap_single(rx_ring->dev,
+ rx_buffer->dma,
+ rx_ring->rx_buf_len,
+ DMA_FROM_DEVICE);
+ rx_buffer->dma = 0;
+ if (likely(txgbe_test_staterr(rx_desc,
+ TXGBE_RXD_STAT_EOP)))
+ goto dma_sync;
+ TXGBE_CB(skb)->dma = rx_buffer->page_dma;
+ goto add_frag;
+ }
+ }
+
+ if (txgbe_test_staterr(rx_desc, TXGBE_RXD_STAT_EOP)) {
+ if (skb_headlen(skb)) {
+ if (TXGBE_CB(skb)->dma_released == true) {
+ dma_unmap_single(rx_ring->dev,
+ TXGBE_CB(skb)->dma,
+ rx_ring->rx_buf_len,
+ DMA_FROM_DEVICE);
+ TXGBE_CB(skb)->dma = 0;
+ TXGBE_CB(skb)->dma_released = false;
+ }
+ } else
+ txgbe_dma_sync_frag(rx_ring, skb);
+ }
+
+dma_sync:
+ /* we are reusing so sync this buffer for CPU use */
+ dma_sync_single_range_for_cpu(rx_ring->dev,
+ rx_buffer->page_dma,
+ rx_buffer->page_offset,
+ txgbe_rx_bufsz(rx_ring),
+ DMA_FROM_DEVICE);
+add_frag:
+ /* pull page into skb */
+ if (txgbe_add_rx_frag(rx_ring, rx_buffer, rx_desc, skb)) {
+ /* hand second half of page back to the ring */
+ txgbe_reuse_rx_page(rx_ring, rx_buffer);
+ } else if (TXGBE_CB(skb)->dma == rx_buffer->page_dma) {
+ /* the page has been released from the ring */
+ TXGBE_CB(skb)->page_released = true;
+ } else {
+ /* we are not reusing the buffer so unmap it */
+ dma_unmap_page(rx_ring->dev, rx_buffer->page_dma,
+ txgbe_rx_pg_size(rx_ring),
+ DMA_FROM_DEVICE);
+ }
+
+ /* clear contents of buffer_info */
+ rx_buffer->page = NULL;
+
+ return skb;
+}
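+
+/* Header-split summary (informational): a positive hdr_len means the
+ * hardware wrote the header through the separate rx_buffer->dma mapping
+ * into the skb linear area, and any payload is attached from the
+ * half-page buffer by txgbe_add_rx_frag(); hdr_len == 0 means everything
+ * landed in the page buffer, so the header mapping is unmapped at once.
+ */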
+
+/**
+ * txgbe_clean_rx_irq - Clean completed descriptors from Rx ring - bounce buf
+ * @q_vector: structure containing interrupt and ring information
+ * @rx_ring: rx descriptor ring to transact packets on
+ * @budget: Total limit on number of packets to process
+ *
+ * This function provides a "bounce buffer" approach to Rx interrupt
+ * processing. The advantage to this is that on systems that have
+ * expensive overhead for IOMMU access this provides a means of avoiding
+ * it by maintaining the mapping of the page to the system.
+ *
+ * Returns amount of work completed.
+ **/
+static int txgbe_clean_rx_irq(struct txgbe_q_vector *q_vector,
+ struct txgbe_ring *rx_ring,
+ int budget)
+{
+ unsigned int total_rx_bytes = 0, total_rx_packets = 0;
+ u16 cleaned_count = txgbe_desc_unused(rx_ring);
+
+ do {
+ union txgbe_rx_desc *rx_desc;
+ struct sk_buff *skb;
+
+ /* return some buffers to hardware, one at a time is too slow */
+ if (cleaned_count >= TXGBE_RX_BUFFER_WRITE) {
+ txgbe_alloc_rx_buffers(rx_ring, cleaned_count);
+ cleaned_count = 0;
+ }
+
+ rx_desc = TXGBE_RX_DESC(rx_ring, rx_ring->next_to_clean);
+
+ if (!txgbe_test_staterr(rx_desc, TXGBE_RXD_STAT_DD))
+ break;
+
+ /* This memory barrier is needed to keep us from reading
+ * any other fields out of the rx_desc until we know the
+ * descriptor has been written back
+ */
+ dma_rmb();
+
+ /* retrieve a buffer from the ring */
+ if (ring_is_hs_enabled(rx_ring))
+ skb = txgbe_fetch_rx_buffer_hs(rx_ring, rx_desc);
+ else
+ skb = txgbe_fetch_rx_buffer(rx_ring, rx_desc);
+
+ /* exit if we failed to retrieve a buffer */
+ if (!skb)
+ break;
+
+ cleaned_count++;
+
+ /* place incomplete frames back on ring for completion */
+ if (txgbe_is_non_eop(rx_ring, rx_desc, skb))
+ continue;
+
+ /* verify the packet layout is correct */
+ if (txgbe_cleanup_headers(rx_ring, rx_desc, skb))
+ continue;
+
+ /* probably a little skewed due to removing CRC */
+ total_rx_bytes += skb->len;
+
+ /* populate checksum, timestamp, VLAN, and protocol */
+ txgbe_process_skb_fields(rx_ring, rx_desc, skb);
+
+ txgbe_rx_skb(q_vector, skb);
+
+ /* update budget accounting */
+ total_rx_packets++;
+ } while (likely(total_rx_packets < budget));
+
+ u64_stats_update_begin(&rx_ring->syncp);
+ rx_ring->stats.packets += total_rx_packets;
+ rx_ring->stats.bytes += total_rx_bytes;
+ u64_stats_update_end(&rx_ring->syncp);
+ q_vector->rx.total_packets += total_rx_packets;
+ q_vector->rx.total_bytes += total_rx_bytes;
+
+ return total_rx_packets;
+}
+
+/**
+ * txgbe_configure_msix - Configure MSI-X hardware
+ * @adapter: board private structure
+ *
+ * txgbe_configure_msix sets up the hardware to properly generate MSI-X
+ * interrupts.
+ **/
+static void txgbe_configure_msix(struct txgbe_adapter *adapter)
+{
+ u16 v_idx;
+
+ /* Populate MSIX to EITR Select */
+ if (adapter->num_vfs >= 32) {
+ u32 eitrsel = (1 << (adapter->num_vfs - 32)) - 1;
+ wr32(&adapter->hw, TXGBE_PX_ITRSEL, eitrsel);
+ } else {
+ wr32(&adapter->hw, TXGBE_PX_ITRSEL, 0);
+ }
+
+ /*
+ * Populate the IVAR table and set the ITR values to the
+ * corresponding register.
+ */
+ for (v_idx = 0; v_idx < adapter->num_q_vectors; v_idx++) {
+ struct txgbe_q_vector *q_vector = adapter->q_vector[v_idx];
+ struct txgbe_ring *ring;
+
+ txgbe_for_each_ring(ring, q_vector->rx)
+ txgbe_set_ivar(adapter, 0, ring->reg_idx, v_idx);
+
+ txgbe_for_each_ring(ring, q_vector->tx)
+ txgbe_set_ivar(adapter, 1, ring->reg_idx, v_idx);
+
+ txgbe_write_eitr(q_vector);
+ }
+
+ txgbe_set_ivar(adapter, -1, 0, v_idx);
+
+ wr32(&adapter->hw, TXGBE_PX_ITR(v_idx), 1950);
+}
+
+enum latency_range {
+ lowest_latency = 0,
+ low_latency = 1,
+ bulk_latency = 2,
+ latency_invalid = 255
+};
+
+/**
+ * txgbe_update_itr - update the dynamic ITR value based on statistics
+ * @q_vector: structure containing interrupt and ring information
+ * @ring_container: structure containing ring performance data
+ *
+ * Stores a new ITR value based on packets and byte
+ * counts during the last interrupt. The advantage of per interrupt
+ * computation is faster updates and more accurate ITR for the current
+ * traffic pattern. Constants in this function were computed
+ * based on theoretical maximum wire speed and thresholds were set based
+ * on testing data as well as attempting to minimize response time
+ * while increasing bulk throughput.
+ * this functionality is controlled by the InterruptThrottleRate module
+ * parameter (see txgbe_param.c)
+ **/
+static void txgbe_update_itr(struct txgbe_q_vector *q_vector,
+ struct txgbe_ring_container *ring_container)
+{
+ int bytes = ring_container->total_bytes;
+ int packets = ring_container->total_packets;
+ u32 timepassed_us;
+ u64 bytes_perint;
+ u8 itr_setting = ring_container->itr;
+
+ if (packets == 0)
+ return;
+
+ /* simple throttle rate management
+ * 0-10MB/s lowest (100000 ints/s)
+ * 10-20MB/s low (20000 ints/s)
+ * 20-1249MB/s bulk (12000 ints/s)
+ */
+ /* what was last interrupt timeslice? */
+ timepassed_us = q_vector->itr >> 2;
+ if (timepassed_us == 0)
+ return;
+ bytes_perint = bytes / timepassed_us; /* bytes/usec */
+
+ switch (itr_setting) {
+ case lowest_latency:
+ if (bytes_perint > 10)
+ itr_setting = low_latency;
+ break;
+ case low_latency:
+ if (bytes_perint > 20)
+ itr_setting = bulk_latency;
+ else if (bytes_perint <= 10)
+ itr_setting = lowest_latency;
+ break;
+ case bulk_latency:
+ if (bytes_perint <= 20)
+ itr_setting = low_latency;
+ break;
+ }
+
+ /* clear work counters since we have the values we need */
+ ring_container->total_bytes = 0;
+ ring_container->total_packets = 0;
+
+ /* write updated itr to ring container */
+ ring_container->itr = itr_setting;
+}
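+
+/* Worked example (informational): with q_vector->itr == 800 the last
+ * timeslice is 800 >> 2 == 200 usecs; receiving 8000 bytes in that
+ * window gives bytes_perint == 40, which clears the 20 bytes/usec
+ * threshold and promotes a low_latency ring container to bulk_latency.
+ */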
+
+/**
+ * txgbe_write_eitr - write EITR register in hardware specific way
+ * @q_vector: structure containing interrupt and ring information
+ *
+ * This function is made to be called by ethtool and by the driver
+ * when it needs to update EITR registers at runtime. Hardware
+ * specific quirks/differences are taken care of here.
+ */
+void txgbe_write_eitr(struct txgbe_q_vector *q_vector)
+{
+ struct txgbe_adapter *adapter = q_vector->adapter;
+ struct txgbe_hw *hw = &adapter->hw;
+ int v_idx = q_vector->v_idx;
+ u32 itr_reg = q_vector->itr & TXGBE_MAX_EITR;
+
+ itr_reg |= TXGBE_PX_ITR_CNT_WDIS;
+
+ wr32(hw, TXGBE_PX_ITR(v_idx), itr_reg);
+}
+
+static void txgbe_set_itr(struct txgbe_q_vector *q_vector)
+{
+ u16 new_itr = q_vector->itr;
+ u8 current_itr;
+
+ txgbe_update_itr(q_vector, &q_vector->tx);
+ txgbe_update_itr(q_vector, &q_vector->rx);
+
+ current_itr = max(q_vector->rx.itr, q_vector->tx.itr);
+
+ switch (current_itr) {
+ /* counts and packets in update_itr are dependent on these numbers */
+ case lowest_latency:
+ new_itr = TXGBE_100K_ITR;
+ break;
+ case low_latency:
+ new_itr = TXGBE_20K_ITR;
+ break;
+ case bulk_latency:
+ new_itr = TXGBE_12K_ITR;
+ break;
+ default:
+ break;
+ }
+
+ if (new_itr != q_vector->itr) {
+ /* do an exponential smoothing */
+ new_itr = (10 * new_itr * q_vector->itr) /
+ ((9 * new_itr) + q_vector->itr);
+
+ /* save the algorithm value here */
+ q_vector->itr = new_itr;
+
+ txgbe_write_eitr(q_vector);
+ }
+}
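+
+/* Worked example (informational): the smoothing above steps the ITR
+ * gradually, e.g. moving from q_vector->itr == 200 toward a target of
+ * 100 yields 10 * 100 * 200 / (9 * 100 + 200) == 181, roughly a 10%
+ * move per interrupt instead of a hard jump to the new value.
+ */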
+
+/**
+ * txgbe_check_overtemp_subtask - check for over temperature
+ * @adapter: pointer to adapter
+ **/
+static void txgbe_check_overtemp_subtask(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ u32 eicr = adapter->interrupt_event;
+ s32 temp_state;
+
+ if (test_bit(__TXGBE_DOWN, &adapter->state))
+ return;
+ if (!(adapter->flags2 & TXGBE_FLAG2_TEMP_SENSOR_CAPABLE))
+ return;
+ if (!(adapter->flags2 & TXGBE_FLAG2_TEMP_SENSOR_EVENT))
+ return;
+
+ adapter->flags2 &= ~TXGBE_FLAG2_TEMP_SENSOR_EVENT;
+
+ /*
+ * Since the warning interrupt is shared by both ports, we don't
+ * have to check whether it was meant for our port. We may have
+ * missed the interrupt, though, so always check whether the
+ * over-heat status bit is actually set.
+ */
+ if (!(eicr & TXGBE_PX_MISC_IC_OVER_HEAT))
+ return;
+
+ temp_state = TCALL(hw, phy.ops.check_overtemp);
+ if (!temp_state || temp_state == TXGBE_NOT_IMPLEMENTED)
+ return;
+
+ if (temp_state == TXGBE_ERR_UNDERTEMP &&
+ test_bit(__TXGBE_HANGING, &adapter->state)) {
+ e_crit(drv, "%s\n", txgbe_underheat_msg);
+ wr32m(&adapter->hw, TXGBE_RDB_PB_CTL,
+ TXGBE_RDB_PB_CTL_RXEN, TXGBE_RDB_PB_CTL_RXEN);
+ netif_carrier_on(adapter->netdev);
+
+ clear_bit(__TXGBE_HANGING, &adapter->state);
+ } else if (temp_state == TXGBE_ERR_OVERTEMP &&
+ !test_and_set_bit(__TXGBE_HANGING, &adapter->state)) {
+ e_crit(drv, "%s\n", txgbe_overheat_msg);
+ netif_carrier_off(adapter->netdev);
+
+ wr32m(&adapter->hw, TXGBE_RDB_PB_CTL,
+ TXGBE_RDB_PB_CTL_RXEN, 0);
+ }
+
+ adapter->interrupt_event = 0;
+}
+
+static void txgbe_check_overtemp_event(struct txgbe_adapter *adapter, u32 eicr)
+{
+ if (!(adapter->flags2 & TXGBE_FLAG2_TEMP_SENSOR_CAPABLE))
+ return;
+
+ if (!(eicr & TXGBE_PX_MISC_IC_OVER_HEAT))
+ return;
+
+ if (!test_bit(__TXGBE_DOWN, &adapter->state)) {
+ adapter->interrupt_event = eicr;
+ adapter->flags2 |= TXGBE_FLAG2_TEMP_SENSOR_EVENT;
+ txgbe_service_event_schedule(adapter);
+ }
+}
+
+static void txgbe_check_sfp_event(struct txgbe_adapter *adapter, u32 eicr)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ u32 eicr_mask = TXGBE_PX_MISC_IC_GPIO;
+ u32 reg;
+
+ if (eicr & eicr_mask) {
+ if (!test_bit(__TXGBE_DOWN, &adapter->state)) {
+ wr32(hw, TXGBE_GPIO_INTMASK, 0xFF);
+ reg = rd32(hw, TXGBE_GPIO_INTSTATUS);
+ if (reg & TXGBE_GPIO_INTSTATUS_2) {
+ adapter->flags2 |= TXGBE_FLAG2_SFP_NEEDS_RESET;
+ wr32(hw, TXGBE_GPIO_EOI,
+ TXGBE_GPIO_EOI_2);
+ adapter->sfp_poll_time = 0;
+ txgbe_service_event_schedule(adapter);
+ }
+ if (reg & TXGBE_GPIO_INTSTATUS_3) {
+ adapter->flags |= TXGBE_FLAG_NEED_LINK_CONFIG;
+ wr32(hw, TXGBE_GPIO_EOI,
+ TXGBE_GPIO_EOI_3);
+ txgbe_service_event_schedule(adapter);
+ }
+
+ if (reg & TXGBE_GPIO_INTSTATUS_6) {
+ wr32(hw, TXGBE_GPIO_EOI,
+ TXGBE_GPIO_EOI_6);
+ adapter->flags |=
+ TXGBE_FLAG_NEED_LINK_CONFIG;
+ txgbe_service_event_schedule(adapter);
+ }
+ wr32(hw, TXGBE_GPIO_INTMASK, 0x0);
+ }
+ }
+}
+
+static void txgbe_check_lsc(struct txgbe_adapter *adapter)
+{
+ adapter->lsc_int++;
+ adapter->flags |= TXGBE_FLAG_NEED_LINK_UPDATE;
+ adapter->link_check_timeout = jiffies;
+ if (!test_bit(__TXGBE_DOWN, &adapter->state))
+ txgbe_service_event_schedule(adapter);
+}
+
+/**
+ * txgbe_irq_enable - Enable default interrupt generation settings
+ * @adapter: board private structure
+ * @queues: also enable the per-queue interrupts
+ * @flush: flush register writes when done
+ **/
+void txgbe_irq_enable(struct txgbe_adapter *adapter, bool queues, bool flush)
+{
+ u32 mask = 0;
+ struct txgbe_hw *hw = &adapter->hw;
+ u8 device_type = hw->subsystem_id & 0xF0;
+
+ /* enable gpio interrupt */
+ if (device_type != TXGBE_ID_MAC_XAUI &&
+ device_type != TXGBE_ID_MAC_SGMII) {
+ mask |= TXGBE_GPIO_INTEN_2;
+ mask |= TXGBE_GPIO_INTEN_3;
+ mask |= TXGBE_GPIO_INTEN_6;
+ }
+ wr32(&adapter->hw, TXGBE_GPIO_INTEN, mask);
+
+ if (device_type != TXGBE_ID_MAC_XAUI &&
+ device_type != TXGBE_ID_MAC_SGMII) {
+ mask = TXGBE_GPIO_INTTYPE_LEVEL_2 | TXGBE_GPIO_INTTYPE_LEVEL_3 |
+ TXGBE_GPIO_INTTYPE_LEVEL_6;
+ }
+ wr32(&adapter->hw, TXGBE_GPIO_INTTYPE_LEVEL, mask);
+
+ /* enable misc interrupt */
+ mask = TXGBE_PX_MISC_IEN_MASK;
+
+ if (adapter->flags2 & TXGBE_FLAG2_TEMP_SENSOR_CAPABLE)
+ mask |= TXGBE_PX_MISC_IEN_OVER_HEAT;
+
+ if ((adapter->flags & TXGBE_FLAG_FDIR_HASH_CAPABLE) &&
+ !(adapter->flags2 & TXGBE_FLAG2_FDIR_REQUIRES_REINIT))
+ mask |= TXGBE_PX_MISC_IEN_FLOW_DIR;
+
+ mask |= TXGBE_PX_MISC_IEN_TIMESYNC;
+
+ wr32(&adapter->hw, TXGBE_PX_MISC_IEN, mask);
+
+ /* unmask interrupt */
+ txgbe_intr_enable(&adapter->hw, TXGBE_INTR_MISC(adapter));
+ if (queues)
+ txgbe_intr_enable(&adapter->hw, TXGBE_INTR_QALL(adapter));
+
+ /* flush configuration */
+ if (flush)
+ TXGBE_WRITE_FLUSH(&adapter->hw);
+}
+
+static irqreturn_t txgbe_msix_other(int __always_unused irq, void *data)
+{
+ struct txgbe_adapter *adapter = data;
+ struct txgbe_hw *hw = &adapter->hw;
+ u32 eicr;
+ u32 ecc;
+ u32 value = 0;
+ u16 pci_val = 0;
+
+ eicr = txgbe_misc_isb(adapter, TXGBE_ISB_MISC);
+
+ if (BOND_CHECK_LINK_MODE == 1) {
+ if (eicr & (TXGBE_PX_MISC_IC_ETH_LKDN)) {
+ value = rd32(hw, 0x14404);
+ value = value & 0x1;
+ if (value == 0) {
+ adapter->link_up = false;
+ adapter->flags2 |= TXGBE_FLAG2_LINK_DOWN;
+ txgbe_service_event_schedule(adapter);
+ }
+ }
+ } else {
+ if (eicr & (TXGBE_PX_MISC_IC_ETH_LK | TXGBE_PX_MISC_IC_ETH_LKDN))
+ txgbe_check_lsc(adapter);
+ }
+ if (eicr & TXGBE_PX_MISC_IC_ETH_AN) {
+ if (adapter->backplane_an == 1 && (KR_POLLING == 0)) {
+ value = txgbe_rd32_epcs(hw, 0x78002);
+ value = value & 0x4;
+ if (value == 0x4) {
+ txgbe_kr_intr_handle(adapter);
+ adapter->flags2 |= TXGBE_FLAG2_KR_TRAINING;
+ txgbe_service_event_schedule(adapter);
+ }
+ }
+ }
+
+ if (eicr & TXGBE_PX_MISC_IC_PCIE_REQ_ERR) {
+ ERROR_REPORT1(TXGBE_ERROR_POLLING,
+ "lan id %d, PCIe request error found.\n", hw->bus.lan_id);
+
+ pci_read_config_word(adapter->pdev, PCI_VENDOR_ID, &pci_val);
+ ERROR_REPORT1(TXGBE_ERROR_POLLING, "pci vendor id is 0x%x\n", pci_val);
+
+ pci_read_config_word(adapter->pdev, PCI_COMMAND, &pci_val);
+ ERROR_REPORT1(TXGBE_ERROR_POLLING, "pci command reg is 0x%x.\n", pci_val);
+
+ if (hw->bus.lan_id == 0) {
+ adapter->flags2 |= TXGBE_FLAG2_PCIE_NEED_RECOVER;
+ txgbe_service_event_schedule(adapter);
+ } else {
+ wr32(&adapter->hw, TXGBE_MIS_PF_SM, 1);
+ }
+ }
+
+ if (eicr & TXGBE_PX_MISC_IC_INT_ERR) {
+ e_info(link, "Received unrecoverable ECC Err, "
+ "initiating reset.\n");
+ ecc = rd32(hw, TXGBE_MIS_ST);
+ if (((ecc & TXGBE_MIS_ST_LAN0_ECC) && (hw->bus.lan_id == 0)) ||
+ ((ecc & TXGBE_MIS_ST_LAN1_ECC) && (hw->bus.lan_id == 1)))
+ adapter->flags2 |= TXGBE_FLAG2_PF_RESET_REQUESTED;
+
+ txgbe_service_event_schedule(adapter);
+ }
+ if (eicr & TXGBE_PX_MISC_IC_DEV_RST) {
+ adapter->flags2 |= TXGBE_FLAG2_RESET_INTR_RECEIVED;
+ txgbe_service_event_schedule(adapter);
+ }
+ if ((eicr & TXGBE_PX_MISC_IC_STALL) ||
+ (eicr & TXGBE_PX_MISC_IC_ETH_EVENT)) {
+ adapter->flags2 |= TXGBE_FLAG2_PF_RESET_REQUESTED;
+ txgbe_service_event_schedule(adapter);
+ }
+
+ /* Handle Flow Director Full threshold interrupt */
+ if (eicr & TXGBE_PX_MISC_IC_FLOW_DIR) {
+ int reinit_count = 0;
+ int i;
+ for (i = 0; i < adapter->num_tx_queues; i++) {
+ struct txgbe_ring *ring = adapter->tx_ring[i];
+ if (test_and_clear_bit(__TXGBE_TX_FDIR_INIT_DONE,
+ &ring->state))
+ reinit_count++;
+ }
+ if (reinit_count) {
+ /* no more flow director interrupts until after init */
+ wr32m(hw, TXGBE_PX_MISC_IEN,
+ TXGBE_PX_MISC_IEN_FLOW_DIR, 0);
+ adapter->flags2 |=
+ TXGBE_FLAG2_FDIR_REQUIRES_REINIT;
+ txgbe_service_event_schedule(adapter);
+ }
+ }
+
+ txgbe_check_sfp_event(adapter, eicr);
+ txgbe_check_overtemp_event(adapter, eicr);
+
+ if (unlikely(eicr & TXGBE_PX_MISC_IC_TIMESYNC))
+ txgbe_ptp_check_pps_event(adapter);
+
+ /* re-enable the original interrupt state, no lsc, no queues */
+ if (!test_bit(__TXGBE_DOWN, &adapter->state))
+ txgbe_irq_enable(adapter, false, false);
+
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t txgbe_msix_clean_rings(int __always_unused irq, void *data)
+{
+ struct txgbe_q_vector *q_vector = data;
+
+ /* EIAM disabled interrupts (on this vector) for us */
+
+ if (q_vector->rx.ring || q_vector->tx.ring)
+ napi_schedule_irqoff(&q_vector->napi);
+
+ return IRQ_HANDLED;
+}
+
+/**
+ * txgbe_poll - NAPI polling RX/TX cleanup routine
+ * @napi: napi struct with our devices info in it
+ * @budget: amount of work driver is allowed to do this pass, in packets
+ *
+ * This function will clean all queues associated with a q_vector.
+ **/
+int txgbe_poll(struct napi_struct *napi, int budget)
+{
+ struct txgbe_q_vector *q_vector =
+ container_of(napi, struct txgbe_q_vector, napi);
+ struct txgbe_adapter *adapter = q_vector->adapter;
+ struct txgbe_ring *ring;
+ int per_ring_budget;
+ bool clean_complete = true;
+
+ txgbe_for_each_ring(ring, q_vector->tx) {
+ if (!txgbe_clean_tx_irq(q_vector, ring))
+ clean_complete = false;
+ }
+
+ /* Exit if we are called by netpoll */
+ if (budget <= 0)
+ return budget;
+
+ /* attempt to distribute budget to each queue fairly, but don't allow
+ * the budget to go below 1 because we'll exit polling */
+ if (q_vector->rx.count > 1)
+ per_ring_budget = max(budget/q_vector->rx.count, 1);
+ else
+ per_ring_budget = budget;
+
+ txgbe_for_each_ring(ring, q_vector->rx) {
+ int cleaned = txgbe_clean_rx_irq(q_vector, ring,
+ per_ring_budget);
+
+ if (cleaned >= per_ring_budget)
+ clean_complete = false;
+ }
+
+ /* If all work not completed, return budget and keep polling */
+ if (!clean_complete)
+ return budget;
+
+ /* all work done, exit the polling mode */
+ napi_complete(napi);
+ if (adapter->rx_itr_setting == 1)
+ txgbe_set_itr(q_vector);
+ if (!test_bit(__TXGBE_DOWN, &adapter->state))
+ txgbe_intr_enable(&adapter->hw,
+ TXGBE_INTR_Q(q_vector->v_idx));
+
+ return 0;
+}
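+
+/* Budget example (informational): with budget == 64 and three Rx rings
+ * on this vector, per_ring_budget == max(64 / 3, 1) == 21; if any ring
+ * consumes its full share, clean_complete goes false and the whole
+ * budget is returned so NAPI keeps polling the vector.
+ */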
+
+/**
+ * txgbe_request_msix_irqs - Initialize MSI-X interrupts
+ * @adapter: board private structure
+ *
+ * txgbe_request_msix_irqs allocates MSI-X vectors and requests
+ * interrupts from the kernel.
+ **/
+static int txgbe_request_msix_irqs(struct txgbe_adapter *adapter)
+{
+ struct net_device *netdev = adapter->netdev;
+ int vector, err;
+ int ri = 0, ti = 0;
+
+ for (vector = 0; vector < adapter->num_q_vectors; vector++) {
+ struct txgbe_q_vector *q_vector = adapter->q_vector[vector];
+ struct msix_entry *entry = &adapter->msix_entries[vector];
+
+ if (q_vector->tx.ring && q_vector->rx.ring) {
+ snprintf(q_vector->name, sizeof(q_vector->name) - 1,
+ "%s-TxRx-%d", netdev->name, ri++);
+ ti++;
+ } else if (q_vector->rx.ring) {
+ snprintf(q_vector->name, sizeof(q_vector->name) - 1,
+ "%s-rx-%d", netdev->name, ri++);
+ } else if (q_vector->tx.ring) {
+ snprintf(q_vector->name, sizeof(q_vector->name) - 1,
+ "%s-tx-%d", netdev->name, ti++);
+ } else {
+ /* skip this unused q_vector */
+ continue;
+ }
+ err = request_irq(entry->vector, &txgbe_msix_clean_rings, 0,
+ q_vector->name, q_vector);
+ if (err) {
+ e_err(probe, "request_irq failed for MSIX interrupt"
+ " '%s' Error: %d\n", q_vector->name, err);
+ goto free_queue_irqs;
+ }
+
+ /* If Flow Director is enabled, set interrupt affinity */
+ if (adapter->flags & TXGBE_FLAG_FDIR_HASH_CAPABLE) {
+ /* assign the mask for this irq */
+ irq_set_affinity_hint(entry->vector,
+ &q_vector->affinity_mask);
+ }
+ }
+
+ err = request_irq(adapter->msix_entries[vector].vector,
+ txgbe_msix_other, 0, netdev->name, adapter);
+ if (err) {
+ e_err(probe, "request_irq for msix_other failed: %d\n", err);
+ goto free_queue_irqs;
+ }
+
+ return 0;
+
+free_queue_irqs:
+ while (vector) {
+ vector--;
+ irq_set_affinity_hint(adapter->msix_entries[vector].vector,
+ NULL);
+ free_irq(adapter->msix_entries[vector].vector,
+ adapter->q_vector[vector]);
+ }
+ adapter->flags &= ~TXGBE_FLAG_MSIX_ENABLED;
+ pci_disable_msix(adapter->pdev);
+ kfree(adapter->msix_entries);
+ adapter->msix_entries = NULL;
+ return err;
+}
+
+/**
+ * txgbe_intr - legacy mode Interrupt Handler
+ * @irq: interrupt number
+ * @data: pointer to a network interface device structure
+ **/
+static irqreturn_t txgbe_intr(int __always_unused irq, void *data)
+{
+ struct txgbe_adapter *adapter = data;
+ struct txgbe_q_vector *q_vector = adapter->q_vector[0];
+ struct txgbe_hw *hw = &adapter->hw;
+ u32 eicr;
+ u32 eicr_misc;
+ u32 value;
+
+ eicr = txgbe_misc_isb(adapter, TXGBE_ISB_VEC0);
+ if (!eicr) {
+ /*
+ * shared interrupt alert!
+ * the interrupt that we masked before the EICR read.
+ */
+ if (!test_bit(__TXGBE_DOWN, &adapter->state))
+ txgbe_irq_enable(adapter, true, true);
+ return IRQ_NONE; /* Not our interrupt */
+ }
+ adapter->isb_mem[TXGBE_ISB_VEC0] = 0;
+ if (!(adapter->flags & TXGBE_FLAG_MSI_ENABLED))
+ wr32(&(adapter->hw), TXGBE_PX_INTA, 1);
+
+ eicr_misc = txgbe_misc_isb(adapter, TXGBE_ISB_MISC);
+ if (eicr_misc & (TXGBE_PX_MISC_IC_ETH_LK | TXGBE_PX_MISC_IC_ETH_LKDN))
+ txgbe_check_lsc(adapter);
+
+ if (eicr_misc & TXGBE_PX_MISC_IC_ETH_AN) {
+ if (adapter->backplane_an == 1 && (KR_POLLING == 0)) {
+ value = txgbe_rd32_epcs(hw, 0x78002);
+ value = value & 0x4;
+ if (value == 0x4) {
+ txgbe_kr_intr_handle(adapter);
+ adapter->flags2 |= TXGBE_FLAG2_KR_TRAINING;
+ txgbe_service_event_schedule(adapter);
+ }
+ }
+ }
+
+ if (eicr_misc & TXGBE_PX_MISC_IC_INT_ERR) {
+ e_info(link, "Received unrecoverable ECC Err, "
+ "initiating reset.\n");
+ adapter->flags2 |= TXGBE_FLAG2_GLOBAL_RESET_REQUESTED;
+ txgbe_service_event_schedule(adapter);
+ }
+
+ if (eicr_misc & TXGBE_PX_MISC_IC_DEV_RST) {
+ adapter->flags2 |= TXGBE_FLAG2_RESET_INTR_RECEIVED;
+ txgbe_service_event_schedule(adapter);
+ }
+ txgbe_check_sfp_event(adapter, eicr_misc);
+ txgbe_check_overtemp_event(adapter, eicr_misc);
+
+ if (unlikely(eicr_misc & TXGBE_PX_MISC_IC_TIMESYNC))
+ txgbe_ptp_check_pps_event(adapter);
+
+ adapter->isb_mem[TXGBE_ISB_MISC] = 0;
+ /* would disable interrupts here but it is auto disabled */
+ napi_schedule_irqoff(&q_vector->napi);
+
+ /*
+ * re-enable link(maybe) and non-queue interrupts, no flush.
+ * txgbe_poll will re-enable the queue interrupts
+ */
+ if (!test_bit(__TXGBE_DOWN, &adapter->state))
+ txgbe_irq_enable(adapter, false, false);
+
+ return IRQ_HANDLED;
+}
+
+/**
+ * txgbe_request_irq - initialize interrupts
+ * @adapter: board private structure
+ *
+ * Attempts to configure interrupts using the best available
+ * capabilities of the hardware and kernel.
+ **/
+static int txgbe_request_irq(struct txgbe_adapter *adapter)
+{
+ struct net_device *netdev = adapter->netdev;
+ int err;
+
+ if (adapter->flags & TXGBE_FLAG_MSIX_ENABLED)
+ err = txgbe_request_msix_irqs(adapter);
+ else if (adapter->flags & TXGBE_FLAG_MSI_ENABLED)
+ err = request_irq(adapter->pdev->irq, &txgbe_intr, 0,
+ netdev->name, adapter);
+ else
+ err = request_irq(adapter->pdev->irq, &txgbe_intr, IRQF_SHARED,
+ netdev->name, adapter);
+
+ if (err)
+ e_err(probe, "request_irq failed, Error %d\n", err);
+
+ return err;
+}
+
+static void txgbe_free_irq(struct txgbe_adapter *adapter)
+{
+ int vector;
+
+ if (!(adapter->flags & TXGBE_FLAG_MSIX_ENABLED)) {
+ free_irq(adapter->pdev->irq, adapter);
+ return;
+ }
+
+ for (vector = 0; vector < adapter->num_q_vectors; vector++) {
+ struct txgbe_q_vector *q_vector = adapter->q_vector[vector];
+ struct msix_entry *entry = &adapter->msix_entries[vector];
+
+ /* free only the irqs that were actually requested */
+ if (!q_vector->rx.ring && !q_vector->tx.ring)
+ continue;
+
+ /* clear the affinity_mask in the IRQ descriptor */
+ irq_set_affinity_hint(entry->vector, NULL);
+ free_irq(entry->vector, q_vector);
+ }
+
+ free_irq(adapter->msix_entries[vector].vector, adapter);
+}
+
+/**
+ * txgbe_irq_disable - Mask off interrupt generation on the NIC
+ * @adapter: board private structure
+ **/
+void txgbe_irq_disable(struct txgbe_adapter *adapter)
+{
+ wr32(&adapter->hw, TXGBE_PX_MISC_IEN, 0);
+ txgbe_intr_disable(&adapter->hw, TXGBE_INTR_ALL);
+
+ TXGBE_WRITE_FLUSH(&adapter->hw);
+ if (adapter->flags & TXGBE_FLAG_MSIX_ENABLED) {
+ int vector;
+
+ for (vector = 0; vector < adapter->num_q_vectors; vector++)
+ synchronize_irq(adapter->msix_entries[vector].vector);
+
+ synchronize_irq(adapter->msix_entries[vector].vector);
+ } else {
+ synchronize_irq(adapter->pdev->irq);
+ }
+}
+
+/**
+ * txgbe_configure_msi_and_legacy - Initialize PIN (INTA...) and MSI interrupts
+ * @adapter: board private structure
+ **/
+static void txgbe_configure_msi_and_legacy(struct txgbe_adapter *adapter)
+{
+ struct txgbe_q_vector *q_vector = adapter->q_vector[0];
+ struct txgbe_ring *ring;
+
+ txgbe_write_eitr(q_vector);
+
+ txgbe_for_each_ring(ring, q_vector->rx)
+ txgbe_set_ivar(adapter, 0, ring->reg_idx, 0);
+
+ txgbe_for_each_ring(ring, q_vector->tx)
+ txgbe_set_ivar(adapter, 1, ring->reg_idx, 0);
+
+ txgbe_set_ivar(adapter, -1, 0, 1);
+
+ e_info(hw, "Legacy interrupt IVAR setup done\n");
+}
+
+/**
+ * txgbe_configure_tx_ring - Configure Tx ring after Reset
+ * @adapter: board private structure
+ * @ring: structure containing ring specific data
+ *
+ * Configure the Tx descriptor ring after a reset.
+ **/
+void txgbe_configure_tx_ring(struct txgbe_adapter *adapter,
+ struct txgbe_ring *ring)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ u64 tdba = ring->dma;
+ int wait_loop = 10;
+ u32 txdctl = TXGBE_PX_TR_CFG_ENABLE;
+ u8 reg_idx = ring->reg_idx;
+
+ /* disable queue to avoid issues while updating state */
+ wr32(hw, TXGBE_PX_TR_CFG(reg_idx), TXGBE_PX_TR_CFG_SWFLSH);
+ TXGBE_WRITE_FLUSH(hw);
+
+ wr32(hw, TXGBE_PX_TR_BAL(reg_idx), tdba & DMA_BIT_MASK(32));
+ wr32(hw, TXGBE_PX_TR_BAH(reg_idx), tdba >> 32);
+
+ /* reset head and tail pointers */
+ wr32(hw, TXGBE_PX_TR_RP(reg_idx), 0);
+ wr32(hw, TXGBE_PX_TR_WP(reg_idx), 0);
+ ring->tail = adapter->io_addr + TXGBE_PX_TR_WP(reg_idx);
+
+ /* reset ntu and ntc to place SW in sync with hardware */
+ ring->next_to_clean = 0;
+ ring->next_to_use = 0;
+
+ txdctl |= TXGBE_RING_SIZE(ring) << TXGBE_PX_TR_CFG_TR_SIZE_SHIFT;
+
+ /*
+ * set WTHRESH to encourage burst writeback, it should not be set
+ * higher than 1 when:
+ * - ITR is 0 as it could cause false TX hangs
+ * - ITR is set to > 100k int/sec and BQL is enabled
+ *
+ * In order to avoid issues WTHRESH + PTHRESH should always be equal
+ * to or less than the number of on chip descriptors, which is
+ * currently 40.
+ */
+
+ txdctl |= 0x20 << TXGBE_PX_TR_CFG_WTHRESH_SHIFT;
+
+ /* reinitialize flowdirector state */
+ if (adapter->flags & TXGBE_FLAG_FDIR_HASH_CAPABLE) {
+ ring->atr_sample_rate = adapter->atr_sample_rate;
+ ring->atr_count = 0;
+ set_bit(__TXGBE_TX_FDIR_INIT_DONE, &ring->state);
+ } else {
+ ring->atr_sample_rate = 0;
+ }
+
+ /* initialize XPS */
+ if (!test_and_set_bit(__TXGBE_TX_XPS_INIT_DONE, &ring->state)) {
+ struct txgbe_q_vector *q_vector = ring->q_vector;
+
+ if (q_vector)
+ netif_set_xps_queue(adapter->netdev,
+ &q_vector->affinity_mask,
+ ring->queue_index);
+ }
+
+ clear_bit(__TXGBE_HANG_CHECK_ARMED, &ring->state);
+
+ /* enable queue */
+ wr32(hw, TXGBE_PX_TR_CFG(reg_idx), txdctl);
+
+ /* poll to verify queue is enabled */
+ do {
+ msleep(1);
+ txdctl = rd32(hw, TXGBE_PX_TR_CFG(reg_idx));
+ } while (--wait_loop && !(txdctl & TXGBE_PX_TR_CFG_ENABLE));
+ if (!wait_loop)
+ e_err(drv, "Could not enable Tx Queue %d\n", reg_idx);
+}
+
+/**
+ * txgbe_configure_tx - Configure Transmit Unit after Reset
+ * @adapter: board private structure
+ *
+ * Configure the Tx unit of the MAC after a reset.
+ **/
+static void txgbe_configure_tx(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ u32 i;
+
+ /* TDM_CTL.TE must be before Tx queues are enabled */
+ wr32m(hw, TXGBE_TDM_CTL,
+ TXGBE_TDM_CTL_TE, TXGBE_TDM_CTL_TE);
+
+ /* Setup the HW Tx Head and Tail descriptor pointers */
+ for (i = 0; i < adapter->num_tx_queues; i++)
+ txgbe_configure_tx_ring(adapter, adapter->tx_ring[i]);
+
+ wr32m(hw, TXGBE_TSC_BUF_AE, 0x3FF, 0x10);
+ /* enable mac transmitter */
+ wr32m(hw, TXGBE_MAC_TX_CFG,
+ TXGBE_MAC_TX_CFG_TE, TXGBE_MAC_TX_CFG_TE);
+}
+
+static void txgbe_enable_rx_drop(struct txgbe_adapter *adapter,
+ struct txgbe_ring *ring)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ u16 reg_idx = ring->reg_idx;
+
+ u32 srrctl = rd32(hw, TXGBE_PX_RR_CFG(reg_idx));
+
+ srrctl |= TXGBE_PX_RR_CFG_DROP_EN;
+
+ wr32(hw, TXGBE_PX_RR_CFG(reg_idx), srrctl);
+}
+
+static void txgbe_disable_rx_drop(struct txgbe_adapter *adapter,
+ struct txgbe_ring *ring)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ u16 reg_idx = ring->reg_idx;
+
+ u32 srrctl = rd32(hw, TXGBE_PX_RR_CFG(reg_idx));
+
+ srrctl &= ~TXGBE_PX_RR_CFG_DROP_EN;
+
+ wr32(hw, TXGBE_PX_RR_CFG(reg_idx), srrctl);
+}
+
+void txgbe_set_rx_drop_en(struct txgbe_adapter *adapter)
+{
+ int i;
+
+ /*
+ * We should set the drop enable bit if:
+ * SR-IOV is enabled
+ * or
+ * Number of Rx queues > 1 and flow control is disabled
+ *
+ * This allows us to avoid head of line blocking for security
+ * and performance reasons.
+ */
+ if (adapter->num_vfs || (adapter->num_rx_queues > 1 &&
+ !(adapter->hw.fc.current_mode & txgbe_fc_tx_pause))) {
+ for (i = 0; i < adapter->num_rx_queues; i++)
+ txgbe_enable_rx_drop(adapter, adapter->rx_ring[i]);
+ } else {
+ for (i = 0; i < adapter->num_rx_queues; i++)
+ txgbe_disable_rx_drop(adapter, adapter->rx_ring[i]);
+ }
+}
+
+static void txgbe_configure_srrctl(struct txgbe_adapter *adapter,
+ struct txgbe_ring *rx_ring)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ u32 srrctl;
+ u16 reg_idx = rx_ring->reg_idx;
+
+ srrctl = rd32m(hw, TXGBE_PX_RR_CFG(reg_idx),
+ ~(TXGBE_PX_RR_CFG_RR_HDR_SZ |
+ TXGBE_PX_RR_CFG_RR_BUF_SZ |
+ TXGBE_PX_RR_CFG_SPLIT_MODE));
+ /* configure header buffer length, needed for RSC */
+ srrctl |= TXGBE_RX_HDR_SIZE << TXGBE_PX_RR_CFG_BSIZEHDRSIZE_SHIFT;
+
+ /* configure the packet buffer length */
+ srrctl |= txgbe_rx_bufsz(rx_ring) >> TXGBE_PX_RR_CFG_BSIZEPKT_SHIFT;
+ if (ring_is_hs_enabled(rx_ring))
+ srrctl |= TXGBE_PX_RR_CFG_SPLIT_MODE;
+
+ wr32(hw, TXGBE_PX_RR_CFG(reg_idx), srrctl);
+}
+
+/**
+ * txgbe_rss_indir_tbl_entries - return the number of entries in the
+ * RSS indirection table
+ * @adapter: device handle
+ */
+u32 txgbe_rss_indir_tbl_entries(struct txgbe_adapter *adapter)
+{
+ return 128;
+}
+
+/**
+ * txgbe_store_reta - write the RETA table to HW
+ * @adapter: device handle
+ *
+ * Write the RSS redirection table stored in adapter.rss_indir_tbl[] to HW.
+ */
+void txgbe_store_reta(struct txgbe_adapter *adapter)
+{
+ u32 i, reta_entries = txgbe_rss_indir_tbl_entries(adapter);
+ struct txgbe_hw *hw = &adapter->hw;
+ u32 reta = 0;
+ u8 *indir_tbl = adapter->rss_indir_tbl;
+
+ /* Fill out the redirection table as follows:
+ * - 8 bit wide entries containing 4 bit RSS index
+ */
+
+ /* Write redirection table to HW */
+ for (i = 0; i < reta_entries; i++) {
+ reta |= indir_tbl[i] << ((i & 0x3) * 8);
+ if ((i & 3) == 3) {
+ wr32(hw, TXGBE_RDB_RSSTBL(i >> 2), reta);
+ reta = 0;
+ }
+ }
+}
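+
+/* Packing example (informational): entries 0..3 of rss_indir_tbl are
+ * packed byte-wise into one 32-bit register, so {0, 1, 2, 3} becomes
+ * 0x03020100 written to TXGBE_RDB_RSSTBL(0).
+ */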
+
+/* txgbe_store_vfreta() - write the RETA table to HW for devices in
+ * SR-IOV mode; not implemented yet.
+ */
+
+static void txgbe_setup_reta(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ u32 i, j;
+ u32 reta_entries = txgbe_rss_indir_tbl_entries(adapter);
+ u16 rss_i = adapter->ring_feature[RING_F_RSS].indices;
+
+ /*
+ * Program table for at least 4 queues w/ SR-IOV so that VFs can
+ * make full use of any rings they may have. We will use the
+ * PSRTYPE register to control how many rings we use within the PF.
+ */
+ if ((adapter->flags & TXGBE_FLAG_SRIOV_ENABLED) && (rss_i < 2))
+ rss_i = 2;
+
+ /* Fill out hash function seeds */
+ for (i = 0; i < 10; i++)
+ wr32(hw, TXGBE_RDB_RSSRK(i), adapter->rss_key[i]);
+
+ /* Fill out redirection table */
+ memset(adapter->rss_indir_tbl, 0, sizeof(adapter->rss_indir_tbl));
+
+ for (i = 0, j = 0; i < reta_entries; i++, j++) {
+ if (j == rss_i)
+ j = 0;
+
+ adapter->rss_indir_tbl[i] = j;
+ }
+
+ txgbe_store_reta(adapter);
+}
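+
+/* Fill example (informational): with rss_i == 8 the 128-entry table is
+ * filled with the repeating pattern 0, 1, ..., 7, 0, 1, ..., spreading
+ * RSS hash values round-robin across the eight rings.
+ */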
+
+static void txgbe_setup_mrqc(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ u32 rss_field = 0;
+
+ /* VT, DCB and RSS do not coexist at the same time */
+ if (adapter->flags & TXGBE_FLAG_VMDQ_ENABLED &&
+ adapter->flags & TXGBE_FLAG_DCB_ENABLED)
+ return;
+
+ /* Disable indicating checksum in descriptor, enables RSS hash */
+ wr32m(hw, TXGBE_PSR_CTL,
+ TXGBE_PSR_CTL_PCSD, TXGBE_PSR_CTL_PCSD);
+
+ /* Perform hash on these packet types */
+ rss_field = TXGBE_RDB_RA_CTL_RSS_IPV4 |
+ TXGBE_RDB_RA_CTL_RSS_IPV4_TCP |
+ TXGBE_RDB_RA_CTL_RSS_IPV6 |
+ TXGBE_RDB_RA_CTL_RSS_IPV6_TCP;
+
+ if (adapter->flags2 & TXGBE_FLAG2_RSS_FIELD_IPV4_UDP)
+ rss_field |= TXGBE_RDB_RA_CTL_RSS_IPV4_UDP;
+ if (adapter->flags2 & TXGBE_FLAG2_RSS_FIELD_IPV6_UDP)
+ rss_field |= TXGBE_RDB_RA_CTL_RSS_IPV6_UDP;
+
+ netdev_rss_key_fill(adapter->rss_key, sizeof(adapter->rss_key));
+
+ /* TODO: switch to txgbe_setup_vfreta() for SR-IOV once implemented */
+ txgbe_setup_reta(adapter);
+
+ if (adapter->flags2 & TXGBE_FLAG2_RSS_ENABLED)
+ rss_field |= TXGBE_RDB_RA_CTL_RSS_EN;
+
+ wr32(hw, TXGBE_RDB_RA_CTL, rss_field);
+}
+
+/**
+ * txgbe_clear_rscctl - disable RSC for the indicated ring
+ * @adapter: address of board private structure
+ * @ring: structure containing ring specific data
+ **/
+void txgbe_clear_rscctl(struct txgbe_adapter *adapter,
+ struct txgbe_ring *ring)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ u8 reg_idx = ring->reg_idx;
+
+ wr32m(hw, TXGBE_PX_RR_CFG(reg_idx),
+ TXGBE_PX_RR_CFG_RSC, 0);
+
+ clear_ring_rsc_enabled(ring);
+}
+
+/**
+ * txgbe_configure_rscctl - enable RSC for the indicated ring
+ * @adapter: address of board private structure
+ * @ring: structure containing ring specific data
+ **/
+void txgbe_configure_rscctl(struct txgbe_adapter *adapter,
+ struct txgbe_ring *ring)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ u32 rscctrl;
+ u8 reg_idx = ring->reg_idx;
+
+ if (!ring_is_rsc_enabled(ring))
+ return;
+
+ rscctrl = rd32(hw, TXGBE_PX_RR_CFG(reg_idx));
+ rscctrl |= TXGBE_PX_RR_CFG_RSC;
+ /*
+ * we must limit the number of descriptors so that the
+ * total size of max desc * buf_len is not greater
+ * than 65536
+ */
+#if (MAX_SKB_FRAGS >= 16)
+ rscctrl |= TXGBE_PX_RR_CFG_MAX_RSCBUF_16;
+#elif (MAX_SKB_FRAGS >= 8)
+ rscctrl |= TXGBE_PX_RR_CFG_MAX_RSCBUF_8;
+#elif (MAX_SKB_FRAGS >= 4)
+ rscctrl |= TXGBE_PX_RR_CFG_MAX_RSCBUF_4;
+#else
+ rscctrl |= TXGBE_PX_RR_CFG_MAX_RSCBUF_1;
+#endif
+ wr32(hw, TXGBE_PX_RR_CFG(reg_idx), rscctrl);
+}
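+
+/* Sizing note (informational): assuming 4 KB Rx buffers, the
+ * MAX_SKB_FRAGS >= 16 case above caps a coalesced frame at
+ * 16 * 4096 == 65536 bytes, matching the 64 KB limit in the comment.
+ */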
+
+static void txgbe_rx_desc_queue_enable(struct txgbe_adapter *adapter,
+ struct txgbe_ring *ring)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ int wait_loop = TXGBE_MAX_RX_DESC_POLL;
+ u32 rxdctl;
+ u8 reg_idx = ring->reg_idx;
+
+ if (TXGBE_REMOVED(hw->hw_addr))
+ return;
+
+ do {
+ msleep(1);
+ rxdctl = rd32(hw, TXGBE_PX_RR_CFG(reg_idx));
+ } while (--wait_loop && !(rxdctl & TXGBE_PX_RR_CFG_RR_EN));
+
+ if (!wait_loop) {
+ e_err(drv, "RXDCTL.ENABLE on Rx queue %d "
+ "not set within the polling period\n", reg_idx);
+ }
+}
+
+/* disable the specified tx ring/queue */
+void txgbe_disable_tx_queue(struct txgbe_adapter *adapter,
+ struct txgbe_ring *ring)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ int wait_loop = TXGBE_MAX_RX_DESC_POLL;
+ u32 rxdctl, reg_offset, enable_mask;
+ u8 reg_idx = ring->reg_idx;
+
+ if (TXGBE_REMOVED(hw->hw_addr))
+ return;
+
+ reg_offset = TXGBE_PX_TR_CFG(reg_idx);
+ enable_mask = TXGBE_PX_TR_CFG_ENABLE;
+
+ /* write value back with TDCFG.ENABLE bit cleared */
+ wr32m(hw, reg_offset, enable_mask, 0);
+
+ /* the hardware may take up to 100us to really disable the tx queue */
+ do {
+ udelay(10);
+ rxdctl = rd32(hw, reg_offset);
+ } while (--wait_loop && (rxdctl & enable_mask));
+
+ if (!wait_loop) {
+ e_err(drv, "TDCFG.ENABLE on Tx queue %d not cleared within "
+ "the polling period\n", reg_idx);
+ }
+}
+
+/* disable the specified rx ring/queue */
+void txgbe_disable_rx_queue(struct txgbe_adapter *adapter,
+ struct txgbe_ring *ring)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ int wait_loop = TXGBE_MAX_RX_DESC_POLL;
+ u32 rxdctl;
+ u8 reg_idx = ring->reg_idx;
+
+ if (TXGBE_REMOVED(hw->hw_addr))
+ return;
+
+ /* write value back with RXDCTL.ENABLE bit cleared */
+ wr32m(hw, TXGBE_PX_RR_CFG(reg_idx),
+ TXGBE_PX_RR_CFG_RR_EN, 0);
+
+ /* the hardware may take up to 100us to really disable the rx queue */
+ do {
+ udelay(10);
+ rxdctl = rd32(hw, TXGBE_PX_RR_CFG(reg_idx));
+ } while (--wait_loop && (rxdctl & TXGBE_PX_RR_CFG_RR_EN));
+
+ if (!wait_loop) {
+ e_err(drv, "RXDCTL.ENABLE on Rx queue %d not cleared within "
+ "the polling period\n", reg_idx);
+ }
+}
+
+void txgbe_configure_rx_ring(struct txgbe_adapter *adapter,
+ struct txgbe_ring *ring)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ u64 rdba = ring->dma;
+ u32 rxdctl;
+ u16 reg_idx = ring->reg_idx;
+
+ /* disable queue to avoid issues while updating state */
+ rxdctl = rd32(hw, TXGBE_PX_RR_CFG(reg_idx));
+ txgbe_disable_rx_queue(adapter, ring);
+
+ wr32(hw, TXGBE_PX_RR_BAL(reg_idx), rdba & DMA_BIT_MASK(32));
+ wr32(hw, TXGBE_PX_RR_BAH(reg_idx), rdba >> 32);
+
+ if (ring->count == TXGBE_MAX_RXD)
+ rxdctl |= 0 << TXGBE_PX_RR_CFG_RR_SIZE_SHIFT;
+ else
+ rxdctl |= (ring->count / 128) << TXGBE_PX_RR_CFG_RR_SIZE_SHIFT;
+
+ rxdctl |= 0x1 << TXGBE_PX_RR_CFG_RR_THER_SHIFT;
+ wr32(hw, TXGBE_PX_RR_CFG(reg_idx), rxdctl);
+
+ /* reset head and tail pointers */
+ wr32(hw, TXGBE_PX_RR_RP(reg_idx), 0);
+ wr32(hw, TXGBE_PX_RR_WP(reg_idx), 0);
+ ring->tail = adapter->io_addr + TXGBE_PX_RR_WP(reg_idx);
+
+ /* reset ntu and ntc to place SW in sync with hardware */
+ ring->next_to_clean = 0;
+ ring->next_to_use = 0;
+ ring->next_to_alloc = 0;
+
+ txgbe_configure_srrctl(adapter, ring);
+ /* In ESX, RSCCTL configuration is done on demand */
+ txgbe_configure_rscctl(adapter, ring);
+
+ /* enable receive descriptor ring */
+ wr32m(hw, TXGBE_PX_RR_CFG(reg_idx),
+ TXGBE_PX_RR_CFG_RR_EN, TXGBE_PX_RR_CFG_RR_EN);
+
+ txgbe_rx_desc_queue_enable(adapter, ring);
+ txgbe_alloc_rx_buffers(ring, txgbe_desc_unused(ring));
+}
+
+static void txgbe_setup_psrtype(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ int rss_i = adapter->ring_feature[RING_F_RSS].indices;
+ int pool;
+
+ /* PSRTYPE must be initialized in adapters */
+ u32 psrtype = TXGBE_RDB_PL_CFG_L4HDR |
+ TXGBE_RDB_PL_CFG_L3HDR |
+ TXGBE_RDB_PL_CFG_L2HDR |
+ TXGBE_RDB_PL_CFG_TUN_OUTER_L2HDR |
+ TXGBE_RDB_PL_CFG_TUN_TUNHDR;
+
+ if (rss_i > 3)
+ psrtype |= 2 << 29;
+ else if (rss_i > 1)
+ psrtype |= 1 << 29;
+
+ for_each_set_bit(pool, &adapter->fwd_bitmask, TXGBE_MAX_MACVLANS)
+ wr32(hw, TXGBE_RDB_PL_CFG(VMDQ_P(pool)), psrtype);
+}
+
+/**
+ * txgbe_configure_bridge_mode - common settings for configuring bridge mode
+ * @adapter: the private structure
+ *
+ * This function's purpose is to remove code duplication and configure the
+ * settings required to switch bridge modes.
+ **/
+static void txgbe_configure_bridge_mode(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ unsigned int p;
+
+ if (adapter->flags & TXGBE_FLAG_SRIOV_VEPA_BRIDGE_MODE) {
+ /* disable Tx loopback, rely on switch hairpin mode */
+ wr32m(hw, TXGBE_PSR_CTL,
+ TXGBE_PSR_CTL_SW_EN, 0);
+
+ /* enable Rx source address pruning. Note, this requires
+ * replication to be enabled or else it does nothing.
+ */
+ for (p = 0; p < adapter->num_vfs; p++) {
+ TCALL(hw, mac.ops.set_source_address_pruning, true, p);
+ }
+
+ for_each_set_bit(p, &adapter->fwd_bitmask, TXGBE_MAX_MACVLANS) {
+ TCALL(hw, mac.ops.set_source_address_pruning, true, VMDQ_P(p));
+ }
+ } else {
+ /* enable Tx loopback for internal VF/PF communication */
+ wr32m(hw, TXGBE_PSR_CTL,
+ TXGBE_PSR_CTL_SW_EN, TXGBE_PSR_CTL_SW_EN);
+
+ /* disable Rx source address pruning, since we don't expect to
+ * be receiving external loopback of our transmitted frames.
+ */
+ for (p = 0; p < adapter->num_vfs; p++) {
+ TCALL(hw, mac.ops.set_source_address_pruning, false, p);
+ }
+
+ for_each_set_bit(p, &adapter->fwd_bitmask, TXGBE_MAX_MACVLANS) {
+ TCALL(hw, mac.ops.set_source_address_pruning, false, VMDQ_P(p));
+ }
+ }
+}
+
+static void txgbe_configure_virtualization(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ u32 reg_offset, vf_shift;
+ u32 i;
+
+ if (!(adapter->flags & TXGBE_FLAG_VMDQ_ENABLED))
+ return;
+
+ wr32m(hw, TXGBE_PSR_VM_CTL,
+ TXGBE_PSR_VM_CTL_POOL_MASK |
+ TXGBE_PSR_VM_CTL_REPLEN,
+ VMDQ_P(0) << TXGBE_PSR_VM_CTL_POOL_SHIFT |
+ TXGBE_PSR_VM_CTL_REPLEN);
+
+ for_each_set_bit(i, &adapter->fwd_bitmask, TXGBE_MAX_MACVLANS) {
+ /* accept untagged packets until a vlan tag is
+ * specifically set for the VMDQ queue/pool
+ */
+ wr32m(hw, TXGBE_PSR_VM_L2CTL(i),
+ TXGBE_PSR_VM_L2CTL_AUPE, TXGBE_PSR_VM_L2CTL_AUPE);
+ }
+
+ vf_shift = VMDQ_P(0) % 32;
+ reg_offset = (VMDQ_P(0) >= 32) ? 1 : 0;
+
+ /* Enable only the PF pools for Tx/Rx */
+ wr32(hw, TXGBE_RDM_VF_RE(reg_offset), (~0) << vf_shift);
+ wr32(hw, TXGBE_RDM_VF_RE(reg_offset ^ 1), reg_offset - 1);
+ wr32(hw, TXGBE_TDM_VF_TE(reg_offset), (~0) << vf_shift);
+ wr32(hw, TXGBE_TDM_VF_TE(reg_offset ^ 1), reg_offset - 1);
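+
+ /* Note (informational): when reg_offset is 0, "reg_offset - 1" wraps
+ * to all-ones as a u32 and enables every pool in the companion
+ * register; when reg_offset is 1 it writes 0, leaving the low pools
+ * for the VFs.
+ */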
+
+ if (!(adapter->flags & TXGBE_FLAG_SRIOV_ENABLED))
+ return;
+
+ /* configure default bridge settings */
+ txgbe_configure_bridge_mode(adapter);
+
+ /* Ensure LLDP and FC is set for Ethertype Antispoofing if we will be
+ * calling set_ethertype_anti_spoofing for each VF in loop below.
+ */
+ if (hw->mac.ops.set_ethertype_anti_spoofing) {
+ wr32(hw,
+ TXGBE_PSR_ETYPE_SWC(TXGBE_PSR_ETYPE_SWC_FILTER_LLDP),
+ (TXGBE_PSR_ETYPE_SWC_FILTER_EN | /* enable filter */
+ TXGBE_PSR_ETYPE_SWC_TX_ANTISPOOF |
+ TXGBE_ETH_P_LLDP)); /* LLDP eth protocol type */
+
+ wr32(hw,
+ TXGBE_PSR_ETYPE_SWC(TXGBE_PSR_ETYPE_SWC_FILTER_FC),
+ (TXGBE_PSR_ETYPE_SWC_FILTER_EN |
+ TXGBE_PSR_ETYPE_SWC_TX_ANTISPOOF |
+ ETH_P_PAUSE));
+ }
+
+ for (i = 0; i < adapter->num_vfs; i++) {
+ /* enable ethertype anti spoofing if hw supports it */
+ TCALL(hw, mac.ops.set_ethertype_anti_spoofing, true, i);
+ }
+}
+
+static void txgbe_set_rx_buffer_len(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ struct net_device *netdev = adapter->netdev;
+ u32 max_frame = netdev->mtu + ETH_HLEN + ETH_FCS_LEN;
+ struct txgbe_ring *rx_ring;
+ int i;
+ u32 mhadd;
+
+ /* adjust max frame to be at least the size of a standard frame */
+ if (max_frame < (ETH_FRAME_LEN + ETH_FCS_LEN))
+ max_frame = (ETH_FRAME_LEN + ETH_FCS_LEN);
+
+ mhadd = rd32(hw, TXGBE_PSR_MAX_SZ);
+ if (max_frame != mhadd)
+ wr32(hw, TXGBE_PSR_MAX_SZ, max_frame);
+
+ /*
+ * Setup the HW Rx Head and Tail Descriptor Pointers and
+ * the Base and Length of the Rx Descriptor Ring
+ */
+ for (i = 0; i < adapter->num_rx_queues; i++) {
+ rx_ring = adapter->rx_ring[i];
+
+ if (adapter->flags & TXGBE_FLAG_RX_HS_ENABLED) {
+ rx_ring->rx_buf_len = TXGBE_RX_HDR_SIZE;
+ set_ring_hs_enabled(rx_ring);
+ } else {
+ clear_ring_hs_enabled(rx_ring);
+ }
+
+ if (adapter->flags2 & TXGBE_FLAG2_RSC_ENABLED)
+ set_ring_rsc_enabled(rx_ring);
+ else
+ clear_ring_rsc_enabled(rx_ring);
+ }
+}
+
+/**
+ * txgbe_configure_rx - Configure Receive Unit after Reset
+ * @adapter: board private structure
+ *
+ * Configure the Rx unit of the MAC after a reset.
+ **/
+static void txgbe_configure_rx(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ int i;
+ u32 rxctrl, psrctl;
+
+ /* disable receives while setting up the descriptors */
+ TCALL(hw, mac.ops.disable_rx);
+
+ txgbe_setup_psrtype(adapter);
+
+ /* enable hw crc stripping */
+ wr32m(hw, TXGBE_RSC_CTL,
+ TXGBE_RSC_CTL_CRC_STRIP, TXGBE_RSC_CTL_CRC_STRIP);
+
+ /* RSC Setup */
+ psrctl = rd32m(hw, TXGBE_PSR_CTL, ~TXGBE_PSR_CTL_RSC_DIS);
+ psrctl |= TXGBE_PSR_CTL_RSC_ACK; /* Disable RSC for ACK packets */
+ if (!(adapter->flags2 & TXGBE_FLAG2_RSC_ENABLED))
+ psrctl |= TXGBE_PSR_CTL_RSC_DIS;
+ wr32(hw, TXGBE_PSR_CTL, psrctl);
+
+ /* Program registers for the distribution of queues */
+ txgbe_setup_mrqc(adapter);
+
+ /* set_rx_buffer_len must be called before ring initialization */
+ txgbe_set_rx_buffer_len(adapter);
+
+ /*
+ * Setup the HW Rx Head and Tail Descriptor Pointers and
+ * the Base and Length of the Rx Descriptor Ring
+ */
+ for (i = 0; i < adapter->num_rx_queues; i++)
+ txgbe_configure_rx_ring(adapter, adapter->rx_ring[i]);
+
+ rxctrl = rd32(hw, TXGBE_RDB_PB_CTL);
+
+ /* enable all receives */
+ rxctrl |= TXGBE_RDB_PB_CTL_RXEN;
+ TCALL(hw, mac.ops.enable_rx_dma, rxctrl);
+}
+
+static int txgbe_vlan_rx_add_vid(struct net_device *netdev,
+ __always_unused __be16 proto, u16 vid)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+ struct txgbe_hw *hw = &adapter->hw;
+ int pool_ndx = VMDQ_P(0);
+
+ /* add VID to filter table */
+ if (hw->mac.ops.set_vfta) {
+ if (vid < VLAN_N_VID)
+ set_bit(vid, adapter->active_vlans);
+ TCALL(hw, mac.ops.set_vfta, vid, pool_ndx, true);
+ if (adapter->flags & TXGBE_FLAG_VMDQ_ENABLED) {
+ int i;
+ /* enable vlan id for all pools */
+ for_each_set_bit(i, &adapter->fwd_bitmask,
+ TXGBE_MAX_MACVLANS)
+ TCALL(hw, mac.ops.set_vfta, vid,
+ VMDQ_P(i), true);
+ }
+ }
+
+ return 0;
+}
+
+static int txgbe_vlan_rx_kill_vid(struct net_device *netdev,
+ __always_unused __be16 proto, u16 vid)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+ struct txgbe_hw *hw = &adapter->hw;
+ int pool_ndx = VMDQ_P(0);
+
+ /* User is not allowed to remove vlan ID 0 */
+ if (!vid)
+ return 0;
+
+ /* remove VID from filter table */
+ if (hw->mac.ops.set_vfta) {
+ TCALL(hw, mac.ops.set_vfta, vid, pool_ndx, false);
+ if (adapter->flags & TXGBE_FLAG_VMDQ_ENABLED) {
+ int i;
+ /* remove vlan id from all pools */
+ for_each_set_bit(i, &adapter->fwd_bitmask,
+ TXGBE_MAX_MACVLANS)
+ TCALL(hw, mac.ops.set_vfta, vid,
+ VMDQ_P(i), false);
+ }
+ }
+
+ clear_bit(vid, adapter->active_vlans);
+
+ return 0;
+}
+
+#ifdef HAVE_8021P_SUPPORT
+/**
+ * txgbe_vlan_strip_disable - helper to disable vlan tag stripping
+ * @adapter: driver data
+ */
+void txgbe_vlan_strip_disable(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ int i, j;
+
+ /* leave vlan tag stripping enabled for DCB */
+ if (adapter->flags & TXGBE_FLAG_DCB_ENABLED)
+ return;
+
+ for (i = 0; i < adapter->num_rx_queues; i++) {
+ struct txgbe_ring *ring = adapter->rx_ring[i];
+ if (ring->accel)
+ continue;
+ j = ring->reg_idx;
+ wr32m(hw, TXGBE_PX_RR_CFG(j),
+ TXGBE_PX_RR_CFG_VLAN, 0);
+ }
+}
+
+#endif
+/**
+ * txgbe_vlan_strip_enable - helper to enable vlan tag stripping
+ * @adapter: driver data
+ */
+void txgbe_vlan_strip_enable(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ int i, j;
+
+ for (i = 0; i < adapter->num_rx_queues; i++) {
+ struct txgbe_ring *ring = adapter->rx_ring[i];
+ if (ring->accel)
+ continue;
+ j = ring->reg_idx;
+ wr32m(hw, TXGBE_PX_RR_CFG(j),
+ TXGBE_PX_RR_CFG_VLAN, TXGBE_PX_RR_CFG_VLAN);
+ }
+}
+
+void txgbe_vlan_mode(struct net_device *netdev, u32 features)
+{
+#ifdef HAVE_8021P_SUPPORT
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+ bool enable = !!(features & NETIF_F_HW_VLAN_CTAG_RX);
+
+ if (enable)
+ /* enable VLAN tag insert/strip */
+ txgbe_vlan_strip_enable(adapter);
+ else
+ /* disable VLAN tag insert/strip */
+ txgbe_vlan_strip_disable(adapter);
+#endif /* HAVE_8021P_SUPPORT */
+}
+
+static void txgbe_restore_vlan(struct txgbe_adapter *adapter)
+{
+ struct net_device *netdev = adapter->netdev;
+ u16 vid;
+
+ txgbe_vlan_mode(netdev, netdev->features);
+
+ for_each_set_bit(vid, adapter->active_vlans, VLAN_N_VID)
+ txgbe_vlan_rx_add_vid(netdev, htons(ETH_P_8021Q), vid);
+}
+
+static u8 *txgbe_addr_list_itr(struct txgbe_hw __maybe_unused *hw,
+ u8 **mc_addr_ptr, u32 *vmdq)
+{
+ struct netdev_hw_addr *mc_ptr;
+ u8 *addr = *mc_addr_ptr;
+
+ /* VMDQ_P implicitly uses the adapter struct when CONFIG_PCI_IOV is
+ * defined, so we have to wrap the pointer above correctly to prevent
+ * a warning.
+ */
+ *vmdq = VMDQ_P(0);
+
+ mc_ptr = container_of(addr, struct netdev_hw_addr, addr[0]);
+ if (mc_ptr->list.next) {
+ struct netdev_hw_addr *ha;
+ ha = list_entry(mc_ptr->list.next, struct netdev_hw_addr, list);
+ *mc_addr_ptr = ha->addr;
+ } else {
+ *mc_addr_ptr = NULL;
+ }
+
+ return addr;
+}
+
+/**
+ * txgbe_write_mc_addr_list - write multicast addresses to MTA
+ * @netdev: network interface device structure
+ *
+ * Writes multicast address list to the MTA hash table.
+ * Returns: -ENOMEM on failure
+ * 0 on no addresses written
+ * X on writing X addresses to MTA
+ **/
+int txgbe_write_mc_addr_list(struct net_device *netdev)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+ struct txgbe_hw *hw = &adapter->hw;
+ struct netdev_hw_addr *ha;
+ u8 *addr_list = NULL;
+ int addr_count = 0;
+
+ if (!hw->mac.ops.update_mc_addr_list)
+ return -ENOMEM;
+
+ if (!netif_running(netdev))
+ return 0;
+
+ if (netdev_mc_empty(netdev)) {
+ TCALL(hw, mac.ops.update_mc_addr_list, NULL, 0,
+ txgbe_addr_list_itr, true);
+ } else {
+ ha = list_first_entry(&netdev->mc.list,
+ struct netdev_hw_addr, list);
+ addr_list = ha->addr;
+ addr_count = netdev_mc_count(netdev);
+
+ TCALL(hw, mac.ops.update_mc_addr_list, addr_list, addr_count,
+ txgbe_addr_list_itr, true);
+ }
+
+ return addr_count;
+}
+
+void txgbe_full_sync_mac_table(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ int i;
+
+ for (i = 0; i < hw->mac.num_rar_entries; i++) {
+ if (adapter->mac_table[i].state & TXGBE_MAC_STATE_IN_USE) {
+ TCALL(hw, mac.ops.set_rar, i,
+ adapter->mac_table[i].addr,
+ adapter->mac_table[i].pools,
+ TXGBE_PSR_MAC_SWC_AD_H_AV);
+ } else {
+ TCALL(hw, mac.ops.clear_rar, i);
+ }
+ adapter->mac_table[i].state &= ~(TXGBE_MAC_STATE_MODIFIED);
+ }
+}
+
+static void txgbe_sync_mac_table(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ int i;
+
+ for (i = 0; i < hw->mac.num_rar_entries; i++) {
+ if (adapter->mac_table[i].state & TXGBE_MAC_STATE_MODIFIED) {
+ if (adapter->mac_table[i].state &
+ TXGBE_MAC_STATE_IN_USE) {
+ TCALL(hw, mac.ops.set_rar, i,
+ adapter->mac_table[i].addr,
+ adapter->mac_table[i].pools,
+ TXGBE_PSR_MAC_SWC_AD_H_AV);
+ } else {
+ TCALL(hw, mac.ops.clear_rar, i);
+ }
+ adapter->mac_table[i].state &=
+ ~(TXGBE_MAC_STATE_MODIFIED);
+ }
+ }
+}
+
+int txgbe_available_rars(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ u32 i, count = 0;
+
+ for (i = 0; i < hw->mac.num_rar_entries; i++) {
+ if (adapter->mac_table[i].state == 0)
+ count++;
+ }
+ return count;
+}
+
+/* this function destroys the first RAR entry */
+static void txgbe_mac_set_default_filter(struct txgbe_adapter *adapter,
+ u8 *addr)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+
+ memcpy(&adapter->mac_table[0].addr, addr, ETH_ALEN);
+ adapter->mac_table[0].pools = 1ULL << VMDQ_P(0);
+ adapter->mac_table[0].state = (TXGBE_MAC_STATE_DEFAULT |
+ TXGBE_MAC_STATE_IN_USE);
+ TCALL(hw, mac.ops.set_rar, 0, adapter->mac_table[0].addr,
+ adapter->mac_table[0].pools,
+ TXGBE_PSR_MAC_SWC_AD_H_AV);
+}
+
+int txgbe_add_mac_filter(struct txgbe_adapter *adapter, u8 *addr, u16 pool)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ u32 i;
+
+ if (is_zero_ether_addr(addr))
+ return -EINVAL;
+
+ for (i = 0; i < hw->mac.num_rar_entries; i++) {
+ if (adapter->mac_table[i].state & TXGBE_MAC_STATE_IN_USE)
+ continue;
+ adapter->mac_table[i].state |= (TXGBE_MAC_STATE_MODIFIED |
+ TXGBE_MAC_STATE_IN_USE);
+ memcpy(adapter->mac_table[i].addr, addr, ETH_ALEN);
+ adapter->mac_table[i].pools = (1ULL << pool);
+ txgbe_sync_mac_table(adapter);
+ return i;
+ }
+ return -ENOMEM;
+}
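+
+/* Usage note (informational): the return value is the RAR slot index;
+ * pools is a one-hot bitmask, e.g. pool 3 stores 1ULL << 3 == 0x8, and
+ * txgbe_sync_mac_table() pushes the new entry to hardware immediately.
+ */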
+
+static void txgbe_flush_sw_mac_table(struct txgbe_adapter *adapter)
+{
+ u32 i;
+ struct txgbe_hw *hw = &adapter->hw;
+
+ for (i = 0; i < hw->mac.num_rar_entries; i++) {
+ adapter->mac_table[i].state |= TXGBE_MAC_STATE_MODIFIED;
+ adapter->mac_table[i].state &= ~TXGBE_MAC_STATE_IN_USE;
+ memset(adapter->mac_table[i].addr, 0, ETH_ALEN);
+ adapter->mac_table[i].pools = 0;
+ }
+ txgbe_sync_mac_table(adapter);
+}
+
+int txgbe_del_mac_filter(struct txgbe_adapter *adapter, u8 *addr, u16 pool)
+{
+ /* search table for addr, if found, set to 0 and sync */
+ u32 i;
+ struct txgbe_hw *hw = &adapter->hw;
+
+ if (is_zero_ether_addr(addr))
+ return -EINVAL;
+
+ for (i = 0; i < hw->mac.num_rar_entries; i++) {
+ if (ether_addr_equal(addr, adapter->mac_table[i].addr) &&
+ (adapter->mac_table[i].pools & (1ULL << pool))) {
+ adapter->mac_table[i].state |= TXGBE_MAC_STATE_MODIFIED;
+ adapter->mac_table[i].state &= ~TXGBE_MAC_STATE_IN_USE;
+ memset(adapter->mac_table[i].addr, 0, ETH_ALEN);
+ adapter->mac_table[i].pools = 0;
+ txgbe_sync_mac_table(adapter);
+ return 0;
+ }
+ }
+ return -ENOMEM;
+}
+
+/**
+ * txgbe_write_uc_addr_list - write unicast addresses to RAR table
+ * @netdev: network interface device structure
+ *
+ * Writes unicast address list to the RAR table.
+ * Returns: -ENOMEM on failure/insufficient address space
+ * 0 on no addresses written
+ * X on writing X addresses to the RAR table
+ **/
+int txgbe_write_uc_addr_list(struct net_device *netdev, int pool)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+ int count = 0;
+
+ /* return ENOMEM indicating insufficient memory for addresses */
+ if (netdev_uc_count(netdev) > txgbe_available_rars(adapter))
+ return -ENOMEM;
+
+ if (!netdev_uc_empty(netdev)) {
+ struct netdev_hw_addr *ha;
+
+ netdev_for_each_uc_addr(ha, netdev) {
+ txgbe_del_mac_filter(adapter, ha->addr, pool);
+ txgbe_add_mac_filter(adapter, ha->addr, pool);
+ count++;
+ }
+ }
+ return count;
+}
+
+int txgbe_add_cloud_switcher(struct txgbe_adapter *adapter, u32 key, u16 pool)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+
+ UNREFERENCED_PARAMETER(pool);
+
+ wr32(hw, TXGBE_PSR_CL_SWC_IDX, 0);
+ wr32(hw, TXGBE_PSR_CL_SWC_KEY, key);
+ wr32(hw, TXGBE_PSR_CL_SWC_CTL,
+ TXGBE_PSR_CL_SWC_CTL_VLD | TXGBE_PSR_CL_SWC_CTL_DST_MSK);
+ wr32(hw, TXGBE_PSR_CL_SWC_VM_L, 0x1);
+ wr32(hw, TXGBE_PSR_CL_SWC_VM_H, 0x0);
+
+ return 0;
+}
+
+int txgbe_del_cloud_switcher(struct txgbe_adapter *adapter, u32 key, u16 pool)
+{
+ /* search table for addr, if found, set to 0 and sync */
+ struct txgbe_hw *hw = &adapter->hw;
+
+ UNREFERENCED_PARAMETER(key);
+ UNREFERENCED_PARAMETER(pool);
+
+ wr32(hw, TXGBE_PSR_CL_SWC_IDX, 0);
+ wr32(hw, TXGBE_PSR_CL_SWC_CTL, 0);
+
+ return 0;
+}
+
+/**
+ * txgbe_set_rx_mode - Unicast, Multicast and Promiscuous mode set
+ * @netdev: network interface device structure
+ *
+ * The set_rx_mode entry point is called whenever the unicast/multicast
+ * address list or the network interface flags are updated. This routine is
+ * responsible for configuring the hardware for proper unicast, multicast and
+ * promiscuous mode.
+ **/
+void txgbe_set_rx_mode(struct net_device *netdev)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+ struct txgbe_hw *hw = &adapter->hw;
+ u32 fctrl, vmolr, vlnctrl;
+ int count;
+
+ /* Check for Promiscuous and All Multicast modes */
+ fctrl = rd32m(hw, TXGBE_PSR_CTL,
+ ~(TXGBE_PSR_CTL_UPE | TXGBE_PSR_CTL_MPE));
+ vmolr = rd32m(hw, TXGBE_PSR_VM_L2CTL(VMDQ_P(0)),
+ ~(TXGBE_PSR_VM_L2CTL_UPE |
+ TXGBE_PSR_VM_L2CTL_MPE |
+ TXGBE_PSR_VM_L2CTL_ROPE |
+ TXGBE_PSR_VM_L2CTL_ROMPE));
+ vlnctrl = rd32m(hw, TXGBE_PSR_VLAN_CTL,
+ ~(TXGBE_PSR_VLAN_CTL_VFE |
+ TXGBE_PSR_VLAN_CTL_CFIEN));
+
+ /* set all bits that we expect to always be set */
+ fctrl |= TXGBE_PSR_CTL_BAM | TXGBE_PSR_CTL_MFE;
+ vmolr |= TXGBE_PSR_VM_L2CTL_BAM |
+ TXGBE_PSR_VM_L2CTL_AUPE |
+ TXGBE_PSR_VM_L2CTL_VACC;
+ vlnctrl |= TXGBE_PSR_VLAN_CTL_VFE;
+
+ hw->addr_ctrl.user_set_promisc = false;
+ if (netdev->flags & IFF_PROMISC) {
+ hw->addr_ctrl.user_set_promisc = true;
+ fctrl |= (TXGBE_PSR_CTL_UPE | TXGBE_PSR_CTL_MPE);
+		/* the PF doesn't want packets routed to the VFs, so leave UPE clear */
+ vmolr |= TXGBE_PSR_VM_L2CTL_MPE;
+ vlnctrl &= ~TXGBE_PSR_VLAN_CTL_VFE;
+ }
+
+ if (netdev->flags & IFF_ALLMULTI) {
+ fctrl |= TXGBE_PSR_CTL_MPE;
+ vmolr |= TXGBE_PSR_VM_L2CTL_MPE;
+ }
+
+ /* This is useful for sniffing bad packets. */
+ if (netdev->features & NETIF_F_RXALL) {
+ vmolr |= (TXGBE_PSR_VM_L2CTL_UPE | TXGBE_PSR_VM_L2CTL_MPE);
+ vlnctrl &= ~TXGBE_PSR_VLAN_CTL_VFE;
+ /* receive bad packets */
+ wr32m(hw, TXGBE_RSC_CTL,
+ TXGBE_RSC_CTL_SAVE_MAC_ERR,
+ TXGBE_RSC_CTL_SAVE_MAC_ERR);
+ } else {
+ vmolr |= TXGBE_PSR_VM_L2CTL_ROPE | TXGBE_PSR_VM_L2CTL_ROMPE;
+ }
+
+ /*
+ * Write addresses to available RAR registers, if there is not
+ * sufficient space to store all the addresses then enable
+ * unicast promiscuous mode
+ */
+ count = txgbe_write_uc_addr_list(netdev, VMDQ_P(0));
+ if (count < 0) {
+ vmolr &= ~TXGBE_PSR_VM_L2CTL_ROPE;
+ vmolr |= TXGBE_PSR_VM_L2CTL_UPE;
+ }
+
+ /*
+ * Write addresses to the MTA, if the attempt fails
+ * then we should just turn on promiscuous mode so
+ * that we can at least receive multicast traffic
+ */
+ count = txgbe_write_mc_addr_list(netdev);
+ if (count < 0) {
+ vmolr &= ~TXGBE_PSR_VM_L2CTL_ROMPE;
+ vmolr |= TXGBE_PSR_VM_L2CTL_MPE;
+ }
+
+ wr32(hw, TXGBE_PSR_VLAN_CTL, vlnctrl);
+ wr32(hw, TXGBE_PSR_CTL, fctrl);
+ wr32(hw, TXGBE_PSR_VM_L2CTL(VMDQ_P(0)), vmolr);
+
+ if (netdev->features & NETIF_F_HW_VLAN_CTAG_RX)
+ txgbe_vlan_strip_enable(adapter);
+ else
+ txgbe_vlan_strip_disable(adapter);
+
+ /* enable cloud switch */
+	if (adapter->flags2 & TXGBE_FLAG2_CLOUD_SWITCH_ENABLED)
+		txgbe_add_cloud_switcher(adapter, 0x10, 0);
+}
+
+static void txgbe_napi_enable_all(struct txgbe_adapter *adapter)
+{
+ struct txgbe_q_vector *q_vector;
+ int q_idx;
+
+ for (q_idx = 0; q_idx < adapter->num_q_vectors; q_idx++) {
+ q_vector = adapter->q_vector[q_idx];
+ napi_enable(&q_vector->napi);
+ }
+}
+
+static void txgbe_napi_disable_all(struct txgbe_adapter *adapter)
+{
+ struct txgbe_q_vector *q_vector;
+ int q_idx;
+
+ for (q_idx = 0; q_idx < adapter->num_q_vectors; q_idx++) {
+ q_vector = adapter->q_vector[q_idx];
+ napi_disable(&q_vector->napi);
+ }
+}
+
+void txgbe_clear_vxlan_port(struct txgbe_adapter *adapter)
+{
+ adapter->vxlan_port = 0;
+ if (!(adapter->flags & TXGBE_FLAG_VXLAN_OFFLOAD_CAPABLE))
+ return;
+ wr32(&adapter->hw, TXGBE_CFG_VXLAN, 0);
+}
+
+#define TXGBE_GSO_PARTIAL_FEATURES (NETIF_F_GSO_GRE | \
+ NETIF_F_GSO_GRE_CSUM | \
+ NETIF_F_GSO_IPXIP4 | \
+ NETIF_F_GSO_IPXIP6 | \
+ NETIF_F_GSO_UDP_TUNNEL | \
+ NETIF_F_GSO_UDP_TUNNEL_CSUM)
+
+static inline unsigned long txgbe_tso_features(void)
+{
+ unsigned long features = 0;
+
+ features |= NETIF_F_TSO;
+ features |= NETIF_F_TSO6;
+ features |= NETIF_F_GSO_PARTIAL | TXGBE_GSO_PARTIAL_FEATURES;
+
+ return features;
+}
+
+static void txgbe_configure_lli(struct txgbe_adapter *adapter)
+{
+ /* lli should only be enabled with MSI-X and MSI */
+ if (!(adapter->flags & TXGBE_FLAG_MSI_ENABLED) &&
+ !(adapter->flags & TXGBE_FLAG_MSIX_ENABLED))
+ return;
+
+ if (adapter->lli_etype) {
+ wr32(&adapter->hw, TXGBE_RDB_5T_CTL1(0),
+ (TXGBE_RDB_5T_CTL1_LLI |
+ TXGBE_RDB_5T_CTL1_SIZE_BP));
+ wr32(&adapter->hw, TXGBE_RDB_ETYPE_CLS(0),
+ TXGBE_RDB_ETYPE_CLS_LLI);
+ wr32(&adapter->hw, TXGBE_PSR_ETYPE_SWC(0),
+ (adapter->lli_etype |
+ TXGBE_PSR_ETYPE_SWC_FILTER_EN));
+ }
+
+ if (adapter->lli_port) {
+ wr32(&adapter->hw, TXGBE_RDB_5T_CTL1(0),
+ (TXGBE_RDB_5T_CTL1_LLI |
+ TXGBE_RDB_5T_CTL1_SIZE_BP));
+ wr32(&adapter->hw, TXGBE_RDB_5T_CTL0(0),
+ (TXGBE_RDB_5T_CTL0_POOL_MASK_EN |
+ (TXGBE_RDB_5T_CTL0_PRIORITY_MASK <<
+ TXGBE_RDB_5T_CTL0_PRIORITY_SHIFT) |
+ (TXGBE_RDB_5T_CTL0_DEST_PORT_MASK <<
+ TXGBE_RDB_5T_CTL0_5TUPLE_MASK_SHIFT)));
+
+ wr32(&adapter->hw, TXGBE_RDB_5T_SDP(0),
+ (adapter->lli_port << 16));
+ }
+
+ if (adapter->lli_size) {
+ wr32(&adapter->hw, TXGBE_RDB_5T_CTL1(0),
+ TXGBE_RDB_5T_CTL1_LLI);
+ wr32m(&adapter->hw, TXGBE_RDB_LLI_THRE,
+ TXGBE_RDB_LLI_THRE_SZ(~0), adapter->lli_size);
+ wr32(&adapter->hw, TXGBE_RDB_5T_CTL0(0),
+ (TXGBE_RDB_5T_CTL0_POOL_MASK_EN |
+ (TXGBE_RDB_5T_CTL0_PRIORITY_MASK <<
+ TXGBE_RDB_5T_CTL0_PRIORITY_SHIFT) |
+ (TXGBE_RDB_5T_CTL0_5TUPLE_MASK_MASK <<
+ TXGBE_RDB_5T_CTL0_5TUPLE_MASK_SHIFT)));
+ }
+
+ if (adapter->lli_vlan_pri) {
+ wr32m(&adapter->hw, TXGBE_RDB_LLI_THRE,
+ TXGBE_RDB_LLI_THRE_PRIORITY_EN |
+ TXGBE_RDB_LLI_THRE_UP(~0),
+ TXGBE_RDB_LLI_THRE_PRIORITY_EN |
+ (adapter->lli_vlan_pri << TXGBE_RDB_LLI_THRE_UP_SHIFT));
+ }
+}
+
+/* Additional bittime to account for TXGBE framing */
+#define TXGBE_ETH_FRAMING 20
+
+/*
+ * txgbe_hpbthresh - calculate high water mark for flow control
+ *
+ * @adapter: board private structure to calculate for
+ * @pb: packet buffer to calculate
+ */
+static int txgbe_hpbthresh(struct txgbe_adapter *adapter, int pb)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ struct net_device *dev = adapter->netdev;
+ int link, tc, kb, marker;
+ u32 dv_id, rx_pba;
+
+ /* Calculate max LAN frame size */
+ tc = link = dev->mtu + ETH_HLEN + ETH_FCS_LEN + TXGBE_ETH_FRAMING;
+
+ /* Calculate delay value for device */
+ dv_id = TXGBE_DV(link, tc);
+
+ /* Loopback switch introduces additional latency */
+ if (adapter->flags & TXGBE_FLAG_SRIOV_ENABLED)
+ dv_id += TXGBE_B2BT(tc);
+
+ /* Delay value is calculated in bit times convert to KB */
+ kb = TXGBE_BT2KB(dv_id);
+ rx_pba = rd32(hw, TXGBE_RDB_PB_SZ(pb))
+ >> TXGBE_RDB_PB_SZ_SHIFT;
+
+ marker = rx_pba - kb;
+
+ /* It is possible that the packet buffer is not large enough
+	 * to provide the required headroom. In this case throw an error
+	 * to the user and do the best we can.
+ */
+ if (marker < 0) {
+		e_warn(drv, "Packet Buffer(%i) cannot provide enough "
+		       "headroom to support flow control. "
+		       "Decrease MTU or number of traffic classes\n", pb);
+ marker = tc + 1;
+ }
+
+ return marker;
+}
+
+/*
+ * txgbe_lpbthresh - calculate low water mark for flow control
+ *
+ * @adapter: board private structure to calculate for
+ * @pb: packet buffer to calculate
+ */
+static int txgbe_lpbthresh(struct txgbe_adapter *adapter, int __maybe_unused pb)
+{
+ struct net_device *dev = adapter->netdev;
+ int tc;
+ u32 dv_id;
+
+ /* Calculate max LAN frame size */
+ tc = dev->mtu + ETH_HLEN + ETH_FCS_LEN;
+
+ /* Calculate delay value for device */
+ dv_id = TXGBE_LOW_DV(tc);
+
+ /* Delay value is calculated in bit times convert to KB */
+ return TXGBE_BT2KB(dv_id);
+}
+
+/*
+ * txgbe_pbthresh_setup - calculate and setup high low water marks
+ */
+static void txgbe_pbthresh_setup(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ int num_tc = netdev_get_num_tc(adapter->netdev);
+ int i;
+
+ if (!num_tc)
+ num_tc = 1;
+
+ for (i = 0; i < num_tc; i++) {
+ hw->fc.high_water[i] = txgbe_hpbthresh(adapter, i);
+ hw->fc.low_water[i] = txgbe_lpbthresh(adapter, i);
+
+ /* Low water marks must not be larger than high water marks */
+ if (hw->fc.low_water[i] > hw->fc.high_water[i])
+ hw->fc.low_water[i] = 0;
+ }
+
+ for (; i < TXGBE_DCB_MAX_TRAFFIC_CLASS; i++)
+ hw->fc.high_water[i] = 0;
+}
+
+static void txgbe_configure_pb(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ int hdrm;
+ int tc = netdev_get_num_tc(adapter->netdev);
+
+ if (adapter->flags & TXGBE_FLAG_FDIR_HASH_CAPABLE ||
+ adapter->flags & TXGBE_FLAG_FDIR_PERFECT_CAPABLE)
+ hdrm = 32 << adapter->fdir_pballoc;
+ else
+ hdrm = 0;
+
+ TCALL(hw, mac.ops.setup_rxpba, tc, hdrm, PBA_STRATEGY_EQUAL);
+ txgbe_pbthresh_setup(adapter);
+}
+
+static void txgbe_fdir_filter_restore(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ struct hlist_node *node;
+ struct txgbe_fdir_filter *filter;
+
+ spin_lock(&adapter->fdir_perfect_lock);
+
+ if (!hlist_empty(&adapter->fdir_filter_list))
+ txgbe_fdir_set_input_mask(hw, &adapter->fdir_mask,
+ adapter->cloud_mode);
+
+ hlist_for_each_entry_safe(filter, node,
+ &adapter->fdir_filter_list, fdir_node) {
+ txgbe_fdir_write_perfect_filter(hw,
+ &filter->filter,
+ filter->sw_idx,
+ (filter->action == TXGBE_RDB_FDIR_DROP_QUEUE) ?
+ TXGBE_RDB_FDIR_DROP_QUEUE :
+ adapter->rx_ring[filter->action]->reg_idx,
+ adapter->cloud_mode);
+ }
+
+ spin_unlock(&adapter->fdir_perfect_lock);
+}
+
+void txgbe_configure_isb(struct txgbe_adapter *adapter)
+{
+ /* set ISB Address */
+ struct txgbe_hw *hw = &adapter->hw;
+
+ wr32(hw, TXGBE_PX_ISB_ADDR_L,
+ adapter->isb_dma & DMA_BIT_MASK(32));
+ wr32(hw, TXGBE_PX_ISB_ADDR_H, adapter->isb_dma >> 32);
+}
+
+void txgbe_configure_port(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ u32 value, i;
+ u8 tcs = netdev_get_num_tc(adapter->netdev);
+
+ if (adapter->flags & TXGBE_FLAG_VMDQ_ENABLED) {
+ if (tcs > 4)
+ /* 8 TCs */
+ value = TXGBE_CFG_PORT_CTL_NUM_TC_8 |
+ TXGBE_CFG_PORT_CTL_NUM_VT_16 |
+ TXGBE_CFG_PORT_CTL_DCB_EN;
+ else if (tcs > 1)
+ /* 4 TCs */
+ value = TXGBE_CFG_PORT_CTL_NUM_TC_4 |
+ TXGBE_CFG_PORT_CTL_NUM_VT_32 |
+ TXGBE_CFG_PORT_CTL_DCB_EN;
+ else if (adapter->ring_feature[RING_F_RSS].indices == 4)
+ value = TXGBE_CFG_PORT_CTL_NUM_VT_32;
+ else /* adapter->ring_feature[RING_F_RSS].indices <= 2 */
+ value = TXGBE_CFG_PORT_CTL_NUM_VT_64;
+ } else {
+ if (tcs > 4)
+ value = TXGBE_CFG_PORT_CTL_NUM_TC_8 |
+ TXGBE_CFG_PORT_CTL_DCB_EN;
+ else if (tcs > 1)
+ value = TXGBE_CFG_PORT_CTL_NUM_TC_4 |
+ TXGBE_CFG_PORT_CTL_DCB_EN;
+ else
+ value = 0;
+ }
+
+ value |= TXGBE_CFG_PORT_CTL_D_VLAN | TXGBE_CFG_PORT_CTL_QINQ;
+ wr32m(hw, TXGBE_CFG_PORT_CTL,
+ TXGBE_CFG_PORT_CTL_NUM_TC_MASK |
+ TXGBE_CFG_PORT_CTL_NUM_VT_MASK |
+ TXGBE_CFG_PORT_CTL_DCB_EN |
+ TXGBE_CFG_PORT_CTL_D_VLAN |
+ TXGBE_CFG_PORT_CTL_QINQ,
+ value);
+
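+	/* each TPID register holds two tag protocol IDs: pair 0 carries
+	 * 802.1Q and 802.1ad, the remaining pairs default to 802.1Q
+	 */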
+ wr32(hw, TXGBE_CFG_TAG_TPID(0),
+ ETH_P_8021Q | ETH_P_8021AD << 16);
+ adapter->hw.tpid[0] = ETH_P_8021Q;
+ adapter->hw.tpid[1] = ETH_P_8021AD;
+ for (i = 1; i < 4; i++)
+ wr32(hw, TXGBE_CFG_TAG_TPID(i),
+ ETH_P_8021Q | ETH_P_8021Q << 16);
+ for (i = 2; i < 8; i++)
+ adapter->hw.tpid[i] = ETH_P_8021Q;
+}
+
+static void txgbe_configure(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+
+ txgbe_configure_pb(adapter);
+
+ /*
+ * We must restore virtualization before VLANs or else
+ * the VLVF registers will not be populated
+ */
+ txgbe_configure_virtualization(adapter);
+ txgbe_configure_port(adapter);
+
+ txgbe_set_rx_mode(adapter->netdev);
+ txgbe_restore_vlan(adapter);
+
+ TCALL(hw, mac.ops.disable_sec_rx_path);
+
+ if (adapter->flags & TXGBE_FLAG_FDIR_HASH_CAPABLE) {
+ txgbe_init_fdir_signature(&adapter->hw,
+ adapter->fdir_pballoc);
+ } else if (adapter->flags & TXGBE_FLAG_FDIR_PERFECT_CAPABLE) {
+ txgbe_init_fdir_perfect(&adapter->hw,
+ adapter->fdir_pballoc,
+ adapter->cloud_mode);
+ txgbe_fdir_filter_restore(adapter);
+ }
+
+ TCALL(hw, mac.ops.enable_sec_rx_path);
+
+ TCALL(hw, mac.ops.setup_eee,
+ (adapter->flags2 & TXGBE_FLAG2_EEE_CAPABLE) &&
+ (adapter->flags2 & TXGBE_FLAG2_EEE_ENABLED));
+
+ txgbe_configure_tx(adapter);
+ txgbe_configure_rx(adapter);
+ txgbe_configure_isb(adapter);
+}
+
+static bool txgbe_is_sfp(struct txgbe_hw *hw)
+{
+ switch (TCALL(hw, mac.ops.get_media_type)) {
+ case txgbe_media_type_fiber:
+ return true;
+ default:
+ return false;
+ }
+}
+
+static bool txgbe_is_backplane(struct txgbe_hw *hw)
+{
+ switch (TCALL(hw, mac.ops.get_media_type)) {
+ case txgbe_media_type_backplane:
+ return true;
+ default:
+ return false;
+ }
+}
+
+/**
+ * txgbe_sfp_link_config - set up SFP+ link
+ * @adapter: pointer to private adapter struct
+ **/
+static void txgbe_sfp_link_config(struct txgbe_adapter *adapter)
+{
+ /*
+	 * We are assuming the worst case scenario here, and that
+	 * is that an SFP was inserted/removed after the reset
+	 * but before SFP detection was enabled. As such the best
+	 * solution is to just start searching as soon as we start up.
+ */
+
+ adapter->flags2 |= TXGBE_FLAG2_SFP_NEEDS_RESET;
+ adapter->sfp_poll_time = 0;
+}
+
+/**
+ * txgbe_non_sfp_link_config - set up non-SFP+ link
+ * @hw: pointer to private hardware struct
+ *
+ * Returns 0 on success, negative on failure
+ **/
+static int txgbe_non_sfp_link_config(struct txgbe_hw *hw)
+{
+ u32 speed;
+ bool autoneg, link_up = false;
+ u32 ret = TXGBE_ERR_LINK_SETUP;
+
+ ret = TCALL(hw, mac.ops.check_link, &speed, &link_up, false);
+
+ if (ret)
+ goto link_cfg_out;
+
+ if (link_up)
+ return 0;
+
+ if ((hw->subsystem_id & 0xF0) != TXGBE_ID_SFI_XAUI) {
+ /* setup external PHY Mac Interface */
+ mtdSetMacInterfaceControl(&hw->phy_dev, hw->phy.addr, MTD_MAC_TYPE_XAUI,
+ MTD_FALSE, MTD_MAC_SNOOP_OFF,
+ 0, MTD_MAC_SPEED_1000_MBPS,
+ MTD_MAX_MAC_SPEED_LEAVE_UNCHANGED,
+ MTD_TRUE, MTD_TRUE);
+
+ speed = hw->phy.autoneg_advertised;
+ if (!speed)
+ ret = TCALL(hw, mac.ops.get_link_capabilities, &speed,
+ &autoneg);
+ if (ret)
+ goto link_cfg_out;
+ } else {
+ speed = TXGBE_LINK_SPEED_10GB_FULL;
+ autoneg = false;
+ }
+
+ ret = TCALL(hw, mac.ops.setup_link, speed, autoneg);
+
+link_cfg_out:
+ return ret;
+}
+
+/**
+ * txgbe_clear_vf_stats_counters - Clear out VF stats after reset
+ * @adapter: board private structure
+ *
+ * On a reset we need to clear out the VF stats or accounting gets
+ * messed up because they're not clear on read.
+ **/
+static void txgbe_clear_vf_stats_counters(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ int i;
+
+ for (i = 0; i < adapter->num_vfs; i++) {
+ adapter->vfinfo[i].last_vfstats.gprc =
+ rd32(hw, TXGBE_VX_GPRC(i));
+ adapter->vfinfo[i].saved_rst_vfstats.gprc +=
+ adapter->vfinfo[i].vfstats.gprc;
+ adapter->vfinfo[i].vfstats.gprc = 0;
+ adapter->vfinfo[i].last_vfstats.gptc =
+ rd32(hw, TXGBE_VX_GPTC(i));
+ adapter->vfinfo[i].saved_rst_vfstats.gptc +=
+ adapter->vfinfo[i].vfstats.gptc;
+ adapter->vfinfo[i].vfstats.gptc = 0;
+ adapter->vfinfo[i].last_vfstats.gorc =
+ rd32(hw, TXGBE_VX_GORC_LSB(i));
+ adapter->vfinfo[i].saved_rst_vfstats.gorc +=
+ adapter->vfinfo[i].vfstats.gorc;
+ adapter->vfinfo[i].vfstats.gorc = 0;
+ adapter->vfinfo[i].last_vfstats.gotc =
+ rd32(hw, TXGBE_VX_GOTC_LSB(i));
+ adapter->vfinfo[i].saved_rst_vfstats.gotc +=
+ adapter->vfinfo[i].vfstats.gotc;
+ adapter->vfinfo[i].vfstats.gotc = 0;
+ adapter->vfinfo[i].last_vfstats.mprc =
+ rd32(hw, TXGBE_VX_MPRC(i));
+ adapter->vfinfo[i].saved_rst_vfstats.mprc +=
+ adapter->vfinfo[i].vfstats.mprc;
+ adapter->vfinfo[i].vfstats.mprc = 0;
+ }
+}
+
+static void txgbe_setup_gpie(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ u32 gpie = 0;
+
+ if (adapter->flags & TXGBE_FLAG_MSIX_ENABLED) {
+ gpie = TXGBE_PX_GPIE_MODEL;
+ /*
+ * use EIAM to auto-mask when MSI-X interrupt is asserted
+ * this saves a register write for every interrupt
+ */
+ } else {
+		/* legacy interrupts, use EIAM to auto-mask when reading EICR,
+		 * specifically only auto mask tx and rx interrupts
+		 */
+ }
+
+ wr32(hw, TXGBE_PX_GPIE, gpie);
+}
+
+static void txgbe_up_complete(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ int err;
+ u32 links_reg;
+ u16 value;
+
+ txgbe_get_hw_control(adapter);
+ txgbe_setup_gpie(adapter);
+
+ if (adapter->flags & TXGBE_FLAG_MSIX_ENABLED)
+ txgbe_configure_msix(adapter);
+ else
+ txgbe_configure_msi_and_legacy(adapter);
+
+ /* enable the optics for SFP+ fiber */
+ TCALL(hw, mac.ops.enable_tx_laser);
+
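+	/* order prior configuration writes before clearing __TXGBE_DOWN */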
+ smp_mb__before_atomic();
+ clear_bit(__TXGBE_DOWN, &adapter->state);
+ txgbe_napi_enable_all(adapter);
+
+ txgbe_configure_lli(adapter);
+
+ if (txgbe_is_sfp(hw)) {
+ txgbe_sfp_link_config(adapter);
+ } else if (txgbe_is_backplane(hw)) {
+ adapter->flags |= TXGBE_FLAG_NEED_LINK_CONFIG;
+ txgbe_service_event_schedule(adapter);
+ } else {
+ err = txgbe_non_sfp_link_config(hw);
+ if (err)
+ e_err(probe, "link_config FAILED %d\n", err);
+ }
+
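+	/* sync the MAC transmit speed with the link state the port reports */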
+ links_reg = rd32(hw, TXGBE_CFG_PORT_ST);
+ if (links_reg & TXGBE_CFG_PORT_ST_LINK_UP) {
+ if (links_reg & TXGBE_CFG_PORT_ST_LINK_10G) {
+ wr32(hw, TXGBE_MAC_TX_CFG,
+ (rd32(hw, TXGBE_MAC_TX_CFG) &
+ ~TXGBE_MAC_TX_CFG_SPEED_MASK) |
+ TXGBE_MAC_TX_CFG_SPEED_10G);
+ } else if (links_reg & (TXGBE_CFG_PORT_ST_LINK_1G | TXGBE_CFG_PORT_ST_LINK_100M)) {
+ wr32(hw, TXGBE_MAC_TX_CFG,
+ (rd32(hw, TXGBE_MAC_TX_CFG) &
+ ~TXGBE_MAC_TX_CFG_SPEED_MASK) |
+ TXGBE_MAC_TX_CFG_SPEED_1G);
+ }
+ }
+
+ /* clear any pending interrupts, may auto mask */
+ rd32(hw, TXGBE_PX_IC(0));
+ rd32(hw, TXGBE_PX_IC(1));
+ rd32(hw, TXGBE_PX_MISC_IC);
+ txgbe_irq_enable(adapter, true, true);
+
+ /* enable external PHY interrupt */
+ if ((hw->subsystem_id & 0xF0) == TXGBE_ID_XAUI) {
+ txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 0x03, 0x8011, &value);
+ /* only enable T unit int */
+ txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xf043, 0x1);
+ /* active high */
+ txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 31, 0xf041, 0x0);
+ /* enable AN complete and link status change int */
+ txgbe_write_mdio(&hw->phy_dev, hw->phy.addr, 0x03, 0x8010, 0xc00);
+ }
+
+ /* enable transmits */
+ netif_tx_start_all_queues(adapter->netdev);
+
+ /* bring the link up in the watchdog, this could race with our first
+ * link up interrupt but shouldn't be a problem */
+ adapter->flags |= TXGBE_FLAG_NEED_LINK_UPDATE;
+ adapter->link_check_timeout = jiffies;
+
+ mod_timer(&adapter->service_timer, jiffies);
+ txgbe_clear_vf_stats_counters(adapter);
+
+ /* Set PF Reset Done bit so PF/VF Mail Ops can work */
+ wr32m(hw, TXGBE_CFG_PORT_CTL,
+ TXGBE_CFG_PORT_CTL_PFRSTD, TXGBE_CFG_PORT_CTL_PFRSTD);
+}
+
+void txgbe_reinit_locked(struct txgbe_adapter *adapter)
+{
+ WARN_ON(in_interrupt());
+ /* put off any impending NetWatchDogTimeout */
+ netif_trans_update(adapter->netdev);
+
+ while (test_and_set_bit(__TXGBE_RESETTING, &adapter->state))
+ usleep_range(1000, 2000);
+ txgbe_down(adapter);
+ /*
+ * If SR-IOV enabled then wait a bit before bringing the adapter
+ * back up to give the VFs time to respond to the reset. The
+ * two second wait is based upon the watchdog timer cycle in
+ * the VF driver.
+ */
+ if (adapter->flags & TXGBE_FLAG_SRIOV_ENABLED)
+ msleep(2000);
+ txgbe_up(adapter);
+ clear_bit(__TXGBE_RESETTING, &adapter->state);
+}
+
+void txgbe_up(struct txgbe_adapter *adapter)
+{
+ /* hardware has been reset, we need to reload some things */
+ txgbe_configure(adapter);
+
+ txgbe_up_complete(adapter);
+}
+
+void txgbe_reset(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ struct net_device *netdev = adapter->netdev;
+ int err;
+ u8 old_addr[ETH_ALEN];
+
+ if (TXGBE_REMOVED(hw->hw_addr))
+ return;
+ /* lock SFP init bit to prevent race conditions with the watchdog */
+ while (test_and_set_bit(__TXGBE_IN_SFP_INIT, &adapter->state))
+ usleep_range(1000, 2000);
+
+ /* clear all SFP and link config related flags while holding SFP_INIT */
+ adapter->flags2 &= ~(TXGBE_FLAG2_SEARCH_FOR_SFP |
+ TXGBE_FLAG2_SFP_NEEDS_RESET);
+ adapter->flags &= ~TXGBE_FLAG_NEED_LINK_CONFIG;
+
+ err = TCALL(hw, mac.ops.init_hw);
+ switch (err) {
+ case 0:
+ case TXGBE_ERR_SFP_NOT_PRESENT:
+ case TXGBE_ERR_SFP_NOT_SUPPORTED:
+ break;
+ case TXGBE_ERR_MASTER_REQUESTS_PENDING:
+ e_dev_err("master disable timed out\n");
+ break;
+ case TXGBE_ERR_EEPROM_VERSION:
+ /* We are running on a pre-production device, log a warning */
+ e_dev_warn("This device is a pre-production adapter/LOM. "
+ "Please be aware there may be issues associated "
+ "with your hardware. If you are experiencing "
+ "problems please contact your hardware "
+ "representative who provided you with this "
+ "hardware.\n");
+ break;
+ default:
+ e_dev_err("Hardware Error: %d\n", err);
+ }
+
+ clear_bit(__TXGBE_IN_SFP_INIT, &adapter->state);
+ /* do not flush user set addresses */
+ memcpy(old_addr, &adapter->mac_table[0].addr, netdev->addr_len);
+ txgbe_flush_sw_mac_table(adapter);
+ txgbe_mac_set_default_filter(adapter, old_addr);
+
+ /* update SAN MAC vmdq pool selection */
+ TCALL(hw, mac.ops.set_vmdq_san_mac, VMDQ_P(0));
+
+ /* Clear saved DMA coalescing values except for watchdog_timer */
+ hw->mac.dmac_config.fcoe_en = false;
+ hw->mac.dmac_config.link_speed = 0;
+ hw->mac.dmac_config.fcoe_tc = 0;
+ hw->mac.dmac_config.num_tcs = 0;
+
+ if (test_bit(__TXGBE_PTP_RUNNING, &adapter->state))
+ txgbe_ptp_reset(adapter);
+}
+
+/**
+ * txgbe_clean_rx_ring - Free Rx Buffers per Queue
+ * @rx_ring: ring to free buffers from
+ **/
+static void txgbe_clean_rx_ring(struct txgbe_ring *rx_ring)
+{
+ struct device *dev = rx_ring->dev;
+ unsigned long size;
+ u16 i;
+
+ /* ring already cleared, nothing to do */
+ if (!rx_ring->rx_buffer_info)
+ return;
+
+ /* Free all the Rx ring sk_buffs */
+ for (i = 0; i < rx_ring->count; i++) {
+ struct txgbe_rx_buffer *rx_buffer = &rx_ring->rx_buffer_info[i];
+ if (rx_buffer->dma) {
+ dma_unmap_single(dev,
+ rx_buffer->dma,
+ rx_ring->rx_buf_len,
+ DMA_FROM_DEVICE);
+ rx_buffer->dma = 0;
+ }
+
+ if (rx_buffer->skb) {
+ struct sk_buff *skb = rx_buffer->skb;
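+
+			/* unmap any buffer DMA still tracked by the skb */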
+ if (TXGBE_CB(skb)->dma_released) {
+ dma_unmap_single(dev,
+ TXGBE_CB(skb)->dma,
+ rx_ring->rx_buf_len,
+ DMA_FROM_DEVICE);
+ TXGBE_CB(skb)->dma = 0;
+ TXGBE_CB(skb)->dma_released = false;
+ }
+
+ if (TXGBE_CB(skb)->page_released)
+ dma_unmap_page(dev,
+ TXGBE_CB(skb)->dma,
+ txgbe_rx_bufsz(rx_ring),
+ DMA_FROM_DEVICE);
+ dev_kfree_skb(skb);
+ rx_buffer->skb = NULL;
+ }
+
+ if (!rx_buffer->page)
+ continue;
+
+ dma_unmap_page(dev, rx_buffer->page_dma,
+ txgbe_rx_pg_size(rx_ring),
+ DMA_FROM_DEVICE);
+
+ __free_pages(rx_buffer->page,
+ txgbe_rx_pg_order(rx_ring));
+ rx_buffer->page = NULL;
+ }
+
+ size = sizeof(struct txgbe_rx_buffer) * rx_ring->count;
+ memset(rx_ring->rx_buffer_info, 0, size);
+
+ /* Zero out the descriptor ring */
+ memset(rx_ring->desc, 0, rx_ring->size);
+
+ rx_ring->next_to_alloc = 0;
+ rx_ring->next_to_clean = 0;
+ rx_ring->next_to_use = 0;
+}
+
+/**
+ * txgbe_clean_tx_ring - Free Tx Buffers
+ * @tx_ring: ring to be cleaned
+ **/
+static void txgbe_clean_tx_ring(struct txgbe_ring *tx_ring)
+{
+ struct txgbe_tx_buffer *tx_buffer_info;
+ unsigned long size;
+ u16 i;
+
+ /* ring already cleared, nothing to do */
+ if (!tx_ring->tx_buffer_info)
+ return;
+
+ /* Free all the Tx ring sk_buffs */
+ for (i = 0; i < tx_ring->count; i++) {
+ tx_buffer_info = &tx_ring->tx_buffer_info[i];
+ txgbe_unmap_and_free_tx_resource(tx_ring, tx_buffer_info);
+ }
+
+ netdev_tx_reset_queue(txring_txq(tx_ring));
+
+ size = sizeof(struct txgbe_tx_buffer) * tx_ring->count;
+ memset(tx_ring->tx_buffer_info, 0, size);
+
+ /* Zero out the descriptor ring */
+ memset(tx_ring->desc, 0, tx_ring->size);
+}
+
+/**
+ * txgbe_clean_all_rx_rings - Free Rx Buffers for all queues
+ * @adapter: board private structure
+ **/
+static void txgbe_clean_all_rx_rings(struct txgbe_adapter *adapter)
+{
+ int i;
+
+ for (i = 0; i < adapter->num_rx_queues; i++)
+ txgbe_clean_rx_ring(adapter->rx_ring[i]);
+}
+
+/**
+ * txgbe_clean_all_tx_rings - Free Tx Buffers for all queues
+ * @adapter: board private structure
+ **/
+static void txgbe_clean_all_tx_rings(struct txgbe_adapter *adapter)
+{
+ int i;
+
+ for (i = 0; i < adapter->num_tx_queues; i++)
+ txgbe_clean_tx_ring(adapter->tx_ring[i]);
+}
+
+static void txgbe_fdir_filter_exit(struct txgbe_adapter *adapter)
+{
+ struct hlist_node *node;
+ struct txgbe_fdir_filter *filter;
+
+ spin_lock(&adapter->fdir_perfect_lock);
+
+ hlist_for_each_entry_safe(filter, node,
+ &adapter->fdir_filter_list, fdir_node) {
+ hlist_del(&filter->fdir_node);
+ kfree(filter);
+ }
+ adapter->fdir_filter_count = 0;
+
+ spin_unlock(&adapter->fdir_perfect_lock);
+}
+
+void txgbe_disable_device(struct txgbe_adapter *adapter)
+{
+ struct net_device *netdev = adapter->netdev;
+ struct txgbe_hw *hw = &adapter->hw;
+
+ u32 i;
+
+ /* signal that we are down to the interrupt handler */
+ if (test_and_set_bit(__TXGBE_DOWN, &adapter->state))
+ return; /* do nothing if already down */
+
+ txgbe_disable_pcie_master(hw);
+ /* disable receives */
+ TCALL(hw, mac.ops.disable_rx);
+
+ /* disable all enabled rx queues */
+ for (i = 0; i < adapter->num_rx_queues; i++)
+ /* this call also flushes the previous write */
+ txgbe_disable_rx_queue(adapter, adapter->rx_ring[i]);
+
+ netif_tx_stop_all_queues(netdev);
+
+ /* call carrier off first to avoid false dev_watchdog timeouts */
+ netif_carrier_off(netdev);
+ netif_tx_disable(netdev);
+
+ txgbe_irq_disable(adapter);
+
+ txgbe_napi_disable_all(adapter);
+
+ adapter->flags2 &= ~(TXGBE_FLAG2_FDIR_REQUIRES_REINIT |
+ TXGBE_FLAG2_PF_RESET_REQUESTED |
+ TXGBE_FLAG2_DEV_RESET_REQUESTED |
+ TXGBE_FLAG2_GLOBAL_RESET_REQUESTED);
+ adapter->flags &= ~TXGBE_FLAG_NEED_LINK_UPDATE;
+
+ del_timer_sync(&adapter->service_timer);
+
+ if (adapter->num_vfs) {
+ /* Clear EITR Select mapping */
+ wr32(&adapter->hw, TXGBE_PX_ITRSEL, 0);
+
+ /* Mark all the VFs as inactive */
+ for (i = 0 ; i < adapter->num_vfs; i++)
+ adapter->vfinfo[i].clear_to_send = 0;
+
+ /* ping all the active vfs to let them know we are going down */
+
+ /* Disable all VFTE/VFRE TX/RX */
+ }
+
+ if (!(((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP) ||
+ ((hw->subsystem_device_id & TXGBE_WOL_MASK) == TXGBE_WOL_SUP))) {
+		/* disable MAC transmitter */
+ wr32m(hw, TXGBE_MAC_TX_CFG,
+ TXGBE_MAC_TX_CFG_TE, 0);
+ }
+ /* disable transmits in the hardware now that interrupts are off */
+ for (i = 0; i < adapter->num_tx_queues; i++) {
+ u8 reg_idx = adapter->tx_ring[i]->reg_idx;
+ wr32(hw, TXGBE_PX_TR_CFG(reg_idx),
+ TXGBE_PX_TR_CFG_SWFLSH);
+ }
+
+ /* Disable the Tx DMA engine */
+ wr32m(hw, TXGBE_TDM_CTL, TXGBE_TDM_CTL_TE, 0);
+}
+
+void txgbe_down(struct txgbe_adapter *adapter)
+{
+	struct txgbe_hw *hw = &adapter->hw;
+
+	txgbe_disable_device(adapter);
+
+ txgbe_reset(adapter);
+
+ if (!(((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP)))
+ /* power down the optics for SFP+ fiber */
+ TCALL(&adapter->hw, mac.ops.disable_tx_laser);
+
+ txgbe_clean_all_tx_rings(adapter);
+ txgbe_clean_all_rx_rings(adapter);
+}
+
+/**
+ * txgbe_init_shared_code - Initialize the shared code
+ * @hw: pointer to hardware structure
+ *
+ * This will assign function pointers and assign the MAC type and PHY code.
+ * Does not touch the hardware. This function must be called prior to any
+ * other function in the shared code. The txgbe_hw structure should be
+ * memset to 0 prior to calling this function. The following fields in
+ * hw structure should be filled in prior to calling this function:
+ * hw_addr, back, device_id, vendor_id, subsystem_device_id,
+ * subsystem_vendor_id, and revision_id
+ **/
+s32 txgbe_init_shared_code(struct txgbe_hw *hw)
+{
+ s32 status;
+
+ DEBUGFUNC("\n");
+
+ status = txgbe_init_ops(hw);
+ return status;
+}
+
+static const u32 def_rss_key[10] = {
+	0xE291D73D, 0x1805EC6C, 0x2A94B30D,
+	0xA54F2BEC, 0xEA49AF7C, 0xE214AD3D, 0xB855AABE,
+	0x6A3E67EA, 0x14364D17, 0x3BED200D
+};
+
+/**
+ * txgbe_sw_init - Initialize general software structures (struct txgbe_adapter)
+ * @adapter: board private structure to initialize
+ *
+ * txgbe_sw_init initializes the Adapter private data structure.
+ * Fields are initialized based on PCI device information and
+ * OS network device settings (MTU size).
+ **/
+static int txgbe_sw_init(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ struct pci_dev *pdev = adapter->pdev;
+ int err;
+ unsigned int fdir;
+
+ /* PCI config space info */
+ hw->vendor_id = pdev->vendor;
+ hw->device_id = pdev->device;
+ pci_read_config_byte(pdev, PCI_REVISION_ID, &hw->revision_id);
+ if (hw->revision_id == TXGBE_FAILED_READ_CFG_BYTE &&
+ txgbe_check_cfg_remove(hw, pdev)) {
+ e_err(probe, "read of revision id failed\n");
+ err = -ENODEV;
+ goto out;
+ }
+ hw->subsystem_vendor_id = pdev->subsystem_vendor;
+ hw->subsystem_device_id = pdev->subsystem_device;
+
+ pci_read_config_word(pdev, PCI_SUBSYSTEM_ID, &hw->subsystem_id);
+ if (hw->subsystem_id == TXGBE_FAILED_READ_CFG_WORD) {
+ e_err(probe, "read of subsystem id failed\n");
+ err = -ENODEV;
+ goto out;
+ }
+
+ err = txgbe_init_shared_code(hw);
+ if (err) {
+ e_err(probe, "init_shared_code failed: %d\n", err);
+ goto out;
+ }
+	adapter->mac_table = kcalloc(hw->mac.num_rar_entries,
+				     sizeof(struct txgbe_mac_addr),
+				     GFP_ATOMIC);
+ if (!adapter->mac_table) {
+ err = TXGBE_ERR_OUT_OF_MEM;
+ e_err(probe, "mac_table allocation failed: %d\n", err);
+ goto out;
+ }
+
+ memcpy(adapter->rss_key, def_rss_key, sizeof(def_rss_key));
+
+ /* Set common capability flags and settings */
+ adapter->flags2 |= TXGBE_FLAG2_RSC_CAPABLE;
+ fdir = min_t(int, TXGBE_MAX_FDIR_INDICES, num_online_cpus());
+ adapter->ring_feature[RING_F_FDIR].limit = fdir;
+ adapter->max_q_vectors = TXGBE_MAX_MSIX_Q_VECTORS_SAPPHIRE;
+
+ /* Set MAC specific capability flags and exceptions */
+ adapter->flags |= TXGBE_FLAGS_SP_INIT;
+ adapter->flags2 |= TXGBE_FLAG2_TEMP_SENSOR_CAPABLE;
+ hw->phy.smart_speed = txgbe_smart_speed_off;
+ adapter->flags2 |= TXGBE_FLAG2_EEE_CAPABLE;
+
+ /* n-tuple support exists, always init our spinlock */
+ spin_lock_init(&adapter->fdir_perfect_lock);
+
+ TCALL(hw, mbx.ops.init_params);
+
+ /* default flow control settings */
+ hw->fc.requested_mode = txgbe_fc_full;
+ hw->fc.current_mode = txgbe_fc_full; /* init for ethtool output */
+
+ adapter->last_lfc_mode = hw->fc.current_mode;
+ hw->fc.pause_time = TXGBE_DEFAULT_FCPAUSE;
+ hw->fc.send_xon = true;
+ hw->fc.disable_fc_autoneg = false;
+
+ /* set default ring sizes */
+ adapter->tx_ring_count = TXGBE_DEFAULT_TXD;
+ adapter->rx_ring_count = TXGBE_DEFAULT_RXD;
+
+ /* set default work limits */
+ adapter->tx_work_limit = TXGBE_DEFAULT_TX_WORK;
+ adapter->rx_work_limit = TXGBE_DEFAULT_RX_WORK;
+
+ adapter->tx_timeout_recovery_level = 0;
+
+ /* PF holds first pool slot */
+ adapter->num_vmdqs = 1;
+ set_bit(0, &adapter->fwd_bitmask);
+ set_bit(__TXGBE_DOWN, &adapter->state);
+out:
+ return err;
+}
+
+/**
+ * txgbe_setup_tx_resources - allocate Tx resources (Descriptors)
+ * @tx_ring: tx descriptor ring (for a specific queue) to setup
+ *
+ * Return 0 on success, negative on failure
+ **/
+int txgbe_setup_tx_resources(struct txgbe_ring *tx_ring)
+{
+ struct device *dev = tx_ring->dev;
+ int orig_node = dev_to_node(dev);
+ int numa_node = -1;
+ int size;
+
+ size = sizeof(struct txgbe_tx_buffer) * tx_ring->count;
+
+ if (tx_ring->q_vector)
+ numa_node = tx_ring->q_vector->numa_node;
+
+ tx_ring->tx_buffer_info = vzalloc_node(size, numa_node);
+ if (!tx_ring->tx_buffer_info)
+ tx_ring->tx_buffer_info = vzalloc(size);
+ if (!tx_ring->tx_buffer_info)
+ goto err;
+
+ /* round up to nearest 4K */
+ tx_ring->size = tx_ring->count * sizeof(union txgbe_tx_desc);
+ tx_ring->size = ALIGN(tx_ring->size, 4096);
+
+ set_dev_node(dev, numa_node);
+ tx_ring->desc = dma_alloc_coherent(dev,
+ tx_ring->size,
+ &tx_ring->dma,
+ GFP_KERNEL);
+ set_dev_node(dev, orig_node);
+ if (!tx_ring->desc)
+ tx_ring->desc = dma_alloc_coherent(dev, tx_ring->size,
+ &tx_ring->dma, GFP_KERNEL);
+ if (!tx_ring->desc)
+ goto err;
+
+ return 0;
+
+err:
+ vfree(tx_ring->tx_buffer_info);
+ tx_ring->tx_buffer_info = NULL;
+ dev_err(dev, "Unable to allocate memory for the Tx descriptor ring\n");
+ return -ENOMEM;
+}
+
+/**
+ * txgbe_setup_all_tx_resources - allocate all queues Tx resources
+ * @adapter: board private structure
+ *
+ * If this function returns with an error, then it's possible one or
+ * more of the rings is populated (while the rest are not). It is the
+ * callers duty to clean those orphaned rings.
+ *
+ * Return 0 on success, negative on failure
+ **/
+static int txgbe_setup_all_tx_resources(struct txgbe_adapter *adapter)
+{
+ int i, err = 0;
+
+ for (i = 0; i < adapter->num_tx_queues; i++) {
+ err = txgbe_setup_tx_resources(adapter->tx_ring[i]);
+ if (!err)
+ continue;
+
+ e_err(probe, "Allocation for Tx Queue %u failed\n", i);
+ goto err_setup_tx;
+ }
+
+ return 0;
+err_setup_tx:
+ /* rewind the index freeing the rings as we go */
+ while (i--)
+ txgbe_free_tx_resources(adapter->tx_ring[i]);
+ return err;
+}
+
+/**
+ * txgbe_setup_rx_resources - allocate Rx resources (Descriptors)
+ * @rx_ring: rx descriptor ring (for a specific queue) to setup
+ *
+ * Returns 0 on success, negative on failure
+ **/
+int txgbe_setup_rx_resources(struct txgbe_ring *rx_ring)
+{
+ struct device *dev = rx_ring->dev;
+ int orig_node = dev_to_node(dev);
+ int numa_node = -1;
+ int size;
+
+ size = sizeof(struct txgbe_rx_buffer) * rx_ring->count;
+
+ if (rx_ring->q_vector)
+ numa_node = rx_ring->q_vector->numa_node;
+
+ rx_ring->rx_buffer_info = vzalloc_node(size, numa_node);
+ if (!rx_ring->rx_buffer_info)
+ rx_ring->rx_buffer_info = vzalloc(size);
+ if (!rx_ring->rx_buffer_info)
+ goto err;
+
+ /* Round up to nearest 4K */
+ rx_ring->size = rx_ring->count * sizeof(union txgbe_rx_desc);
+ rx_ring->size = ALIGN(rx_ring->size, 4096);
+
+ set_dev_node(dev, numa_node);
+ rx_ring->desc = dma_alloc_coherent(dev,
+ rx_ring->size,
+ &rx_ring->dma,
+ GFP_KERNEL);
+ set_dev_node(dev, orig_node);
+ if (!rx_ring->desc)
+ rx_ring->desc = dma_alloc_coherent(dev, rx_ring->size,
+ &rx_ring->dma, GFP_KERNEL);
+ if (!rx_ring->desc)
+ goto err;
+
+ return 0;
+err:
+ vfree(rx_ring->rx_buffer_info);
+ rx_ring->rx_buffer_info = NULL;
+ dev_err(dev, "Unable to allocate memory for the Rx descriptor ring\n");
+ return -ENOMEM;
+}
+
+/**
+ * txgbe_setup_all_rx_resources - allocate all queues Rx resources
+ * @adapter: board private structure
+ *
+ * If this function returns with an error, then it's possible one or
+ * more of the rings is populated (while the rest are not). It is the
+ * callers duty to clean those orphaned rings.
+ *
+ * Return 0 on success, negative on failure
+ **/
+static int txgbe_setup_all_rx_resources(struct txgbe_adapter *adapter)
+{
+ int i, err = 0;
+
+ for (i = 0; i < adapter->num_rx_queues; i++) {
+ err = txgbe_setup_rx_resources(adapter->rx_ring[i]);
+		if (!err)
+			continue;
+
+ e_err(probe, "Allocation for Rx Queue %u failed\n", i);
+ goto err_setup_rx;
+ }
+
+ return 0;
+err_setup_rx:
+ /* rewind the index freeing the rings as we go */
+ while (i--)
+ txgbe_free_rx_resources(adapter->rx_ring[i]);
+ return err;
+}
+
+/**
+ * txgbe_setup_isb_resources - allocate interrupt status resources
+ * @adapter: board private structure
+ *
+ * Return 0 on success, negative on failure
+ **/
+static int txgbe_setup_isb_resources(struct txgbe_adapter *adapter)
+{
+ struct device *dev = pci_dev_to_dev(adapter->pdev);
+
+ adapter->isb_mem = dma_alloc_coherent(dev,
+ sizeof(u32) * TXGBE_ISB_MAX,
+ &adapter->isb_dma,
+ GFP_KERNEL);
+ if (!adapter->isb_mem)
+ return -ENOMEM;
+ memset(adapter->isb_mem, 0, sizeof(u32) * TXGBE_ISB_MAX);
+ return 0;
+}
+
+/**
+ * txgbe_free_isb_resources - free interrupt status block resources
+ * @adapter: board private structure
+ **/
+static void txgbe_free_isb_resources(struct txgbe_adapter *adapter)
+{
+ struct device *dev = pci_dev_to_dev(adapter->pdev);
+
+ dma_free_coherent(dev, sizeof(u32) * TXGBE_ISB_MAX,
+ adapter->isb_mem, adapter->isb_dma);
+ adapter->isb_mem = NULL;
+}
+
+/**
+ * txgbe_free_tx_resources - Free Tx Resources per Queue
+ * @tx_ring: Tx descriptor ring for a specific queue
+ *
+ * Free all transmit software resources
+ **/
+void txgbe_free_tx_resources(struct txgbe_ring *tx_ring)
+{
+ txgbe_clean_tx_ring(tx_ring);
+
+ vfree(tx_ring->tx_buffer_info);
+ tx_ring->tx_buffer_info = NULL;
+
+ /* if not set, then don't free */
+ if (!tx_ring->desc)
+ return;
+
+ dma_free_coherent(tx_ring->dev, tx_ring->size,
+ tx_ring->desc, tx_ring->dma);
+ tx_ring->desc = NULL;
+}
+
+/**
+ * txgbe_free_all_tx_resources - Free Tx Resources for All Queues
+ * @adapter: board private structure
+ *
+ * Free all transmit software resources
+ **/
+static void txgbe_free_all_tx_resources(struct txgbe_adapter *adapter)
+{
+ int i;
+
+ for (i = 0; i < adapter->num_tx_queues; i++)
+ txgbe_free_tx_resources(adapter->tx_ring[i]);
+}
+
+/**
+ * txgbe_free_rx_resources - Free Rx Resources
+ * @rx_ring: ring to clean the resources from
+ *
+ * Free all receive software resources
+ **/
+void txgbe_free_rx_resources(struct txgbe_ring *rx_ring)
+{
+ txgbe_clean_rx_ring(rx_ring);
+
+ vfree(rx_ring->rx_buffer_info);
+ rx_ring->rx_buffer_info = NULL;
+
+ /* if not set, then don't free */
+ if (!rx_ring->desc)
+ return;
+
+ dma_free_coherent(rx_ring->dev, rx_ring->size,
+ rx_ring->desc, rx_ring->dma);
+
+ rx_ring->desc = NULL;
+}
+
+/**
+ * txgbe_free_all_rx_resources - Free Rx Resources for All Queues
+ * @adapter: board private structure
+ *
+ * Free all receive software resources
+ **/
+static void txgbe_free_all_rx_resources(struct txgbe_adapter *adapter)
+{
+ int i;
+
+ for (i = 0; i < adapter->num_rx_queues; i++)
+ txgbe_free_rx_resources(adapter->rx_ring[i]);
+}
+
+/**
+ * txgbe_change_mtu - Change the Maximum Transfer Unit
+ * @netdev: network interface device structure
+ * @new_mtu: new value for maximum frame size
+ *
+ * Returns 0 on success, negative on failure
+ **/
+static int txgbe_change_mtu(struct net_device *netdev, int new_mtu)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+
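+	/* 68 is the minimum IPv4 MTU; 9414 assumes a 9432-byte maximum
+	 * frame minus ETH_HLEN and ETH_FCS_LEN
+	 */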
+ if ((new_mtu < 68) || (new_mtu > 9414))
+ return -EINVAL;
+
+ /*
+ * we cannot allow legacy VFs to enable their receive
+ * paths when MTU greater than 1500 is configured. So display a
+ * warning that legacy VFs will be disabled.
+ */
+ if ((adapter->flags & TXGBE_FLAG_SRIOV_ENABLED) &&
+ (new_mtu > ETH_DATA_LEN))
+ e_warn(probe, "Setting MTU > 1500 will disable legacy VFs\n");
+
+ e_info(probe, "changing MTU from %d to %d\n", netdev->mtu, new_mtu);
+
+ /* must set new MTU before calling down or up */
+ netdev->mtu = new_mtu;
+
+ if (netif_running(netdev))
+ txgbe_reinit_locked(adapter);
+
+ return 0;
+}
+
+/**
+ * txgbe_open - Called when a network interface is made active
+ * @netdev: network interface device structure
+ *
+ * Returns 0 on success, negative value on failure
+ *
+ * The open entry point is called when a network interface is made
+ * active by the system (IFF_UP). At this point all resources needed
+ * for transmit and receive operations are allocated, the interrupt
+ * handler is registered with the OS, the watchdog timer is started,
+ * and the stack is notified that the interface is ready.
+ **/
+int txgbe_open(struct net_device *netdev)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+ int err;
+
+	/* special for backplane flow */
+ adapter->flags2 &= ~TXGBE_FLAG2_KR_PRO_DOWN;
+
+ /* disallow open during test */
+ if (test_bit(__TXGBE_TESTING, &adapter->state))
+ return -EBUSY;
+
+ netif_carrier_off(netdev);
+
+ /* allocate transmit descriptors */
+ err = txgbe_setup_all_tx_resources(adapter);
+ if (err)
+ goto err_setup_tx;
+
+ /* allocate receive descriptors */
+ err = txgbe_setup_all_rx_resources(adapter);
+ if (err)
+ goto err_setup_rx;
+
+ err = txgbe_setup_isb_resources(adapter);
+ if (err)
+ goto err_req_isb;
+
+ txgbe_configure(adapter);
+
+ err = txgbe_request_irq(adapter);
+ if (err)
+ goto err_req_irq;
+
+ /* Notify the stack of the actual queue counts. */
+ err = netif_set_real_num_tx_queues(netdev, adapter->num_vmdqs > 1
+ ? adapter->queues_per_pool
+ : adapter->num_tx_queues);
+ if (err)
+ goto err_set_queues;
+
+ err = netif_set_real_num_rx_queues(netdev, adapter->num_vmdqs > 1
+ ? adapter->queues_per_pool
+ : adapter->num_rx_queues);
+ if (err)
+ goto err_set_queues;
+
+ txgbe_ptp_init(adapter);
+
+ txgbe_up_complete(adapter);
+
+ txgbe_clear_vxlan_port(adapter);
+ udp_tunnel_get_rx_info(netdev);
+
+ return 0;
+
+err_set_queues:
+ txgbe_free_irq(adapter);
+err_req_irq:
+ txgbe_free_isb_resources(adapter);
+err_req_isb:
+ txgbe_free_all_rx_resources(adapter);
+
+err_setup_rx:
+ txgbe_free_all_tx_resources(adapter);
+err_setup_tx:
+ txgbe_reset(adapter);
+
+ return err;
+}
+
+/**
+ * txgbe_close_suspend - actions necessary to both suspend and close flows
+ * @adapter: the private adapter struct
+ *
+ * This function should contain the necessary work common to both suspending
+ * and closing of the device.
+ */
+static void txgbe_close_suspend(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+
+ txgbe_ptp_suspend(adapter);
+
+ txgbe_disable_device(adapter);
+ if (!((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP))
+ TCALL(hw, mac.ops.disable_tx_laser);
+ txgbe_clean_all_tx_rings(adapter);
+ txgbe_clean_all_rx_rings(adapter);
+
+ txgbe_free_irq(adapter);
+
+ txgbe_free_isb_resources(adapter);
+ txgbe_free_all_rx_resources(adapter);
+ txgbe_free_all_tx_resources(adapter);
+}
+
+/**
+ * txgbe_close - Disables a network interface
+ * @netdev: network interface device structure
+ *
+ * Returns 0, this is not allowed to fail
+ *
+ * The close entry point is called when an interface is de-activated
+ * by the OS. The hardware is still under the drivers control, but
+ * needs to be disabled. A global MAC reset is issued to stop the
+ * hardware, and all transmit and receive resources are freed.
+ **/
+int txgbe_close(struct net_device *netdev)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+ struct txgbe_hw *hw = &adapter->hw;
+
+ if (hw->subsystem_device_id == TXGBE_ID_WX1820_KR_KX_KX4 ||
+ hw->subsystem_device_id == TXGBE_ID_SP1000_KR_KX_KX4) {
+ txgbe_bp_close_protect(adapter);
+ }
+
+ txgbe_ptp_stop(adapter);
+
+ txgbe_down(adapter);
+ txgbe_free_irq(adapter);
+
+ txgbe_free_isb_resources(adapter);
+ txgbe_free_all_rx_resources(adapter);
+ txgbe_free_all_tx_resources(adapter);
+
+ txgbe_fdir_filter_exit(adapter);
+
+ txgbe_release_hw_control(adapter);
+
+ return 0;
+}
+
+#ifdef CONFIG_PM
+static int txgbe_resume(struct pci_dev *pdev)
+{
+ struct txgbe_adapter *adapter;
+ struct net_device *netdev;
+	int err;
+
+ adapter = pci_get_drvdata(pdev);
+ netdev = adapter->netdev;
+ adapter->hw.hw_addr = adapter->io_addr;
+ pci_set_power_state(pdev, PCI_D0);
+ pci_restore_state(pdev);
+ /*
+ * pci_restore_state clears dev->state_saved so call
+ * pci_save_state to restore it.
+ */
+ pci_save_state(pdev);
+
+ err = pci_enable_device_mem(pdev);
+ if (err) {
+ e_dev_err("Cannot enable PCI device from suspend\n");
+ return err;
+ }
+ smp_mb__before_atomic();
+ clear_bit(__TXGBE_DISABLED, &adapter->state);
+ pci_set_master(pdev);
+
+ pci_wake_from_d3(pdev, false);
+
+ txgbe_reset(adapter);
+
+ rtnl_lock();
+
+ err = txgbe_init_interrupt_scheme(adapter);
+ if (!err && netif_running(netdev))
+ err = txgbe_open(netdev);
+
+ rtnl_unlock();
+
+ if (err)
+ return err;
+
+ netif_device_attach(netdev);
+
+ return 0;
+}
+#endif /* CONFIG_PM */
+/*
+ * __txgbe_shutdown is not used when power management is disabled on
+ * older kernels (<2.6.12), which causes a compile warning/error
+ * because it is defined and not used.
+ */
+static int __txgbe_shutdown(struct pci_dev *pdev, bool *enable_wake)
+{
+ struct txgbe_adapter *adapter = pci_get_drvdata(pdev);
+ struct net_device *netdev = adapter->netdev;
+ struct txgbe_hw *hw = &adapter->hw;
+ u32 wufc = adapter->wol;
+#ifdef CONFIG_PM
+ int retval = 0;
+#endif
+
+ netif_device_detach(netdev);
+
+ rtnl_lock();
+ if (netif_running(netdev))
+ txgbe_close_suspend(adapter);
+ rtnl_unlock();
+
+ txgbe_clear_interrupt_scheme(adapter);
+
+#ifdef CONFIG_PM
+ retval = pci_save_state(pdev);
+ if (retval)
+ return retval;
+#endif
+
+	/* this won't stop link if manageability or WoL is enabled */
+ txgbe_stop_mac_link_on_d3(hw);
+
+ if (wufc) {
+ txgbe_set_rx_mode(netdev);
+ txgbe_configure_rx(adapter);
+ /* enable the optics for SFP+ fiber as we can WoL */
+ TCALL(hw, mac.ops.enable_tx_laser);
+
+ /* turn on all-multi mode if wake on multicast is enabled */
+ if (wufc & TXGBE_PSR_WKUP_CTL_MC) {
+ wr32m(hw, TXGBE_PSR_CTL,
+ TXGBE_PSR_CTL_MPE, TXGBE_PSR_CTL_MPE);
+ }
+
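+		/* stop bus-master DMA before arming the wake-up filter */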
+ pci_clear_master(adapter->pdev);
+ wr32(hw, TXGBE_PSR_WKUP_CTL, wufc);
+ } else {
+ wr32(hw, TXGBE_PSR_WKUP_CTL, 0);
+ }
+
+ pci_wake_from_d3(pdev, !!wufc);
+
+ *enable_wake = !!wufc;
+ txgbe_release_hw_control(adapter);
+
+ if (!test_and_set_bit(__TXGBE_DISABLED, &adapter->state))
+ pci_disable_device(pdev);
+
+ return 0;
+}
+
+#ifdef CONFIG_PM
+static int txgbe_suspend(struct pci_dev *pdev,
+ pm_message_t __always_unused state)
+{
+ int retval;
+ bool wake;
+
+ retval = __txgbe_shutdown(pdev, &wake);
+ if (retval)
+ return retval;
+
+ if (wake) {
+ pci_prepare_to_sleep(pdev);
+ } else {
+ pci_wake_from_d3(pdev, false);
+ pci_set_power_state(pdev, PCI_D3hot);
+ }
+
+ return 0;
+}
+#endif /* CONFIG_PM */
+
+static void txgbe_shutdown(struct pci_dev *pdev)
+{
+ bool wake;
+
+ __txgbe_shutdown(pdev, &wake);
+
+ if (system_state == SYSTEM_POWER_OFF) {
+ pci_wake_from_d3(pdev, wake);
+ pci_set_power_state(pdev, PCI_D3hot);
+ }
+}
+
+/**
+ * txgbe_get_stats64 - Get System Network Statistics
+ * @netdev: network interface device structure
+ * @stats: storage space for 64bit statistics
+ *
+ * Returns 64bit statistics, for use in the ndo_get_stats64 callback. This
+ * function replaces txgbe_get_stats for kernels which support it.
+ */
+static void txgbe_get_stats64(struct net_device *netdev,
+ struct rtnl_link_stats64 *stats)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+ int i;
+
+ rcu_read_lock();
+ for (i = 0; i < adapter->num_rx_queues; i++) {
+ struct txgbe_ring *ring = READ_ONCE(adapter->rx_ring[i]);
+ u64 bytes, packets;
+ unsigned int start;
+
+ if (ring) {
+ do {
+ start = u64_stats_fetch_begin_irq(&ring->syncp);
+ packets = ring->stats.packets;
+ bytes = ring->stats.bytes;
+ } while (u64_stats_fetch_retry_irq(&ring->syncp,
+ start));
+ stats->rx_packets += packets;
+ stats->rx_bytes += bytes;
+ }
+ }
+
+ for (i = 0; i < adapter->num_tx_queues; i++) {
+ struct txgbe_ring *ring = READ_ONCE(adapter->tx_ring[i]);
+ u64 bytes, packets;
+ unsigned int start;
+
+ if (ring) {
+ do {
+ start = u64_stats_fetch_begin_irq(&ring->syncp);
+ packets = ring->stats.packets;
+ bytes = ring->stats.bytes;
+ } while (u64_stats_fetch_retry_irq(&ring->syncp,
+ start));
+ stats->tx_packets += packets;
+ stats->tx_bytes += bytes;
+ }
+ }
+ rcu_read_unlock();
+ /* following stats updated by txgbe_watchdog_task() */
+ stats->multicast = netdev->stats.multicast;
+ stats->rx_errors = netdev->stats.rx_errors;
+ stats->rx_length_errors = netdev->stats.rx_length_errors;
+ stats->rx_crc_errors = netdev->stats.rx_crc_errors;
+ stats->rx_missed_errors = netdev->stats.rx_missed_errors;
+}
+
+/**
+ * txgbe_update_stats - Update the board statistics counters.
+ * @adapter: board private structure
+ **/
+void txgbe_update_stats(struct txgbe_adapter *adapter)
+{
+ struct net_device_stats *net_stats = &adapter->netdev->stats;
+ struct txgbe_hw *hw = &adapter->hw;
+ struct txgbe_hw_stats *hwstats = &adapter->stats;
+ u64 total_mpc = 0;
+ u32 i, missed_rx = 0, mpc, bprc, lxon, lxoff;
+ u64 non_eop_descs = 0, restart_queue = 0, tx_busy = 0;
+ u64 alloc_rx_page_failed = 0, alloc_rx_buff_failed = 0;
+ u64 bytes = 0, packets = 0, hw_csum_rx_error = 0;
+ u64 hw_csum_rx_good = 0;
+
+ if (test_bit(__TXGBE_DOWN, &adapter->state) ||
+ test_bit(__TXGBE_RESETTING, &adapter->state))
+ return;
+
+ if (adapter->flags2 & TXGBE_FLAG2_RSC_ENABLED) {
+ u64 rsc_count = 0;
+ u64 rsc_flush = 0;
+ for (i = 0; i < adapter->num_rx_queues; i++) {
+ rsc_count += adapter->rx_ring[i]->rx_stats.rsc_count;
+ rsc_flush += adapter->rx_ring[i]->rx_stats.rsc_flush;
+ }
+ adapter->rsc_total_count = rsc_count;
+ adapter->rsc_total_flush = rsc_flush;
+ }
+
+ for (i = 0; i < adapter->num_rx_queues; i++) {
+ struct txgbe_ring *rx_ring = adapter->rx_ring[i];
+ non_eop_descs += rx_ring->rx_stats.non_eop_descs;
+ alloc_rx_page_failed += rx_ring->rx_stats.alloc_rx_page_failed;
+ alloc_rx_buff_failed += rx_ring->rx_stats.alloc_rx_buff_failed;
+ hw_csum_rx_error += rx_ring->rx_stats.csum_err;
+ hw_csum_rx_good += rx_ring->rx_stats.csum_good_cnt;
+ bytes += rx_ring->stats.bytes;
+ packets += rx_ring->stats.packets;
+ }
+ adapter->non_eop_descs = non_eop_descs;
+ adapter->alloc_rx_page_failed = alloc_rx_page_failed;
+ adapter->alloc_rx_buff_failed = alloc_rx_buff_failed;
+ adapter->hw_csum_rx_error = hw_csum_rx_error;
+ adapter->hw_csum_rx_good = hw_csum_rx_good;
+ net_stats->rx_bytes = bytes;
+ net_stats->rx_packets = packets;
+
+ bytes = 0;
+ packets = 0;
+ /* gather some stats to the adapter struct that are per queue */
+ for (i = 0; i < adapter->num_tx_queues; i++) {
+ struct txgbe_ring *tx_ring = adapter->tx_ring[i];
+ restart_queue += tx_ring->tx_stats.restart_queue;
+ tx_busy += tx_ring->tx_stats.tx_busy;
+ bytes += tx_ring->stats.bytes;
+ packets += tx_ring->stats.packets;
+ }
+ adapter->restart_queue = restart_queue;
+ adapter->tx_busy = tx_busy;
+ net_stats->tx_bytes = bytes;
+ net_stats->tx_packets = packets;
+
+ hwstats->crcerrs += rd32(hw, TXGBE_RX_CRC_ERROR_FRAMES_LOW);
+
+ /* 8 register reads */
+ for (i = 0; i < 8; i++) {
+ /* for packet buffers not used, the register should read 0 */
+ mpc = rd32(hw, TXGBE_RDB_MPCNT(i));
+ missed_rx += mpc;
+ hwstats->mpc[i] += mpc;
+ total_mpc += hwstats->mpc[i];
+ hwstats->pxontxc[i] += rd32(hw, TXGBE_RDB_PXONTXC(i));
+ hwstats->pxofftxc[i] +=
+ rd32(hw, TXGBE_RDB_PXOFFTXC(i));
+ hwstats->pxonrxc[i] += rd32(hw, TXGBE_MAC_PXONRXC(i));
+ }
+
+ hwstats->gprc += rd32(hw, TXGBE_PX_GPRC);
+
+ txgbe_update_xoff_received(adapter);
+
+ hwstats->o2bgptc += rd32(hw, TXGBE_TDM_OS2BMC_CNT);
+ if (txgbe_check_mng_access(&adapter->hw)) {
+ hwstats->o2bspc += rd32(hw, TXGBE_MNG_OS2BMC_CNT);
+ hwstats->b2ospc += rd32(hw, TXGBE_MNG_BMC2OS_CNT);
+ }
+ hwstats->b2ogprc += rd32(hw, TXGBE_RDM_BMC2OS_CNT);
+ hwstats->gorc += rd32(hw, TXGBE_PX_GORC_LSB);
+ hwstats->gorc += (u64)rd32(hw, TXGBE_PX_GORC_MSB) << 32;
+
+ hwstats->gotc += rd32(hw, TXGBE_PX_GOTC_LSB);
+ hwstats->gotc += (u64)rd32(hw, TXGBE_PX_GOTC_MSB) << 32;
+
+ adapter->hw_rx_no_dma_resources +=
+ rd32(hw, TXGBE_RDM_DRP_PKT);
+ hwstats->lxonrxc += rd32(hw, TXGBE_MAC_LXONRXC);
+
+ hwstats->fdirmatch += rd32(hw, TXGBE_RDB_FDIR_MATCH);
+ hwstats->fdirmiss += rd32(hw, TXGBE_RDB_FDIR_MISS);
+
+ bprc = rd32(hw, TXGBE_RX_BC_FRAMES_GOOD_LOW);
+ hwstats->bprc += bprc;
+ hwstats->mprc = 0;
+
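+	/* mprc is a snapshot rebuilt from the per-queue counters */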
+ for (i = 0; i < 128; i++)
+ hwstats->mprc += rd32(hw, TXGBE_PX_MPRC(i));
+
+ hwstats->roc += rd32(hw, TXGBE_RX_OVERSIZE_FRAMES_GOOD);
+ hwstats->rlec += rd32(hw, TXGBE_RX_LEN_ERROR_FRAMES_LOW);
+ lxon = rd32(hw, TXGBE_RDB_LXONTXC);
+ hwstats->lxontxc += lxon;
+ lxoff = rd32(hw, TXGBE_RDB_LXOFFTXC);
+ hwstats->lxofftxc += lxoff;
+
+ hwstats->gptc += rd32(hw, TXGBE_PX_GPTC);
+ hwstats->mptc += rd32(hw, TXGBE_TX_MC_FRAMES_GOOD_LOW);
+ hwstats->ruc += rd32(hw, TXGBE_RX_UNDERSIZE_FRAMES_GOOD);
+ hwstats->tpr += rd32(hw, TXGBE_RX_FRAME_CNT_GOOD_BAD_LOW);
+ hwstats->bptc += rd32(hw, TXGBE_TX_BC_FRAMES_GOOD_LOW);
+ /* Fill out the OS statistics structure */
+ net_stats->multicast = hwstats->mprc;
+
+ /* Rx Errors */
+ net_stats->rx_errors = hwstats->crcerrs +
+ hwstats->rlec;
+ net_stats->rx_dropped = 0;
+ net_stats->rx_length_errors = hwstats->rlec;
+ net_stats->rx_crc_errors = hwstats->crcerrs;
+ net_stats->rx_missed_errors = total_mpc;
+
+ /*
+ * VF Stats Collection - skip while resetting because these
+ * are not clear on read and otherwise you'll sometimes get
+ * crazy values.
+ */
+ if (!test_bit(__TXGBE_RESETTING, &adapter->state)) {
+ for (i = 0; i < adapter->num_vfs; i++) {
+			UPDATE_VF_COUNTER_32bit(TXGBE_VX_GPRC(i),
+					adapter->vfinfo[i].last_vfstats.gprc,
+					adapter->vfinfo[i].vfstats.gprc);
+			UPDATE_VF_COUNTER_32bit(TXGBE_VX_GPTC(i),
+					adapter->vfinfo[i].last_vfstats.gptc,
+					adapter->vfinfo[i].vfstats.gptc);
+			UPDATE_VF_COUNTER_36bit(TXGBE_VX_GORC_LSB(i),
+					TXGBE_VX_GORC_MSB(i),
+					adapter->vfinfo[i].last_vfstats.gorc,
+					adapter->vfinfo[i].vfstats.gorc);
+			UPDATE_VF_COUNTER_36bit(TXGBE_VX_GOTC_LSB(i),
+					TXGBE_VX_GOTC_MSB(i),
+					adapter->vfinfo[i].last_vfstats.gotc,
+					adapter->vfinfo[i].vfstats.gotc);
+			UPDATE_VF_COUNTER_32bit(TXGBE_VX_MPRC(i),
+					adapter->vfinfo[i].last_vfstats.mprc,
+					adapter->vfinfo[i].vfstats.mprc);
+ }
+ }
+}
+
+/**
+ * txgbe_fdir_reinit_subtask - worker thread to reinit FDIR filter table
+ * @adapter: pointer to the device adapter structure
+ **/
+static void txgbe_fdir_reinit_subtask(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ int i;
+
+ if (!(adapter->flags2 & TXGBE_FLAG2_FDIR_REQUIRES_REINIT))
+ return;
+
+ adapter->flags2 &= ~TXGBE_FLAG2_FDIR_REQUIRES_REINIT;
+
+ /* if interface is down do nothing */
+ if (test_bit(__TXGBE_DOWN, &adapter->state))
+ return;
+
+ /* do nothing if we are not using signature filters */
+ if (!(adapter->flags & TXGBE_FLAG_FDIR_HASH_CAPABLE))
+ return;
+
+ adapter->fdir_overflow++;
+
+ if (txgbe_reinit_fdir_tables(hw) == 0) {
+ for (i = 0; i < adapter->num_tx_queues; i++)
+ set_bit(__TXGBE_TX_FDIR_INIT_DONE,
+ &(adapter->tx_ring[i]->state));
+ /* re-enable flow director interrupts */
+ wr32m(hw, TXGBE_PX_MISC_IEN,
+ TXGBE_PX_MISC_IEN_FLOW_DIR, TXGBE_PX_MISC_IEN_FLOW_DIR);
+ } else {
+ e_err(probe, "failed to finish FDIR re-initialization, "
+ "ignored adding FDIR ATR filters\n");
+ }
+}
+
+/**
+ * txgbe_check_hang_subtask - check for hung queues and dropped interrupts
+ * @adapter: pointer to the device adapter structure
+ *
+ * This function serves two purposes. First it strobes the interrupt lines
+ * in order to make certain interrupts are occurring. Secondly it sets the
+ * bits needed to check for TX hangs. As a result we should immediately
+ * determine if a hang has occurred.
+ */
+static void txgbe_check_hang_subtask(struct txgbe_adapter *adapter)
+{
+ int i;
+
+ /* If we're down or resetting, just bail */
+ if (test_bit(__TXGBE_DOWN, &adapter->state) ||
+ test_bit(__TXGBE_REMOVING, &adapter->state) ||
+ test_bit(__TXGBE_RESETTING, &adapter->state))
+ return;
+
+ /* Force detection of hung controller */
+ if (netif_carrier_ok(adapter->netdev)) {
+ for (i = 0; i < adapter->num_tx_queues; i++)
+ set_check_for_tx_hang(adapter->tx_ring[i]);
+ }
+}
+
+/**
+ * txgbe_watchdog_update_link - update the link status
+ * @adapter: pointer to the device adapter structure
+ **/
+static void txgbe_watchdog_update_link(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ u32 link_speed = adapter->link_speed;
+ bool link_up = adapter->link_up;
+ u32 reg;
+	u32 i;
+
+ if (!(adapter->flags & TXGBE_FLAG_NEED_LINK_UPDATE))
+ return;
+
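+	/* optimistic defaults; check_link overwrites both */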
+ link_speed = TXGBE_LINK_SPEED_10GB_FULL;
+ link_up = true;
+ TCALL(hw, mac.ops.check_link, &link_speed, &link_up, false);
+
+ if (link_up || time_after(jiffies, (adapter->link_check_timeout +
+ TXGBE_TRY_LINK_TIMEOUT))) {
+ adapter->flags &= ~TXGBE_FLAG_NEED_LINK_UPDATE;
+ }
+
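+	/* re-read the link a few times to settle transient state */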
+ for (i = 0; i < 3; i++) {
+ TCALL(hw, mac.ops.check_link, &link_speed, &link_up, false);
+ msleep(1);
+ }
+
+ if (link_up && !((adapter->flags & TXGBE_FLAG_DCB_ENABLED))) {
+ TCALL(hw, mac.ops.fc_enable);
+ txgbe_set_rx_drop_en(adapter);
+ }
+
+ if (link_up) {
+ adapter->last_rx_ptp_check = jiffies;
+
+ if (test_bit(__TXGBE_PTP_RUNNING, &adapter->state))
+ txgbe_ptp_start_cyclecounter(adapter);
+
+ if (link_speed & TXGBE_LINK_SPEED_10GB_FULL) {
+ wr32(hw, TXGBE_MAC_TX_CFG,
+ (rd32(hw, TXGBE_MAC_TX_CFG) &
+ ~TXGBE_MAC_TX_CFG_SPEED_MASK) | TXGBE_MAC_TX_CFG_TE |
+ TXGBE_MAC_TX_CFG_SPEED_10G);
+ } else if (link_speed & (TXGBE_LINK_SPEED_1GB_FULL |
+ TXGBE_LINK_SPEED_100_FULL | TXGBE_LINK_SPEED_10_FULL)) {
+ wr32(hw, TXGBE_MAC_TX_CFG,
+ (rd32(hw, TXGBE_MAC_TX_CFG) &
+ ~TXGBE_MAC_TX_CFG_SPEED_MASK) | TXGBE_MAC_TX_CFG_TE |
+ TXGBE_MAC_TX_CFG_SPEED_1G);
+ }
+
+		/* Reconfigure MAC RX */
+ reg = rd32(hw, TXGBE_MAC_RX_CFG);
+ wr32(hw, TXGBE_MAC_RX_CFG, reg);
+ wr32(hw, TXGBE_MAC_PKT_FLT, TXGBE_MAC_PKT_FLT_PR);
+ reg = rd32(hw, TXGBE_MAC_WDG_TIMEOUT);
+ wr32(hw, TXGBE_MAC_WDG_TIMEOUT, reg);
+ }
+
+ adapter->link_up = link_up;
+ adapter->link_speed = link_speed;
+ if (hw->mac.ops.dmac_config && hw->mac.dmac_config.watchdog_timer) {
+ u8 num_tcs = netdev_get_num_tc(adapter->netdev);
+
+ if (hw->mac.dmac_config.link_speed != link_speed ||
+ hw->mac.dmac_config.num_tcs != num_tcs) {
+ hw->mac.dmac_config.link_speed = link_speed;
+ hw->mac.dmac_config.num_tcs = num_tcs;
+ TCALL(hw, mac.ops.dmac_config);
+ }
+ }
+}
+
+static void txgbe_update_default_up(struct txgbe_adapter *adapter)
+{
+ u8 up = 0;
+
+ adapter->default_up = up;
+}
+
+/**
+ * txgbe_watchdog_link_is_up - update netif_carrier status and
+ * print link up message
+ * @adapter: pointer to the device adapter structure
+ **/
+static void txgbe_watchdog_link_is_up(struct txgbe_adapter *adapter)
+{
+ struct net_device *netdev = adapter->netdev;
+ struct txgbe_hw *hw = &adapter->hw;
+ u32 link_speed = adapter->link_speed;
+ bool flow_rx, flow_tx;
+
+ /* only continue if link was previously down */
+ if (netif_carrier_ok(netdev))
+ return;
+
+ adapter->flags2 &= ~TXGBE_FLAG2_SEARCH_FOR_SFP;
+
+ /* flow_rx, flow_tx report link flow control status */
+ flow_rx = (rd32(hw, TXGBE_MAC_RX_FLOW_CTRL) & 0x101) == 0x1;
+ flow_tx = !!(TXGBE_RDB_RFCC_RFCE_802_3X &
+ rd32(hw, TXGBE_RDB_RFCC));
+
+ e_info(drv, "NIC Link is Up %s, Flow Control: %s\n",
+ (link_speed == TXGBE_LINK_SPEED_10GB_FULL ?
+ "10 Gbps" :
+ (link_speed == TXGBE_LINK_SPEED_1GB_FULL ?
+ "1 Gbps" :
+ (link_speed == TXGBE_LINK_SPEED_100_FULL ?
+ "100 Mbps" :
+ (link_speed == TXGBE_LINK_SPEED_10_FULL ?
+ "10 Mbps" :
+ "unknown speed")))),
+ ((flow_rx && flow_tx) ? "RX/TX" :
+ (flow_rx ? "RX" :
+ (flow_tx ? "TX" : "None"))));
+
+ netif_carrier_on(netdev);
+ netif_tx_wake_all_queues(netdev);
+
+ /* update the default user priority for VFs */
+ txgbe_update_default_up(adapter);
+
+ /* ping all the active vfs to let them know link has changed */
+}
+
+/**
+ * txgbe_watchdog_link_is_down - update netif_carrier status and
+ * print link down message
+ * @adapter: pointer to the adapter structure
+ **/
+static void txgbe_watchdog_link_is_down(struct txgbe_adapter *adapter)
+{
+ struct net_device *netdev = adapter->netdev;
+ struct txgbe_hw *hw = &adapter->hw;
+
+ adapter->link_up = false;
+ adapter->link_speed = 0;
+
+ /* only continue if link was up previously */
+ if (!netif_carrier_ok(netdev))
+ return;
+
+ if (hw->subsystem_device_id == TXGBE_ID_WX1820_KR_KX_KX4 ||
+ hw->subsystem_device_id == TXGBE_ID_SP1000_KR_KX_KX4) {
+ txgbe_bp_down_event(adapter);
+ }
+
+ if (test_bit(__TXGBE_PTP_RUNNING, &adapter->state))
+ txgbe_ptp_start_cyclecounter(adapter);
+
+ e_info(drv, "NIC Link is Down\n");
+ netif_carrier_off(netdev);
+ netif_tx_stop_all_queues(netdev);
+
+ /* ping all the active vfs to let them know link has changed */
+}
+
+static bool txgbe_ring_tx_pending(struct txgbe_adapter *adapter)
+{
+ int i;
+
+ for (i = 0; i < adapter->num_tx_queues; i++) {
+ struct txgbe_ring *tx_ring = adapter->tx_ring[i];
+
+ if (tx_ring->next_to_use != tx_ring->next_to_clean)
+ return true;
+ }
+
+ return false;
+}
+
+static bool txgbe_vf_tx_pending(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ struct txgbe_ring_feature *vmdq = &adapter->ring_feature[RING_F_VMDQ];
+ u32 q_per_pool = __ALIGN_MASK(1, ~vmdq->mask);
+ u32 i, j;
+
+ if (!adapter->num_vfs)
+ return false;
+
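+ /* a VF queue still has descriptors queued whenever its ring
+ * head and tail pointers differ
+ */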
+ for (i = 0; i < adapter->num_vfs; i++) {
+ for (j = 0; j < q_per_pool; j++) {
+ u32 h, t;
+
+ h = rd32(hw,
+ TXGBE_PX_TR_RPn(q_per_pool, i, j));
+ t = rd32(hw,
+ TXGBE_PX_TR_WPn(q_per_pool, i, j));
+
+ if (h != t)
+ return true;
+ }
+ }
+
+ return false;
+}
+
+/**
+ * txgbe_watchdog_flush_tx - flush queues on link down
+ * @adapter: pointer to the device adapter structure
+ **/
+static void txgbe_watchdog_flush_tx(struct txgbe_adapter *adapter)
+{
+ if (!netif_carrier_ok(adapter->netdev)) {
+ if (txgbe_ring_tx_pending(adapter) ||
+ txgbe_vf_tx_pending(adapter)) {
+ /* We've lost link, so the controller stops DMA,
+ * but we've got queued Tx work that's never going
+ * to get done, so reset controller to flush Tx.
+ * (Do the reset outside of interrupt context).
+ */
+ e_warn(drv, "initiating reset due to lost link with pending Tx work\n");
+ adapter->flags2 |= TXGBE_FLAG2_PF_RESET_REQUESTED;
+ }
+ }
+}
+
+/**
+ * txgbe_watchdog_subtask - check and bring link up
+ * @adapter: pointer to the device adapter structure
+ **/
+static void txgbe_watchdog_subtask(struct txgbe_adapter *adapter)
+{
+ u32 value = 0;
+ struct txgbe_hw *hw = &adapter->hw;
+
+ /* if interface is down do nothing */
+ if (test_bit(__TXGBE_DOWN, &adapter->state) ||
+ test_bit(__TXGBE_REMOVING, &adapter->state) ||
+ test_bit(__TXGBE_RESETTING, &adapter->state))
+ return;
+
+ if (hw->subsystem_device_id == TXGBE_ID_WX1820_KR_KX_KX4 ||
+ hw->subsystem_device_id == TXGBE_ID_SP1000_KR_KX_KX4) {
+ txgbe_bp_watchdog_event(adapter);
+ }
+
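+ /* in bonding link-check mode, poll an internal status register
+ * (offset 0x14404); bit 0 set is taken as a link change and
+ * forces a link update below
+ */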
+ if (BOND_CHECK_LINK_MODE == 1) {
+ value = rd32(hw, 0x14404);
+ value = value & 0x1;
+ if (value == 1)
+ adapter->flags |= TXGBE_FLAG_NEED_LINK_UPDATE;
+ }
+ if (!(adapter->flags2 & TXGBE_FLAG2_LINK_DOWN))
+ txgbe_watchdog_update_link(adapter);
+
+ if (adapter->link_up)
+ txgbe_watchdog_link_is_up(adapter);
+ else
+ txgbe_watchdog_link_is_down(adapter);
+
+ txgbe_update_stats(adapter);
+
+ txgbe_watchdog_flush_tx(adapter);
+}
+
+/**
+ * txgbe_sfp_detection_subtask - poll for SFP+ cable
+ * @adapter: the txgbe adapter structure
+ **/
+static void txgbe_sfp_detection_subtask(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ struct txgbe_mac_info *mac = &hw->mac;
+ s32 err;
+
+ /* not searching for SFP so there is nothing to do here */
+ if (!(adapter->flags2 & TXGBE_FLAG2_SEARCH_FOR_SFP) &&
+ !(adapter->flags2 & TXGBE_FLAG2_SFP_NEEDS_RESET))
+ return;
+
+ if (adapter->sfp_poll_time &&
+ time_after(adapter->sfp_poll_time, jiffies))
+ return; /* If not yet time to poll for SFP */
+
+ /* someone else is in init, wait until next service event */
+ if (test_and_set_bit(__TXGBE_IN_SFP_INIT, &adapter->state))
+ return;
+
+ adapter->sfp_poll_time = jiffies + TXGBE_SFP_POLL_JIFFIES - 1;
+
+ err = TCALL(hw, phy.ops.identify_sfp);
+ if (err == TXGBE_ERR_SFP_NOT_SUPPORTED)
+ goto sfp_out;
+
+ if (err == TXGBE_ERR_SFP_NOT_PRESENT) {
+ /* If no cable is present, then we need to reset
+ * the next time we find a good cable. */
+ adapter->flags2 |= TXGBE_FLAG2_SFP_NEEDS_RESET;
+ }
+
+ /* exit on error */
+ if (err)
+ goto sfp_out;
+
+ /* exit if reset not needed */
+ if (!(adapter->flags2 & TXGBE_FLAG2_SFP_NEEDS_RESET))
+ goto sfp_out;
+
+ adapter->flags2 &= ~TXGBE_FLAG2_SFP_NEEDS_RESET;
+
+ if (hw->phy.multispeed_fiber) {
+ /* Set up dual speed SFP+ support */
+ mac->ops.setup_link = txgbe_setup_mac_link_multispeed_fiber;
+ mac->ops.setup_mac_link = txgbe_setup_mac_link;
+ mac->ops.set_rate_select_speed =
+ txgbe_set_hard_rate_select_speed;
+ } else {
+ mac->ops.setup_link = txgbe_setup_mac_link;
+ mac->ops.set_rate_select_speed =
+ txgbe_set_hard_rate_select_speed;
+ hw->phy.autoneg_advertised = 0;
+ }
+
+ adapter->flags |= TXGBE_FLAG_NEED_LINK_CONFIG;
+ e_info(probe, "detected SFP+: %d\n", hw->phy.sfp_type);
+
+sfp_out:
+ clear_bit(__TXGBE_IN_SFP_INIT, &adapter->state);
+
+ if ((err == TXGBE_ERR_SFP_NOT_SUPPORTED) &&
+ adapter->netdev_registered) {
+ e_dev_err("failed to initialize because an unsupported "
+ "SFP+ module type was detected.\n");
+ }
+}
+
+/**
+ * txgbe_sfp_link_config_subtask - set up link for SFP after module install
+ * @adapter: the txgbe adapter structure
+ **/
+static void txgbe_sfp_link_config_subtask(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ u32 speed;
+ bool autoneg = false;
+ u16 value;
+ u8 device_type = hw->subsystem_id & 0xF0;
+
+ if (!(adapter->flags & TXGBE_FLAG_NEED_LINK_CONFIG))
+ return;
+
+ /* someone else is in init, wait until next service event */
+ if (test_and_set_bit(__TXGBE_IN_SFP_INIT, &adapter->state))
+ return;
+
+ adapter->flags &= ~TXGBE_FLAG_NEED_LINK_CONFIG;
+
+ if (device_type == TXGBE_ID_XAUI) {
+ /* clear ext phy int status */
+ txgbe_read_mdio(&hw->phy_dev, hw->phy.addr, 0x03, 0x8011, &value);
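+ /* bit 10 of the external PHY status latches a link change;
+ * if bit 11 is clear the PHY is not ready, so defer until the
+ * next service pass (bit meanings inferred from this usage)
+ */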
+ if (value & 0x400)
+ adapter->flags |= TXGBE_FLAG_NEED_LINK_UPDATE;
+ if (!(value & 0x800)) {
+ clear_bit(__TXGBE_IN_SFP_INIT, &adapter->state);
+ return;
+ }
+ }
+
+ if (device_type == TXGBE_ID_MAC_XAUI ||
+ (txgbe_get_media_type(hw) == txgbe_media_type_copper &&
+ device_type == TXGBE_ID_SFI_XAUI)) {
+ speed = TXGBE_LINK_SPEED_10GB_FULL;
+ } else if (device_type == TXGBE_ID_MAC_SGMII) {
+ speed = TXGBE_LINK_SPEED_1GB_FULL;
+ } else {
+ speed = hw->phy.autoneg_advertised;
+ if ((!speed) && (hw->mac.ops.get_link_capabilities)) {
+ TCALL(hw, mac.ops.get_link_capabilities, &speed, &autoneg);
+ /* setup the highest link when no autoneg */
+ if (!autoneg) {
+ if (speed & TXGBE_LINK_SPEED_10GB_FULL)
+ speed = TXGBE_LINK_SPEED_10GB_FULL;
+ }
+ }
+ }
+
+ TCALL(hw, mac.ops.setup_link, speed, txgbe_is_sfp(hw));
+
+ adapter->flags |= TXGBE_FLAG_NEED_LINK_UPDATE;
+ adapter->link_check_timeout = jiffies;
+ clear_bit(__TXGBE_IN_SFP_INIT, &adapter->state);
+}
+
+static void txgbe_sfp_reset_eth_phy_subtask(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ u32 speed;
+ bool linkup = true;
+ u32 i = 0;
+
+ if (!(adapter->flags2 & TXGBE_FLAG_NEED_ETH_PHY_RESET))
+ return;
+
+ adapter->flags2 &= ~TXGBE_FLAG_NEED_ETH_PHY_RESET;
+
+ TCALL(hw, mac.ops.check_link, &speed, &linkup, false);
+ if (!linkup) {
+ txgbe_wr32_epcs(hw, TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1,
+ 0xA000);
+ /* wait phy initialization done */
+ for (i = 0; i < TXGBE_PHY_INIT_DONE_POLLING_TIME; i++) {
+ if ((txgbe_rd32_epcs(hw,
+ TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1) &
+ TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1_VR_RST) == 0)
+ break;
+ msleep(100);
+ }
+ }
+}
+
+/**
+ * txgbe_service_timer - Timer Call-back
+ * @t: pointer to the timer_list structure
+ **/
+static void txgbe_service_timer(struct timer_list *t)
+{
+ struct txgbe_adapter *adapter = from_timer(adapter, t, service_timer);
+ unsigned long next_event_offset;
+ struct txgbe_hw *hw = &adapter->hw;
+
+ /* poll faster when waiting for link */
+ if (adapter->flags & TXGBE_FLAG_NEED_LINK_UPDATE) {
+ if ((hw->subsystem_device_id & 0xF0) == TXGBE_ID_KR_KX_KX4)
+ next_event_offset = HZ;
+ else if (BOND_CHECK_LINK_MODE == 1)
+ next_event_offset = HZ / 100;
+ else
+ next_event_offset = HZ / 10;
+ } else {
+ next_event_offset = HZ * 2;
+ }
+
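+ /* on a non-zero lan_id port, a PF state-machine value of 1 is
+ * treated as a sign that PCIe recovery is needed (inferred from
+ * the flag handled in txgbe_check_pcie_subtask())
+ */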
+ if ((rd32(&adapter->hw, TXGBE_MIS_PF_SM) == 1) && (hw->bus.lan_id)) {
+ adapter->flags2 |= TXGBE_FLAG2_PCIE_NEED_RECOVER;
+ }
+
+ /* Reset the timer */
+ mod_timer(&adapter->service_timer, next_event_offset + jiffies);
+
+ txgbe_service_event_schedule(adapter);
+}
+
+static void txgbe_reset_subtask(struct txgbe_adapter *adapter)
+{
+ u32 reset_flag = 0;
+ u32 value = 0;
+
+ if (!(adapter->flags2 & (TXGBE_FLAG2_PF_RESET_REQUESTED |
+ TXGBE_FLAG2_DEV_RESET_REQUESTED |
+ TXGBE_FLAG2_GLOBAL_RESET_REQUESTED |
+ TXGBE_FLAG2_RESET_INTR_RECEIVED)))
+ return;
+
+ /* If we're already down, just bail */
+ if (test_bit(__TXGBE_DOWN, &adapter->state) ||
+ test_bit(__TXGBE_REMOVING, &adapter->state))
+ return;
+
+ netdev_err(adapter->netdev, "Reset adapter\n");
+ adapter->tx_timeout_count++;
+
+ rtnl_lock();
+ if (adapter->flags2 & TXGBE_FLAG2_GLOBAL_RESET_REQUESTED) {
+ reset_flag |= TXGBE_FLAG2_GLOBAL_RESET_REQUESTED;
+ adapter->flags2 &= ~TXGBE_FLAG2_GLOBAL_RESET_REQUESTED;
+ }
+ if (adapter->flags2 & TXGBE_FLAG2_DEV_RESET_REQUESTED) {
+ reset_flag |= TXGBE_FLAG2_DEV_RESET_REQUESTED;
+ adapter->flags2 &= ~TXGBE_FLAG2_DEV_RESET_REQUESTED;
+ }
+ if (adapter->flags2 & TXGBE_FLAG2_PF_RESET_REQUESTED) {
+ reset_flag |= TXGBE_FLAG2_PF_RESET_REQUESTED;
+ adapter->flags2 &= ~TXGBE_FLAG2_PF_RESET_REQUESTED;
+ }
+
+ if (adapter->flags2 & TXGBE_FLAG2_RESET_INTR_RECEIVED) {
+ /* If there's a recovery already waiting, handle it first,
+ * before starting a new reset sequence.
+ */
+ adapter->flags2 &= ~TXGBE_FLAG2_RESET_INTR_RECEIVED;
+ value = rd32m(&adapter->hw, TXGBE_MIS_RST_ST,
+ TXGBE_MIS_RST_ST_DEV_RST_TYPE_MASK) >>
+ TXGBE_MIS_RST_ST_DEV_RST_TYPE_SHIFT;
+ if (value == TXGBE_MIS_RST_ST_DEV_RST_TYPE_SW_RST) {
+ adapter->hw.reset_type = TXGBE_SW_RESET;
+ /* errata 7 */
+ if (txgbe_mng_present(&adapter->hw) &&
+ adapter->hw.revision_id == TXGBE_SP_MPW)
+ adapter->flags2 |=
+ TXGBE_FLAG2_MNG_REG_ACCESS_DISABLED;
+ } else if (value == TXGBE_MIS_RST_ST_DEV_RST_TYPE_GLOBAL_RST) {
+ adapter->hw.reset_type = TXGBE_GLOBAL_RESET;
+ }
+ adapter->hw.force_full_reset = true;
+ txgbe_reinit_locked(adapter);
+ adapter->hw.force_full_reset = false;
+ goto unlock;
+ }
+
+ if (reset_flag & TXGBE_FLAG2_DEV_RESET_REQUESTED) {
+ /* Request a Device Reset
+ *
+ * This will start the chip's countdown to the actual full
+ * chip reset event, and a warning interrupt to be sent
+ * to all PFs, including the requestor. Our handler
+ * for the warning interrupt will deal with the shutdown
+ * and recovery of the switch setup.
+ */
+ if (txgbe_mng_present(&adapter->hw)) {
+ txgbe_reset_hostif(&adapter->hw);
+ } else {
+ wr32m(&adapter->hw, TXGBE_MIS_RST,
+ TXGBE_MIS_RST_SW_RST, TXGBE_MIS_RST_SW_RST);
+ }
+ } else if (reset_flag & TXGBE_FLAG2_PF_RESET_REQUESTED) {
+ txgbe_reinit_locked(adapter);
+ } else if (reset_flag & TXGBE_FLAG2_GLOBAL_RESET_REQUESTED) {
+ /* Request a Global Reset
+ *
+ * This will start the chip's countdown to the actual full
+ * chip reset event, and a warning interrupt to be sent
+ * to all PFs, including the requestor. Our handler
+ * for the warning interrupt will deal with the shutdown
+ * and recovery of the switch setup.
+ */
+ pci_save_state(adapter->pdev);
+ if (txgbe_mng_present(&adapter->hw)) {
+ txgbe_reset_hostif(&adapter->hw);
+ } else {
+ wr32m(&adapter->hw, TXGBE_MIS_RST,
+ TXGBE_MIS_RST_GLOBAL_RST,
+ TXGBE_MIS_RST_GLOBAL_RST);
+ }
+ }
+
+unlock:
+ rtnl_unlock();
+}
+
+static void txgbe_check_pcie_subtask(struct txgbe_adapter *adapter)
+{
+ if (!(adapter->flags2 & TXGBE_FLAG2_PCIE_NEED_RECOVER))
+ return;
+
+ e_info(probe, "do recovery\n");
+ wr32m(&adapter->hw, TXGBE_MIS_PF_SM,
+ TXGBE_MIS_PF_SM_SM, 0);
+ adapter->flags2 &= ~TXGBE_FLAG2_PCIE_NEED_RECOVER;
+}
+
+/**
+ * txgbe_service_task - manages and runs subtasks
+ * @work: pointer to work_struct containing our data
+ **/
+static void txgbe_service_task(struct work_struct *work)
+{
+ struct txgbe_adapter *adapter = container_of(work,
+ struct txgbe_adapter,
+ service_task);
+ if (TXGBE_REMOVED(adapter->hw.hw_addr)) {
+ if (!test_bit(__TXGBE_DOWN, &adapter->state)) {
+ rtnl_lock();
+ txgbe_down(adapter);
+ rtnl_unlock();
+ }
+ txgbe_service_event_complete(adapter);
+ return;
+ }
+
+ if (adapter->flags2 & TXGBE_FLAG2_VXLAN_REREG_NEEDED) {
+ adapter->flags2 &= ~TXGBE_FLAG2_VXLAN_REREG_NEEDED;
+ udp_tunnel_get_rx_info(adapter->netdev);
+ }
+
+ txgbe_check_pcie_subtask(adapter);
+ txgbe_reset_subtask(adapter);
+ txgbe_sfp_detection_subtask(adapter);
+ txgbe_sfp_link_config_subtask(adapter);
+ txgbe_sfp_reset_eth_phy_subtask(adapter);
+ txgbe_check_overtemp_subtask(adapter);
+ txgbe_watchdog_subtask(adapter);
+ txgbe_fdir_reinit_subtask(adapter);
+ txgbe_check_hang_subtask(adapter);
+ if (test_bit(__TXGBE_PTP_RUNNING, &adapter->state)) {
+ txgbe_ptp_overflow_check(adapter);
+ if (unlikely(adapter->flags &
+ TXGBE_FLAG_RX_HWTSTAMP_IN_REGISTER))
+ txgbe_ptp_rx_hang(adapter);
+ }
+
+ txgbe_service_event_complete(adapter);
+}
+
+static u8 get_ipv6_proto(struct sk_buff *skb, int offset)
+{
+ struct ipv6hdr *hdr = (struct ipv6hdr *)(skb->data + offset);
+ u8 nexthdr = hdr->nexthdr;
+
+ offset += sizeof(struct ipv6hdr);
+
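+ /* walk the IPv6 extension header chain until a transport
+ * protocol is found; stop at fragment headers since the upper
+ * layer header may sit in a later fragment
+ */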
+ while (ipv6_ext_hdr(nexthdr)) {
+ struct ipv6_opt_hdr _hdr, *hp;
+
+ if (nexthdr == NEXTHDR_NONE)
+ break;
+
+ hp = skb_header_pointer(skb, offset, sizeof(_hdr), &_hdr);
+ if (!hp)
+ break;
+
+ if (nexthdr == NEXTHDR_FRAGMENT) {
+ break;
+ } else if (nexthdr == NEXTHDR_AUTH) {
+ offset += ipv6_authlen(hp);
+ } else {
+ offset += ipv6_optlen(hp);
+ }
+
+ nexthdr = hp->nexthdr;
+ }
+
+ return nexthdr;
+}
+
+union network_header {
+ struct iphdr *ipv4;
+ struct ipv6hdr *ipv6;
+ void *raw;
+};
+
+static txgbe_dptype encode_tx_desc_ptype(const struct txgbe_tx_buffer *first)
+{
+ struct sk_buff *skb = first->skb;
+
+ u8 tun_prot = 0;
+
+ u8 l4_prot = 0;
+ u8 ptype = 0;
+
+ if (skb->encapsulation) {
+ union network_header hdr;
+
+ switch (first->protocol) {
+ case __constant_htons(ETH_P_IP):
+ tun_prot = ip_hdr(skb)->protocol;
+ if (ip_hdr(skb)->frag_off & htons(IP_MF | IP_OFFSET))
+ goto encap_frag;
+ ptype = TXGBE_PTYPE_TUN_IPV4;
+ break;
+ case __constant_htons(ETH_P_IPV6):
+ tun_prot = get_ipv6_proto(skb, skb_network_offset(skb));
+ if (tun_prot == NEXTHDR_FRAGMENT)
+ goto encap_frag;
+ ptype = TXGBE_PTYPE_TUN_IPV6;
+ break;
+ default:
+ goto exit;
+ }
+
+ if (tun_prot == IPPROTO_IPIP) {
+ hdr.raw = (void *)inner_ip_hdr(skb);
+ ptype |= TXGBE_PTYPE_PKT_IPIP;
+ } else if (tun_prot == IPPROTO_UDP) {
+ hdr.raw = (void *)inner_ip_hdr(skb);
+ /* fixme: VXLAN-GPE neither ETHER nor IP */
+
+ if (skb->inner_protocol_type != ENCAP_TYPE_ETHER ||
+ skb->inner_protocol != htons(ETH_P_TEB)) {
+ ptype |= TXGBE_PTYPE_PKT_IG;
+ } else {
+ if (((struct ethhdr *)
+ skb_inner_mac_header(skb))->h_proto
+ == htons(ETH_P_8021Q)) {
+ ptype |= TXGBE_PTYPE_PKT_IGMV;
+ } else {
+ ptype |= TXGBE_PTYPE_PKT_IGM;
+ }
+ }
+
+ } else if (tun_prot == IPPROTO_GRE) {
+ hdr.raw = (void *)inner_ip_hdr(skb);
+ if (skb->inner_protocol == htons(ETH_P_IP) ||
+ skb->inner_protocol == htons(ETH_P_IPV6)) {
+ ptype |= TXGBE_PTYPE_PKT_IG;
+ } else {
+ if (((struct ethhdr *)
+ skb_inner_mac_header(skb))->h_proto
+ == htons(ETH_P_8021Q)) {
+ ptype |= TXGBE_PTYPE_PKT_IGMV;
+ } else {
+ ptype |= TXGBE_PTYPE_PKT_IGM;
+ }
+ }
+ } else {
+ goto exit;
+ }
+
+ switch (hdr.ipv4->version) {
+ case IPVERSION:
+ l4_prot = hdr.ipv4->protocol;
+ if (hdr.ipv4->frag_off & htons(IP_MF | IP_OFFSET)) {
+ ptype |= TXGBE_PTYPE_TYP_IPFRAG;
+ goto exit;
+ }
+ break;
+ case 6:
+ l4_prot = get_ipv6_proto(skb,
+ skb_inner_network_offset(skb));
+ ptype |= TXGBE_PTYPE_PKT_IPV6;
+ if (l4_prot == NEXTHDR_FRAGMENT) {
+ ptype |= TXGBE_PTYPE_TYP_IPFRAG;
+ goto exit;
+ }
+ break;
+ default:
+ goto exit;
+ }
+ } else {
+encap_frag:
+
+ switch (first->protocol) {
+ case __constant_htons(ETH_P_IP):
+ l4_prot = ip_hdr(skb)->protocol;
+ ptype = TXGBE_PTYPE_PKT_IP;
+ if (ip_hdr(skb)->frag_off & htons(IP_MF | IP_OFFSET)) {
+ ptype |= TXGBE_PTYPE_TYP_IPFRAG;
+ goto exit;
+ }
+ break;
+#ifdef NETIF_F_IPV6_CSUM
+ case __constant_htons(ETH_P_IPV6):
+ l4_prot = get_ipv6_proto(skb, skb_network_offset(skb));
+ ptype = TXGBE_PTYPE_PKT_IP | TXGBE_PTYPE_PKT_IPV6;
+ if (l4_prot == NEXTHDR_FRAGMENT) {
+ ptype |= TXGBE_PTYPE_TYP_IPFRAG;
+ goto exit;
+ }
+ break;
+#endif /* NETIF_F_IPV6_CSUM */
+ case __constant_htons(ETH_P_1588):
+ ptype = TXGBE_PTYPE_L2_TS;
+ goto exit;
+ case __constant_htons(ETH_P_FIP):
+ ptype = TXGBE_PTYPE_L2_FIP;
+ goto exit;
+ case __constant_htons(TXGBE_ETH_P_LLDP):
+ ptype = TXGBE_PTYPE_L2_LLDP;
+ goto exit;
+ case __constant_htons(TXGBE_ETH_P_CNM):
+ ptype = TXGBE_PTYPE_L2_CNM;
+ goto exit;
+ case __constant_htons(ETH_P_PAE):
+ ptype = TXGBE_PTYPE_L2_EAPOL;
+ goto exit;
+ case __constant_htons(ETH_P_ARP):
+ ptype = TXGBE_PTYPE_L2_ARP;
+ goto exit;
+ default:
+ ptype = TXGBE_PTYPE_L2_MAC;
+ goto exit;
+ }
+ }
+
+ switch (l4_prot) {
+ case IPPROTO_TCP:
+ ptype |= TXGBE_PTYPE_TYP_TCP;
+ break;
+ case IPPROTO_UDP:
+ ptype |= TXGBE_PTYPE_TYP_UDP;
+ break;
+ case IPPROTO_SCTP:
+ ptype |= TXGBE_PTYPE_TYP_SCTP;
+ break;
+ default:
+ ptype |= TXGBE_PTYPE_TYP_IP;
+ break;
+ }
+
+exit:
+ return txgbe_decode_ptype(ptype);
+}
+
+static int txgbe_tso(struct txgbe_ring *tx_ring,
+ struct txgbe_tx_buffer *first,
+ u8 *hdr_len, txgbe_dptype dptype)
+{
+ struct sk_buff *skb = first->skb;
+ u32 vlan_macip_lens, type_tucmd;
+ u32 mss_l4len_idx, l4len;
+ struct tcphdr *tcph;
+ struct iphdr *iph;
+ u32 tunhdr_eiplen_tunlen = 0;
+
+ u8 tun_prot = 0;
+ bool enc = skb->encapsulation;
+
+ struct ipv6hdr *ipv6h;
+
+ if (skb->ip_summed != CHECKSUM_PARTIAL)
+ return 0;
+
+ if (!skb_is_gso(skb))
+ return 0;
+
+ if (skb_header_cloned(skb)) {
+ int err = pskb_expand_head(skb, 0, 0, GFP_ATOMIC);
+ if (err)
+ return err;
+ }
+
+ iph = enc ? inner_ip_hdr(skb) : ip_hdr(skb);
+
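+ /* for TSO the pseudo-header checksum is precomputed and the IP
+ * total length zeroed so the hardware can fix both up for every
+ * segment it emits
+ */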
+ if (iph->version == 4) {
+ tcph = enc ? inner_tcp_hdr(skb) : tcp_hdr(skb);
+
+ iph->tot_len = 0;
+ iph->check = 0;
+ tcph->check = ~csum_tcpudp_magic(iph->saddr,
+ iph->daddr, 0,
+ IPPROTO_TCP,
+ 0);
+ first->tx_flags |= TXGBE_TX_FLAGS_TSO |
+ TXGBE_TX_FLAGS_CSUM |
+ TXGBE_TX_FLAGS_IPV4 |
+ TXGBE_TX_FLAGS_CC;
+ } else if (iph->version == 6 && skb_is_gso_v6(skb)) {
+ ipv6h = enc ? inner_ipv6_hdr(skb) : ipv6_hdr(skb);
+ tcph = enc ? inner_tcp_hdr(skb) : tcp_hdr(skb);
+
+ ipv6h->payload_len = 0;
+ tcph->check =
+ ~csum_ipv6_magic(&ipv6h->saddr,
+ &ipv6h->daddr,
+ 0, IPPROTO_TCP, 0);
+ first->tx_flags |= TXGBE_TX_FLAGS_TSO |
+ TXGBE_TX_FLAGS_CSUM |
+ TXGBE_TX_FLAGS_CC;
+ }
+
+ /* compute header lengths */
+
+ l4len = enc ? inner_tcp_hdrlen(skb) : tcp_hdrlen(skb);
+ *hdr_len = enc ? (skb_inner_transport_header(skb) - skb->data)
+ : skb_transport_offset(skb);
+ *hdr_len += l4len;
+
+ /* update gso size and bytecount with header size */
+ first->gso_segs = skb_shinfo(skb)->gso_segs;
+ first->bytecount += (first->gso_segs - 1) * *hdr_len;
+
+ /* mss_l4len_id: use 0 as index for TSO */
+ mss_l4len_idx = l4len << TXGBE_TXD_L4LEN_SHIFT;
+ mss_l4len_idx |= skb_shinfo(skb)->gso_size << TXGBE_TXD_MSS_SHIFT;
+
+ /* vlan_macip_lens: HEADLEN, MACLEN, VLAN tag */
+
+ if (enc) {
+ switch (first->protocol) {
+ case __constant_htons(ETH_P_IP):
+ tun_prot = ip_hdr(skb)->protocol;
+ first->tx_flags |= TXGBE_TX_FLAGS_OUTER_IPV4;
+ break;
+ case __constant_htons(ETH_P_IPV6):
+ tun_prot = ipv6_hdr(skb)->nexthdr;
+ break;
+ default:
+ break;
+ }
+ switch (tun_prot) {
+ case IPPROTO_UDP:
+ tunhdr_eiplen_tunlen = TXGBE_TXD_TUNNEL_UDP;
+ tunhdr_eiplen_tunlen |=
+ ((skb_network_header_len(skb) >> 2) <<
+ TXGBE_TXD_OUTER_IPLEN_SHIFT) |
+ (((skb_inner_mac_header(skb) -
+ skb_transport_header(skb)) >> 1) <<
+ TXGBE_TXD_TUNNEL_LEN_SHIFT);
+ break;
+ case IPPROTO_GRE:
+ tunhdr_eiplen_tunlen = TXGBE_TXD_TUNNEL_GRE;
+ tunhdr_eiplen_tunlen |=
+ ((skb_network_header_len(skb) >> 2) <<
+ TXGBE_TXD_OUTER_IPLEN_SHIFT) |
+ (((skb_inner_mac_header(skb) -
+ skb_transport_header(skb)) >> 1) <<
+ TXGBE_TXD_TUNNEL_LEN_SHIFT);
+ break;
+ case IPPROTO_IPIP:
+ tunhdr_eiplen_tunlen = (((char *)inner_ip_hdr(skb)-
+ (char *)ip_hdr(skb)) >> 2) <<
+ TXGBE_TXD_OUTER_IPLEN_SHIFT;
+ break;
+ default:
+ break;
+ }
+
+ vlan_macip_lens = skb_inner_network_header_len(skb) >> 1;
+ } else {
+ vlan_macip_lens = skb_network_header_len(skb) >> 1;
+ }
+
+ vlan_macip_lens |= skb_network_offset(skb) << TXGBE_TXD_MACLEN_SHIFT;
+ vlan_macip_lens |= first->tx_flags & TXGBE_TX_FLAGS_VLAN_MASK;
+
+ type_tucmd = dptype.ptype << 24;
+ txgbe_tx_ctxtdesc(tx_ring, vlan_macip_lens, tunhdr_eiplen_tunlen,
+ type_tucmd, mss_l4len_idx);
+
+ return 1;
+}
+
+static void txgbe_tx_csum(struct txgbe_ring *tx_ring,
+ struct txgbe_tx_buffer *first, txgbe_dptype dptype)
+{
+ struct sk_buff *skb = first->skb;
+ u32 vlan_macip_lens = 0;
+ u32 mss_l4len_idx = 0;
+ u32 tunhdr_eiplen_tunlen = 0;
+
+ u8 tun_prot = 0;
+
+ u32 type_tucmd;
+
+ if (skb->ip_summed != CHECKSUM_PARTIAL) {
+ if (!(first->tx_flags & TXGBE_TX_FLAGS_HW_VLAN) &&
+ !(first->tx_flags & TXGBE_TX_FLAGS_CC))
+ return;
+ vlan_macip_lens = skb_network_offset(skb) <<
+ TXGBE_TXD_MACLEN_SHIFT;
+ } else {
+ u8 l4_prot = 0;
+
+ union {
+ struct iphdr *ipv4;
+ struct ipv6hdr *ipv6;
+ u8 *raw;
+ } network_hdr;
+ union {
+ struct tcphdr *tcphdr;
+ u8 *raw;
+ } transport_hdr;
+
+ if (skb->encapsulation) {
+ network_hdr.raw = skb_inner_network_header(skb);
+ transport_hdr.raw = skb_inner_transport_header(skb);
+ vlan_macip_lens = skb_network_offset(skb) <<
+ TXGBE_TXD_MACLEN_SHIFT;
+ switch (first->protocol) {
+ case __constant_htons(ETH_P_IP):
+ tun_prot = ip_hdr(skb)->protocol;
+ break;
+ case __constant_htons(ETH_P_IPV6):
+ tun_prot = ipv6_hdr(skb)->nexthdr;
+ break;
+ default:
+ if (unlikely(net_ratelimit())) {
+ dev_warn(tx_ring->dev,
+ "partial checksum but version=%d\n",
+ network_hdr.ipv4->version);
+ }
+ return;
+ }
+ switch (tun_prot) {
+ case IPPROTO_UDP:
+ tunhdr_eiplen_tunlen = TXGBE_TXD_TUNNEL_UDP;
+ tunhdr_eiplen_tunlen |=
+ ((skb_network_header_len(skb) >> 2) <<
+ TXGBE_TXD_OUTER_IPLEN_SHIFT) |
+ (((skb_inner_mac_header(skb) -
+ skb_transport_header(skb)) >> 1) <<
+ TXGBE_TXD_TUNNEL_LEN_SHIFT);
+ break;
+ case IPPROTO_GRE:
+ tunhdr_eiplen_tunlen = TXGBE_TXD_TUNNEL_GRE;
+ tunhdr_eiplen_tunlen |=
+ ((skb_network_header_len(skb) >> 2) <<
+ TXGBE_TXD_OUTER_IPLEN_SHIFT) |
+ (((skb_inner_mac_header(skb) -
+ skb_transport_header(skb)) >> 1) <<
+ TXGBE_TXD_TUNNEL_LEN_SHIFT);
+ break;
+ case IPPROTO_IPIP:
+ tunhdr_eiplen_tunlen =
+ (((char *)inner_ip_hdr(skb)-
+ (char *)ip_hdr(skb)) >> 2) <<
+ TXGBE_TXD_OUTER_IPLEN_SHIFT;
+ break;
+ default:
+ break;
+ }
+
+ } else {
+ network_hdr.raw = skb_network_header(skb);
+ transport_hdr.raw = skb_transport_header(skb);
+ vlan_macip_lens = skb_network_offset(skb) <<
+ TXGBE_TXD_MACLEN_SHIFT;
+ }
+
+ switch (network_hdr.ipv4->version) {
+ case IPVERSION:
+ vlan_macip_lens |=
+ (transport_hdr.raw - network_hdr.raw) >> 1;
+ l4_prot = network_hdr.ipv4->protocol;
+ break;
+ case 6:
+ vlan_macip_lens |=
+ (transport_hdr.raw - network_hdr.raw) >> 1;
+ l4_prot = network_hdr.ipv6->nexthdr;
+ break;
+ default:
+ break;
+ }
+
+ switch (l4_prot) {
+ case IPPROTO_TCP:
+ mss_l4len_idx = (transport_hdr.tcphdr->doff * 4) <<
+ TXGBE_TXD_L4LEN_SHIFT;
+ break;
+ case IPPROTO_SCTP:
+ mss_l4len_idx = sizeof(struct sctphdr) <<
+ TXGBE_TXD_L4LEN_SHIFT;
+ break;
+ case IPPROTO_UDP:
+ mss_l4len_idx = sizeof(struct udphdr) <<
+ TXGBE_TXD_L4LEN_SHIFT;
+ break;
+ default:
+ break;
+ }
+
+ /* update TX checksum flag */
+ first->tx_flags |= TXGBE_TX_FLAGS_CSUM;
+ }
+ first->tx_flags |= TXGBE_TX_FLAGS_CC;
+ /* vlan_macip_lens: MACLEN, VLAN tag */
+ vlan_macip_lens |= first->tx_flags & TXGBE_TX_FLAGS_VLAN_MASK;
+
+ type_tucmd = dptype.ptype << 24;
+ txgbe_tx_ctxtdesc(tx_ring, vlan_macip_lens, tunhdr_eiplen_tunlen,
+ type_tucmd, mss_l4len_idx);
+}
+
+static u32 txgbe_tx_cmd_type(u32 tx_flags)
+{
+ /* set type for advanced descriptor with frame checksum insertion */
+ u32 cmd_type = TXGBE_TXD_DTYP_DATA |
+ TXGBE_TXD_IFCS;
+
+ /* set HW vlan bit if vlan is present */
+ cmd_type |= TXGBE_SET_FLAG(tx_flags, TXGBE_TX_FLAGS_HW_VLAN,
+ TXGBE_TXD_VLE);
+
+ /* set segmentation enable bits for TSO/FSO */
+ cmd_type |= TXGBE_SET_FLAG(tx_flags, TXGBE_TX_FLAGS_TSO,
+ TXGBE_TXD_TSE);
+
+ /* set timestamp bit if present */
+ cmd_type |= TXGBE_SET_FLAG(tx_flags, TXGBE_TX_FLAGS_TSTAMP,
+ TXGBE_TXD_MAC_TSTAMP);
+
+ cmd_type |= TXGBE_SET_FLAG(tx_flags, TXGBE_TX_FLAGS_LINKSEC,
+ TXGBE_TXD_LINKSEC);
+
+ return cmd_type;
+}
+
+static void txgbe_tx_olinfo_status(union txgbe_tx_desc *tx_desc,
+ u32 tx_flags, unsigned int paylen)
+{
+ u32 olinfo_status = paylen << TXGBE_TXD_PAYLEN_SHIFT;
+
+ /* enable L4 checksum for TSO and TX checksum offload */
+ olinfo_status |= TXGBE_SET_FLAG(tx_flags,
+ TXGBE_TX_FLAGS_CSUM,
+ TXGBE_TXD_L4CS);
+
+ /* enable IPv4 checksum for TSO */
+ olinfo_status |= TXGBE_SET_FLAG(tx_flags,
+ TXGBE_TX_FLAGS_IPV4,
+ TXGBE_TXD_IIPCS);
+ /* enable outer IPv4 checksum for TSO */
+ olinfo_status |= TXGBE_SET_FLAG(tx_flags,
+ TXGBE_TX_FLAGS_OUTER_IPV4,
+ TXGBE_TXD_EIPCS);
+ /*
+ * Check Context must be set if Tx switch is enabled, which it
+ * always is for the case where virtual functions are running
+ */
+ olinfo_status |= TXGBE_SET_FLAG(tx_flags,
+ TXGBE_TX_FLAGS_CC,
+ TXGBE_TXD_CC);
+
+ olinfo_status |= TXGBE_SET_FLAG(tx_flags,
+ TXGBE_TX_FLAGS_IPSEC,
+ TXGBE_TXD_IPSEC);
+
+ tx_desc->read.olinfo_status = cpu_to_le32(olinfo_status);
+}
+
+static int __txgbe_maybe_stop_tx(struct txgbe_ring *tx_ring, u16 size)
+{
+ netif_stop_subqueue(tx_ring->netdev, tx_ring->queue_index);
+
+ /* Herbert's original patch had:
+ * smp_mb__after_netif_stop_queue();
+ * but since that doesn't exist yet, just open code it.
+ */
+ smp_mb();
+
+ /* We need to check again in a case another CPU has just
+ * made room available.
+ */
+ if (likely(txgbe_desc_unused(tx_ring) < size))
+ return -EBUSY;
+
+ /* A reprieve! - use start_queue because it doesn't call schedule */
+ netif_start_subqueue(tx_ring->netdev, tx_ring->queue_index);
+ ++tx_ring->tx_stats.restart_queue;
+ return 0;
+}
+
+static inline int txgbe_maybe_stop_tx(struct txgbe_ring *tx_ring, u16 size)
+{
+ if (likely(txgbe_desc_unused(tx_ring) >= size))
+ return 0;
+
+ return __txgbe_maybe_stop_tx(tx_ring, size);
+}
+
+#define TXGBE_TXD_CMD (TXGBE_TXD_EOP | \
+ TXGBE_TXD_RS)
+
+static int txgbe_tx_map(struct txgbe_ring *tx_ring,
+ struct txgbe_tx_buffer *first,
+ const u8 hdr_len)
+{
+ struct sk_buff *skb = first->skb;
+ struct txgbe_tx_buffer *tx_buffer;
+ union txgbe_tx_desc *tx_desc;
+ skb_frag_t *frag;
+ dma_addr_t dma;
+ unsigned int data_len, size;
+ u32 tx_flags = first->tx_flags;
+ u32 cmd_type = txgbe_tx_cmd_type(tx_flags);
+ u16 i = tx_ring->next_to_use;
+
+ tx_desc = TXGBE_TX_DESC(tx_ring, i);
+
+ txgbe_tx_olinfo_status(tx_desc, tx_flags, skb->len - hdr_len);
+
+ size = skb_headlen(skb);
+ data_len = skb->data_len;
+
+ dma = dma_map_single(tx_ring->dev, skb->data, size, DMA_TO_DEVICE);
+
+ tx_buffer = first;
+
+ for (frag = &skb_shinfo(skb)->frags[0];; frag++) {
+ if (dma_mapping_error(tx_ring->dev, dma))
+ goto dma_error;
+
+ /* record length, and DMA address */
+ dma_unmap_len_set(tx_buffer, len, size);
+ dma_unmap_addr_set(tx_buffer, dma, dma);
+
+ tx_desc->read.buffer_addr = cpu_to_le64(dma);
+
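+ /* a buffer larger than the per-descriptor limit is spread
+ * across several data descriptors, advancing the DMA address
+ * by TXGBE_MAX_DATA_PER_TXD each time
+ */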
+ while (unlikely(size > TXGBE_MAX_DATA_PER_TXD)) {
+ tx_desc->read.cmd_type_len =
+ cpu_to_le32(cmd_type ^ TXGBE_MAX_DATA_PER_TXD);
+
+ i++;
+ tx_desc++;
+ if (i == tx_ring->count) {
+ tx_desc = TXGBE_TX_DESC(tx_ring, 0);
+ i = 0;
+ }
+ tx_desc->read.olinfo_status = 0;
+
+ dma += TXGBE_MAX_DATA_PER_TXD;
+ size -= TXGBE_MAX_DATA_PER_TXD;
+
+ tx_desc->read.buffer_addr = cpu_to_le64(dma);
+ }
+
+ if (likely(!data_len))
+ break;
+
+ tx_desc->read.cmd_type_len = cpu_to_le32(cmd_type ^ size);
+
+ i++;
+ tx_desc++;
+ if (i == tx_ring->count) {
+ tx_desc = TXGBE_TX_DESC(tx_ring, 0);
+ i = 0;
+ }
+ tx_desc->read.olinfo_status = 0;
+
+ size = skb_frag_size(frag);
+
+ data_len -= size;
+
+ dma = skb_frag_dma_map(tx_ring->dev, frag, 0, size,
+ DMA_TO_DEVICE);
+
+ tx_buffer = &tx_ring->tx_buffer_info[i];
+ }
+
+ /* write last descriptor with RS and EOP bits */
+ cmd_type |= size | TXGBE_TXD_CMD;
+ tx_desc->read.cmd_type_len = cpu_to_le32(cmd_type);
+
+ netdev_tx_sent_queue(txring_txq(tx_ring), first->bytecount);
+
+ /* set the timestamp */
+ first->time_stamp = jiffies;
+
+ /*
+ * Force memory writes to complete before letting h/w know there
+ * are new descriptors to fetch. (Only applicable for weak-ordered
+ * memory model archs, such as IA-64).
+ *
+ * We also need this memory barrier to make certain all of the
+ * status bits have been updated before next_to_watch is written.
+ */
+ wmb();
+
+ /* set next_to_watch value indicating a packet is present */
+ first->next_to_watch = tx_desc;
+
+ i++;
+ if (i == tx_ring->count)
+ i = 0;
+
+ tx_ring->next_to_use = i;
+
+ txgbe_maybe_stop_tx(tx_ring, DESC_NEEDED);
+
+ skb_tx_timestamp(skb);
+
+ if (netif_xmit_stopped(txring_txq(tx_ring)) || !netdev_xmit_more()) {
+ writel(i, tx_ring->tail);
+ /* The following mmiowb() is required on certain
+ * architectures (IA64/Altix in particular) in order to
+ * synchronize the I/O calls with respect to a spin lock. This
+ * is because the wmb() on those architectures does not
+ * guarantee anything for posted I/O writes.
+ *
+ * Note that the associated spin_unlock() is not within the
+ * driver code, but in the networking core stack.
+ */
+ mmiowb();
+ }
+
+ return 0;
+dma_error:
+ dev_err(tx_ring->dev, "TX DMA map failed\n");
+
+ /* clear dma mappings for failed tx_buffer_info map */
+ for (;;) {
+ tx_buffer = &tx_ring->tx_buffer_info[i];
+ if (dma_unmap_len(tx_buffer, len))
+ dma_unmap_page(tx_ring->dev,
+ dma_unmap_addr(tx_buffer, dma),
+ dma_unmap_len(tx_buffer, len),
+ DMA_TO_DEVICE);
+ dma_unmap_len_set(tx_buffer, len, 0);
+ if (tx_buffer == first)
+ break;
+ if (i == 0)
+ i += tx_ring->count;
+ i--;
+ }
+
+ dev_kfree_skb_any(first->skb);
+ first->skb = NULL;
+
+ tx_ring->next_to_use = i;
+
+ return -1;
+}
+
+static void txgbe_atr(struct txgbe_ring *ring,
+ struct txgbe_tx_buffer *first,
+ txgbe_dptype dptype)
+{
+ struct txgbe_q_vector *q_vector = ring->q_vector;
+ union txgbe_atr_hash_dword input = { .dword = 0 };
+ union txgbe_atr_hash_dword common = { .dword = 0 };
+ union network_header hdr;
+ struct tcphdr *th;
+
+ /* if ring doesn't have an interrupt vector, cannot perform ATR */
+ if (!q_vector)
+ return;
+
+ /* do nothing if sampling is disabled */
+ if (!ring->atr_sample_rate)
+ return;
+
+ ring->atr_count++;
+
+ if (dptype.etype) {
+ if (TXGBE_PTYPE_TYP_TCP != TXGBE_PTYPE_TYPL4(dptype.ptype))
+ return;
+ hdr.raw = (void *)skb_inner_network_header(first->skb);
+ th = inner_tcp_hdr(first->skb);
+ } else {
+ if (TXGBE_PTYPE_PKT_IP != TXGBE_PTYPE_PKT(dptype.ptype) ||
+ TXGBE_PTYPE_TYP_TCP != TXGBE_PTYPE_TYPL4(dptype.ptype))
+ return;
+ hdr.raw = (void *)skb_network_header(first->skb);
+ th = tcp_hdr(first->skb);
+ }
+
+ /* skip this packet since it is invalid or the socket is closing */
+ if (!th || th->fin)
+ return;
+
+ /* sample on all syn packets or once every atr sample count */
+ if (!th->syn && (ring->atr_count < ring->atr_sample_rate))
+ return;
+
+ /* reset sample count */
+ ring->atr_count = 0;
+
+ /*
+ * src and dst are inverted, think how the receiver sees them
+ *
+ * The input is broken into two sections, a non-compressed section
+ * containing vm_pool, vlan_id, and flow_type. The rest of the data
+ * is XORed together and stored in the compressed dword.
+ */
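+ /* note: the signature filter input reuses the vlan_id field to
+ * carry the encoded packet type
+ */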
+ input.formatted.vlan_id = htons((u16)dptype.ptype);
+
+ /*
+ * since src port and flex bytes occupy the same word XOR them together
+ * and write the value to source port portion of compressed dword
+ */
+ if (first->tx_flags & TXGBE_TX_FLAGS_SW_VLAN)
+ common.port.src ^= th->dest ^ first->skb->protocol;
+ else if (first->tx_flags & TXGBE_TX_FLAGS_HW_VLAN)
+ common.port.src ^= th->dest ^ first->skb->vlan_proto;
+ else
+ common.port.src ^= th->dest ^ first->protocol;
+ common.port.dst ^= th->source;
+
+ if (TXGBE_PTYPE_PKT_IPV6 & TXGBE_PTYPE_PKT(dptype.ptype)) {
+ input.formatted.flow_type = TXGBE_ATR_FLOW_TYPE_TCPV6;
+ common.ip ^= hdr.ipv6->saddr.s6_addr32[0] ^
+ hdr.ipv6->saddr.s6_addr32[1] ^
+ hdr.ipv6->saddr.s6_addr32[2] ^
+ hdr.ipv6->saddr.s6_addr32[3] ^
+ hdr.ipv6->daddr.s6_addr32[0] ^
+ hdr.ipv6->daddr.s6_addr32[1] ^
+ hdr.ipv6->daddr.s6_addr32[2] ^
+ hdr.ipv6->daddr.s6_addr32[3];
+ } else {
+ input.formatted.flow_type = TXGBE_ATR_FLOW_TYPE_TCPV4;
+ common.ip ^= hdr.ipv4->saddr ^ hdr.ipv4->daddr;
+ }
+
+ /* This assumes the Rx queue and Tx queue are bound to the same CPU */
+ txgbe_fdir_add_signature_filter(&q_vector->adapter->hw,
+ input, common, ring->queue_index);
+}
+
+/**
+ * txgbe_skb_pad_nonzero - pad the tail of an skb with non-zero bytes
+ * @skb: buffer to pad
+ * @pad: space to pad
+ *
+ * Ensure that a buffer is followed by a padding area that is filled
+ * with a non-zero byte (0x1). Used by network drivers which may DMA
+ * or transfer data beyond the buffer end onto the wire.
+ *
+ * May return error in out of memory cases. The skb is freed on error.
+ */
+int txgbe_skb_pad_nonzero(struct sk_buff *skb, int pad)
+{
+ int err;
+ int ntail;
+
+ /* If the skbuff is non linear tailroom is always zero.. */
+ if (!skb_cloned(skb) && skb_tailroom(skb) >= pad) {
+ memset(skb->data + skb->len, 0x1, pad);
+ return 0;
+ }
+
+ ntail = skb->data_len + pad - (skb->end - skb->tail);
+ if (likely(skb_cloned(skb) || ntail > 0)) {
+ err = pskb_expand_head(skb, 0, ntail, GFP_ATOMIC);
+ if (unlikely(err))
+ goto free_skb;
+ }
+
+ /* FIXME: The use of this function with non-linear skb's really needs
+ * to be audited.
+ */
+ err = skb_linearize(skb);
+ if (unlikely(err))
+ goto free_skb;
+
+ memset(skb->data + skb->len, 0x1, pad);
+ return 0;
+
+free_skb:
+ kfree_skb(skb);
+ return err;
+}
+
+netdev_tx_t txgbe_xmit_frame_ring(struct sk_buff *skb,
+ struct txgbe_adapter *adapter,
+ struct txgbe_ring *tx_ring)
+{
+ struct txgbe_tx_buffer *first;
+ int tso;
+ u32 tx_flags = 0;
+ unsigned short f;
+ u16 count = TXD_USE_COUNT(skb_headlen(skb));
+ __be16 protocol = skb->protocol;
+ u8 hdr_len = 0;
+ txgbe_dptype dptype;
+
+ /* work around hw errata 3: frames whose raw EtherType/length
+ * field reads 0x3, 0x4 or 0x5 are padded to the minimum frame
+ * size with non-zero bytes
+ */
+ u16 _llc_len, *llc_len;
+
+ llc_len = skb_header_pointer(skb, ETH_HLEN - 2, sizeof(u16), &_llc_len);
+ if (llc_len &&
+ (*llc_len == 0x3 || *llc_len == 0x4 || *llc_len == 0x5)) {
+ if (txgbe_skb_pad_nonzero(skb, ETH_ZLEN - skb->len))
+ return NETDEV_TX_OK; /* skb is freed on error */
+ __skb_put(skb, ETH_ZLEN - skb->len);
+ }
+
+ /*
+ * need: 1 descriptor per page * PAGE_SIZE/TXGBE_MAX_DATA_PER_TXD,
+ * + 1 desc for skb_headlen/TXGBE_MAX_DATA_PER_TXD,
+ * + 2 desc gap to keep tail from touching head,
+ * + 1 desc for context descriptor,
+ * otherwise try next time
+ */
+ for (f = 0; f < skb_shinfo(skb)->nr_frags; f++)
+ count += TXD_USE_COUNT(skb_frag_size(&skb_shinfo(skb)->
+ frags[f]));
+
+ if (txgbe_maybe_stop_tx(tx_ring, count + 3)) {
+ tx_ring->tx_stats.tx_busy++;
+ return NETDEV_TX_BUSY;
+ }
+
+ /* record the location of the first descriptor for this packet */
+ first = &tx_ring->tx_buffer_info[tx_ring->next_to_use];
+ first->skb = skb;
+ first->bytecount = skb->len;
+ first->gso_segs = 1;
+
+ /* if we have a HW VLAN tag being added default to the HW one */
+ if (skb_vlan_tag_present(skb)) {
+ tx_flags |= skb_vlan_tag_get(skb) << TXGBE_TX_FLAGS_VLAN_SHIFT;
+ tx_flags |= TXGBE_TX_FLAGS_HW_VLAN;
+ /* else if it is a SW VLAN check the next protocol and store the tag */
+ } else if (protocol == htons(ETH_P_8021Q)) {
+ struct vlan_hdr *vhdr, _vhdr;
+ vhdr = skb_header_pointer(skb, ETH_HLEN, sizeof(_vhdr), &_vhdr);
+ if (!vhdr)
+ goto out_drop;
+
+ protocol = vhdr->h_vlan_encapsulated_proto;
+ tx_flags |= ntohs(vhdr->h_vlan_TCI) <<
+ TXGBE_TX_FLAGS_VLAN_SHIFT;
+ tx_flags |= TXGBE_TX_FLAGS_SW_VLAN;
+ }
+
+ if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) &&
+ adapter->ptp_clock) {
+ if (!test_and_set_bit_lock(__TXGBE_PTP_TX_IN_PROGRESS,
+ &adapter->state)) {
+ skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
+ tx_flags |= TXGBE_TX_FLAGS_TSTAMP;
+
+ /* schedule check for Tx timestamp */
+ adapter->ptp_tx_skb = skb_get(skb);
+ adapter->ptp_tx_start = jiffies;
+ schedule_work(&adapter->ptp_tx_work);
+ } else {
+ adapter->tx_hwtstamp_skipped++;
+ }
+ }
+
+ if ((adapter->flags & TXGBE_FLAG_DCB_ENABLED) &&
+ ((tx_flags & (TXGBE_TX_FLAGS_HW_VLAN | TXGBE_TX_FLAGS_SW_VLAN)) ||
+ (skb->priority != TC_PRIO_CONTROL))) {
+ tx_flags &= ~TXGBE_TX_FLAGS_VLAN_PRIO_MASK;
+ tx_flags |= skb->priority <<
+ TXGBE_TX_FLAGS_VLAN_PRIO_SHIFT;
+ if (tx_flags & TXGBE_TX_FLAGS_SW_VLAN) {
+ struct vlan_ethhdr *vhdr;
+ if (skb_header_cloned(skb) &&
+ pskb_expand_head(skb, 0, 0, GFP_ATOMIC))
+ goto out_drop;
+ vhdr = (struct vlan_ethhdr *)skb->data;
+ vhdr->h_vlan_TCI = htons(tx_flags >>
+ TXGBE_TX_FLAGS_VLAN_SHIFT);
+ } else {
+ tx_flags |= TXGBE_TX_FLAGS_HW_VLAN;
+ }
+ }
+
+ /* record initial flags and protocol */
+ first->tx_flags = tx_flags;
+ first->protocol = protocol;
+
+ dptype = encode_tx_desc_ptype(first);
+
+ tso = txgbe_tso(tx_ring, first, &hdr_len, dptype);
+ if (tso < 0)
+ goto out_drop;
+ else if (!tso)
+ txgbe_tx_csum(tx_ring, first, dptype);
+
+ /* add the ATR filter if ATR is on */
+ if (test_bit(__TXGBE_TX_FDIR_INIT_DONE, &tx_ring->state))
+ txgbe_atr(tx_ring, first, dptype);
+
+ if (txgbe_tx_map(tx_ring, first, hdr_len))
+ goto cleanup_tx_tstamp;
+
+ return NETDEV_TX_OK;
+
+out_drop:
+ dev_kfree_skb_any(first->skb);
+ first->skb = NULL;
+
+cleanup_tx_tstamp:
+ if (unlikely(tx_flags & TXGBE_TX_FLAGS_TSTAMP)) {
+ dev_kfree_skb_any(adapter->ptp_tx_skb);
+ adapter->ptp_tx_skb = NULL;
+ cancel_work_sync(&adapter->ptp_tx_work);
+ clear_bit_unlock(__TXGBE_PTP_TX_IN_PROGRESS, &adapter->state);
+ }
+
+ return NETDEV_TX_OK;
+}
+
+static netdev_tx_t txgbe_xmit_frame(struct sk_buff *skb,
+ struct net_device *netdev)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+ struct txgbe_ring *tx_ring;
+ unsigned int r_idx = skb->queue_mapping;
+
+ if (!netif_carrier_ok(netdev)) {
+ dev_kfree_skb_any(skb);
+ return NETDEV_TX_OK;
+ }
+
+ /*
+ * The minimum packet size for olinfo paylen is 17, so pad the skb
+ * in order to meet this minimum size requirement.
+ */
+ if (skb_put_padto(skb, 17))
+ return NETDEV_TX_OK;
+
+ if (r_idx >= adapter->num_tx_queues)
+ r_idx = r_idx % adapter->num_tx_queues;
+ tx_ring = adapter->tx_ring[r_idx];
+
+ return txgbe_xmit_frame_ring(skb, adapter, tx_ring);
+}
+
+/**
+ * txgbe_set_mac - Change the Ethernet Address of the NIC
+ * @netdev: network interface device structure
+ * @p: pointer to an address structure
+ *
+ * Returns 0 on success, negative on failure
+ **/
+static int txgbe_set_mac(struct net_device *netdev, void *p)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+ struct txgbe_hw *hw = &adapter->hw;
+ struct sockaddr *addr = p;
+
+ if (!is_valid_ether_addr(addr->sa_data))
+ return -EADDRNOTAVAIL;
+
+ txgbe_del_mac_filter(adapter, hw->mac.addr, VMDQ_P(0));
+ memcpy(netdev->dev_addr, addr->sa_data, netdev->addr_len);
+ memcpy(hw->mac.addr, addr->sa_data, netdev->addr_len);
+
+ txgbe_mac_set_default_filter(adapter, hw->mac.addr);
+
+ return 0;
+}
+
+/**
+ * txgbe_add_sanmac_netdev - Add the SAN MAC address to the corresponding
+ * netdev->dev_addr_list
+ * @dev: network interface device structure
+ *
+ * Returns non-zero on failure
+ **/
+static int txgbe_add_sanmac_netdev(struct net_device *dev)
+{
+ int err = 0;
+ struct txgbe_adapter *adapter = netdev_priv(dev);
+ struct txgbe_hw *hw = &adapter->hw;
+
+ if (is_valid_ether_addr(hw->mac.san_addr)) {
+ rtnl_lock();
+ err = dev_addr_add(dev, hw->mac.san_addr,
+ NETDEV_HW_ADDR_T_SAN);
+ rtnl_unlock();
+
+ /* update SAN MAC vmdq pool selection */
+ TCALL(hw, mac.ops.set_vmdq_san_mac, VMDQ_P(0));
+ }
+ return err;
+}
+
+/**
+ * txgbe_del_sanmac_netdev - Remove the SAN MAC address from the
+ * corresponding netdev->dev_addr_list
+ * @dev: network interface device structure
+ *
+ * Returns non-zero on failure
+ **/
+static int txgbe_del_sanmac_netdev(struct net_device *dev)
+{
+ int err = 0;
+ struct txgbe_adapter *adapter = netdev_priv(dev);
+ struct txgbe_mac_info *mac = &adapter->hw.mac;
+
+ if (is_valid_ether_addr(mac->san_addr)) {
+ rtnl_lock();
+ err = dev_addr_del(dev, mac->san_addr, NETDEV_HW_ADDR_T_SAN);
+ rtnl_unlock();
+ }
+ return err;
+}
+
+static int txgbe_mii_ioctl(struct net_device *netdev, struct ifreq *ifr,
+ int cmd)
+{
+ struct mii_ioctl_data *mii = (struct mii_ioctl_data *) &ifr->ifr_data;
+ int prtad, devad, ret;
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+ struct txgbe_hw *hw = &adapter->hw;
+ u16 value = 0;
+
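+ /* the clause-45 port (prtad) and device (devad) addresses are
+ * packed into the single phy_id field of the MII request
+ */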
+ prtad = (mii->phy_id & MDIO_PHY_ID_PRTAD) >> 5;
+ devad = (mii->phy_id & MDIO_PHY_ID_DEVAD);
+
+ if (cmd == SIOCGMIIREG) {
+ ret = txgbe_read_mdio(&hw->phy_dev, prtad, devad, mii->reg_num,
+ &value);
+ if (ret < 0)
+ return ret;
+ mii->val_out = value;
+ return 0;
+ } else {
+ return txgbe_write_mdio(&hw->phy_dev, prtad, devad,
+ mii->reg_num, mii->val_in);
+ }
+}
+
+static int txgbe_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+
+ switch (cmd) {
+ case SIOCGHWTSTAMP:
+ return txgbe_ptp_get_ts_config(adapter, ifr);
+ case SIOCSHWTSTAMP:
+ return txgbe_ptp_set_ts_config(adapter, ifr);
+ case SIOCGMIIREG:
+ case SIOCSMIIREG:
+ return txgbe_mii_ioctl(netdev, ifr, cmd);
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
+/* txgbe_validate_rtr - verify 802.1Qp to Rx packet buffer mapping is valid.
+ * @adapter: pointer to txgbe_adapter
+ * @tc: number of traffic classes currently enabled
+ *
+ * Configure a valid 802.1Qp to Rx packet buffer mapping, i.e. confirm
+ * 802.1Q priority maps to a packet buffer that exists.
+ */
+static void txgbe_validate_rtr(struct txgbe_adapter *adapter, u8 tc)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ u32 reg, rsave;
+
+ reg = rd32(hw, TXGBE_RDB_UP2TC);
+ rsave = reg;
+ if (reg != rsave)
+ wr32(hw, TXGBE_RDB_UP2TC, reg);
+}
+
+/**
+ * txgbe_set_prio_tc_map - Configure netdev prio tc map
+ * @adapter: Pointer to adapter struct
+ *
+ * Populate the netdev user priority to tc map
+ */
+static void txgbe_set_prio_tc_map(struct txgbe_adapter __maybe_unused *adapter)
+{
+ UNREFERENCED_PARAMETER(adapter);
+}
+
+/**
+ * txgbe_setup_tc - routine to configure net_device for multiple traffic
+ * classes.
+ *
+ * @dev: net device to configure
+ * @tc: number of traffic classes to enable
+ */
+int txgbe_setup_tc(struct net_device *dev, u8 tc)
+{
+ struct txgbe_adapter *adapter = netdev_priv(dev);
+
+ if (tc && adapter->num_vmdqs > TXGBE_MAX_DCBMACVLANS)
+ return -EBUSY;
+
+ /* Hardware has to reinitialize queues and interrupts to
+ * match packet buffer alignment. Unfortunately, the
+ * hardware is not flexible enough to do this dynamically.
+ */
+ if (netif_running(dev))
+ txgbe_close(dev);
+ else
+ txgbe_reset(adapter);
+
+ txgbe_clear_interrupt_scheme(adapter);
+
+ if (tc) {
+ netdev_set_num_tc(dev, tc);
+ txgbe_set_prio_tc_map(adapter);
+ } else {
+ netdev_reset_tc(dev);
+ }
+
+ txgbe_validate_rtr(adapter, tc);
+
+ txgbe_init_interrupt_scheme(adapter);
+ if (netif_running(dev))
+ txgbe_open(dev);
+
+ return 0;
+}
+
+static int txgbe_setup_tc_mqprio(struct net_device *dev,
+ struct tc_mqprio_qopt *mqprio)
+{
+ mqprio->hw = TC_MQPRIO_HW_OFFLOAD_TCS;
+ return txgbe_setup_tc(dev, mqprio->num_tc);
+}
+
+static int __txgbe_setup_tc(struct net_device *dev, enum tc_setup_type type,
+ void *type_data)
+{
+ switch (type) {
+ case TC_SETUP_QDISC_MQPRIO:
+ return txgbe_setup_tc_mqprio(dev, type_data);
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
+void txgbe_do_reset(struct net_device *netdev)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+
+ if (netif_running(netdev))
+ txgbe_reinit_locked(adapter);
+ else
+ txgbe_reset(adapter);
+}
+
+static netdev_features_t txgbe_fix_features(struct net_device *netdev,
+ netdev_features_t features)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+
+ /* If Rx checksum is disabled, then RSC/LRO should also be disabled */
+ if (!(features & NETIF_F_RXCSUM))
+ features &= ~NETIF_F_LRO;
+
+ /* Turn off LRO if not RSC capable */
+ if (!(adapter->flags2 & TXGBE_FLAG2_RSC_CAPABLE))
+ features &= ~NETIF_F_LRO;
+
+ return features;
+}
+
+static int txgbe_set_features(struct net_device *netdev,
+ netdev_features_t features)
+{
+ struct txgbe_adapter *adapter = netdev_priv(netdev);
+ bool need_reset = false;
+
+ /* Make sure RSC matches LRO, reset if change */
+ if (!(features & NETIF_F_LRO)) {
+ if (adapter->flags2 & TXGBE_FLAG2_RSC_ENABLED)
+ need_reset = true;
+ adapter->flags2 &= ~TXGBE_FLAG2_RSC_ENABLED;
+ } else if ((adapter->flags2 & TXGBE_FLAG2_RSC_CAPABLE) &&
+ !(adapter->flags2 & TXGBE_FLAG2_RSC_ENABLED)) {
+ if (adapter->rx_itr_setting == 1 ||
+ adapter->rx_itr_setting > TXGBE_MIN_RSC_ITR) {
+ adapter->flags2 |= TXGBE_FLAG2_RSC_ENABLED;
+ need_reset = true;
+ } else if ((netdev->features ^ features) & NETIF_F_LRO) {
+ e_info(probe, "rx-usecs set too low, disabling RSC\n");
+ }
+ }
+
+ /*
+ * Check if Flow Director n-tuple support was enabled or disabled. If
+ * the state changed, we need to reset.
+ */
+ switch (features & NETIF_F_NTUPLE) {
+ case NETIF_F_NTUPLE:
+ /* turn off ATR, enable perfect filters and reset */
+ if (!(adapter->flags & TXGBE_FLAG_FDIR_PERFECT_CAPABLE))
+ need_reset = true;
+
+ adapter->flags &= ~TXGBE_FLAG_FDIR_HASH_CAPABLE;
+ adapter->flags |= TXGBE_FLAG_FDIR_PERFECT_CAPABLE;
+ break;
+ default:
+ /* turn off perfect filters, enable ATR and reset */
+ if (adapter->flags & TXGBE_FLAG_FDIR_PERFECT_CAPABLE)
+ need_reset = true;
+
+ adapter->flags &= ~TXGBE_FLAG_FDIR_PERFECT_CAPABLE;
+
+ /* We cannot enable ATR if VMDq is enabled */
+ if (adapter->flags & TXGBE_FLAG_VMDQ_ENABLED)
+ break;
+
+ /* We cannot enable ATR if we have 2 or more traffic classes */
+ if (netdev_get_num_tc(netdev) > 1)
+ break;
+
+ /* We cannot enable ATR if RSS is disabled */
+ if (adapter->ring_feature[RING_F_RSS].limit <= 1)
+ break;
+
+ /* A sample rate of 0 indicates ATR disabled */
+ if (!adapter->atr_sample_rate)
+ break;
+
+ adapter->flags |= TXGBE_FLAG_FDIR_HASH_CAPABLE;
+ break;
+ }
+
+ if (features & NETIF_F_HW_VLAN_CTAG_RX)
+ txgbe_vlan_strip_enable(adapter);
+ else
+ txgbe_vlan_strip_disable(adapter);
+
+ if (adapter->flags & TXGBE_FLAG_VXLAN_OFFLOAD_CAPABLE &&
+ features & NETIF_F_RXCSUM) {
+ if (!need_reset)
+ adapter->flags2 |= TXGBE_FLAG2_VXLAN_REREG_NEEDED;
+ } else {
+ txgbe_clear_vxlan_port(adapter);
+ }
+
+ if (features & NETIF_F_RXHASH) {
+ if (!(adapter->flags2 & TXGBE_FLAG2_RSS_ENABLED)) {
+ wr32m(&adapter->hw, TXGBE_RDB_RA_CTL,
+ TXGBE_RDB_RA_CTL_RSS_EN, TXGBE_RDB_RA_CTL_RSS_EN);
+ adapter->flags2 |= TXGBE_FLAG2_RSS_ENABLED;
+ }
+ } else {
+ if (adapter->flags2 & TXGBE_FLAG2_RSS_ENABLED) {
+ wr32m(&adapter->hw, TXGBE_RDB_RA_CTL,
+ TXGBE_RDB_RA_CTL_RSS_EN, ~TXGBE_RDB_RA_CTL_RSS_EN);
+ adapter->flags2 &= ~TXGBE_FLAG2_RSS_ENABLED;
+ }
+ }
+
+ if (need_reset)
+ txgbe_do_reset(netdev);
+
+ return 0;
+}
+
+/**
+ * txgbe_add_udp_tunnel_port - Get notifications about adding UDP tunnel ports
+ * @dev: The port's netdev
+ * @ti: Tunnel endpoint information
+ **/
+static void txgbe_add_udp_tunnel_port(struct net_device *dev,
+ struct udp_tunnel_info *ti)
+{
+ struct txgbe_adapter *adapter = netdev_priv(dev);
+ struct txgbe_hw *hw = &adapter->hw;
+ __be16 port = ti->port;
+
+ if (ti->sa_family != AF_INET)
+ return;
+
+ switch (ti->type) {
+ case UDP_TUNNEL_TYPE_VXLAN:
+ if (!(adapter->flags & TXGBE_FLAG_VXLAN_OFFLOAD_CAPABLE))
+ return;
+
+ if (adapter->vxlan_port == port)
+ return;
+
+ if (adapter->vxlan_port) {
+ netdev_info(dev,
+ "VXLAN port %d set, not adding port %d\n",
+ ntohs(adapter->vxlan_port),
+ ntohs(port));
+ return;
+ }
+
+ adapter->vxlan_port = port;
+ wr32(hw, TXGBE_CFG_VXLAN, port);
+ break;
+ case UDP_TUNNEL_TYPE_GENEVE:
+ if (adapter->geneve_port == port)
+ return;
+
+ if (adapter->geneve_port) {
+ netdev_info(dev,
+ "GENEVE port %d set, not adding port %d\n",
+ ntohs(adapter->geneve_port),
+ ntohs(port));
+ return;
+ }
+
+ adapter->geneve_port = port;
+ wr32(hw, TXGBE_CFG_GENEVE, port);
+ break;
+ default:
+ return;
+ }
+}
+
+/**
+ * txgbe_del_udp_tunnel_port - Get notifications about removing UDP tunnel ports
+ * @dev: The port's netdev
+ * @ti: Tunnel endpoint information
+ **/
+static void txgbe_del_udp_tunnel_port(struct net_device *dev,
+ struct udp_tunnel_info *ti)
+{
+ struct txgbe_adapter *adapter = netdev_priv(dev);
+
+ if (ti->type != UDP_TUNNEL_TYPE_VXLAN &&
+ ti->type != UDP_TUNNEL_TYPE_GENEVE)
+ return;
+
+ if (ti->sa_family != AF_INET)
+ return;
+
+ switch (ti->type) {
+ case UDP_TUNNEL_TYPE_VXLAN:
+ if (!(adapter->flags & TXGBE_FLAG_VXLAN_OFFLOAD_CAPABLE))
+ return;
+
+ if (adapter->vxlan_port != ti->port) {
+ netdev_info(dev, "VXLAN port %d not found\n",
+ ntohs(ti->port));
+ return;
+ }
+
+ txgbe_clear_vxlan_port(adapter);
+ adapter->flags2 |= TXGBE_FLAG2_VXLAN_REREG_NEEDED;
+ break;
+ case UDP_TUNNEL_TYPE_GENEVE:
+ if (adapter->geneve_port != ti->port) {
+ netdev_info(dev, "GENEVE port %d not found\n",
+ ntohs(ti->port));
+ return;
+ }
+
+ adapter->geneve_port = 0;
+ wr32(&adapter->hw, TXGBE_CFG_GENEVE, 0);
+ break;
+ default:
+ return;
+ }
+}
+
+static int txgbe_ndo_fdb_add(struct ndmsg *ndm, struct nlattr *tb[],
+ struct net_device *dev,
+ const unsigned char *addr,
+ u16 vid,
+ u16 flags)
+{
+ /* guarantee we can provide a unique filter for the unicast address */
+ if (is_unicast_ether_addr(addr) || is_link_local_ether_addr(addr)) {
+ if (TXGBE_MAX_PF_MACVLANS <= netdev_uc_count(dev))
+ return -ENOMEM;
+ }
+
+ return ndo_dflt_fdb_add(ndm, tb, dev, addr, vid, flags);
+}
+
+static int txgbe_ndo_bridge_setlink(struct net_device *dev,
+ struct nlmsghdr *nlh,
+ __always_unused u16 flags)
+{
+ struct txgbe_adapter *adapter = netdev_priv(dev);
+ struct nlattr *attr, *br_spec;
+ int rem;
+
+ if (!(adapter->flags & TXGBE_FLAG_SRIOV_ENABLED))
+ return -EOPNOTSUPP;
+
+ br_spec = nlmsg_find_attr(nlh, sizeof(struct ifinfomsg), IFLA_AF_SPEC);
+
+ nla_for_each_nested(attr, br_spec, rem) {
+ __u16 mode;
+
+ if (nla_type(attr) != IFLA_BRIDGE_MODE)
+ continue;
+
+ mode = nla_get_u16(attr);
+ if (mode == BRIDGE_MODE_VEPA) {
+ adapter->flags |= TXGBE_FLAG_SRIOV_VEPA_BRIDGE_MODE;
+ } else if (mode == BRIDGE_MODE_VEB) {
+ adapter->flags &= ~TXGBE_FLAG_SRIOV_VEPA_BRIDGE_MODE;
+ } else {
+ return -EINVAL;
+ }
+
+ adapter->bridge_mode = mode;
+
+ /* re-configure settings related to bridge mode */
+ txgbe_configure_bridge_mode(adapter);
+
+ e_info(drv, "enabling bridge mode: %s\n",
+ mode == BRIDGE_MODE_VEPA ? "VEPA" : "VEB");
+ }
+
+ return 0;
+}
+
+static int txgbe_ndo_bridge_getlink(struct sk_buff *skb, u32 pid, u32 seq,
+ struct net_device *dev,
+ u32 __maybe_unused filter_mask,
+ int nlflags)
+{
+ struct txgbe_adapter *adapter = netdev_priv(dev);
+ u16 mode;
+
+ if (!(adapter->flags & TXGBE_FLAG_SRIOV_ENABLED))
+ return 0;
+
+ mode = adapter->bridge_mode;
+ return ndo_dflt_bridge_getlink(skb, pid, seq, dev, mode, 0, 0, nlflags,
+ filter_mask, NULL);
+}
+
+#define TXGBE_MAX_TUNNEL_HDR_LEN 80
+static netdev_features_t
+txgbe_features_check(struct sk_buff *skb, struct net_device *dev,
+ netdev_features_t features)
+{
+ u32 vlan_num = 0;
+ u16 vlan_depth = skb->mac_len;
+ __be16 type = skb->protocol;
+ struct vlan_hdr *vh;
+
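+ /* count VLAN tags, both the offloaded tag and any carried in
+ * the frame itself; with more than two tags the hardware VLAN
+ * insertion offloads are turned off below
+ */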
+ if (skb_vlan_tag_present(skb))
+ vlan_num++;
+
+ if (vlan_depth)
+ vlan_depth -= VLAN_HLEN;
+ else
+ vlan_depth = ETH_HLEN;
+
+ while (type == htons(ETH_P_8021Q) || type == htons(ETH_P_8021AD)) {
+ vlan_num++;
+ vh = (struct vlan_hdr *)(skb->data + vlan_depth);
+ type = vh->h_vlan_encapsulated_proto;
+ vlan_depth += VLAN_HLEN;
+ }
+
+ if (vlan_num > 2)
+ features &= ~(NETIF_F_HW_VLAN_CTAG_TX |
+ NETIF_F_HW_VLAN_STAG_TX);
+
+ if (skb->encapsulation) {
+ if (unlikely(skb_inner_mac_header(skb) -
+ skb_transport_header(skb) >
+ TXGBE_MAX_TUNNEL_HDR_LEN))
+ return features & ~NETIF_F_CSUM_MASK;
+ }
+ return features;
+}
+
+static const struct net_device_ops txgbe_netdev_ops = {
+ .ndo_open = txgbe_open,
+ .ndo_stop = txgbe_close,
+ .ndo_start_xmit = txgbe_xmit_frame,
+ .ndo_set_rx_mode = txgbe_set_rx_mode,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_set_mac_address = txgbe_set_mac,
+ .ndo_change_mtu = txgbe_change_mtu,
+ .ndo_tx_timeout = txgbe_tx_timeout,
+ .ndo_vlan_rx_add_vid = txgbe_vlan_rx_add_vid,
+ .ndo_vlan_rx_kill_vid = txgbe_vlan_rx_kill_vid,
+ .ndo_do_ioctl = txgbe_ioctl,
+ .ndo_get_stats64 = txgbe_get_stats64,
+ .ndo_setup_tc = __txgbe_setup_tc,
+ .ndo_fdb_add = txgbe_ndo_fdb_add,
+ .ndo_bridge_setlink = txgbe_ndo_bridge_setlink,
+ .ndo_bridge_getlink = txgbe_ndo_bridge_getlink,
+ .ndo_udp_tunnel_add = txgbe_add_udp_tunnel_port,
+ .ndo_udp_tunnel_del = txgbe_del_udp_tunnel_port,
+ .ndo_features_check = txgbe_features_check,
+ .ndo_set_features = txgbe_set_features,
+ .ndo_fix_features = txgbe_fix_features,
+};
+
+void txgbe_assign_netdev_ops(struct net_device *dev)
+{
+ dev->netdev_ops = &txgbe_netdev_ops;
+ txgbe_set_ethtool_ops(dev);
+ dev->watchdog_timeo = 5 * HZ;
+}
+
+/**
+ * txgbe_wol_supported - Check whether device supports WoL
+ * @adapter: the adapter private structure
+ *
+ * This function is used by probe and ethtool to determine
+ * which devices have WoL support
+ *
+ **/
+int txgbe_wol_supported(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ u16 wol_cap = adapter->eeprom_cap & TXGBE_DEVICE_CAPS_WOL_MASK;
+
+ /* check eeprom to see if WOL is enabled */
+ if ((wol_cap == TXGBE_DEVICE_CAPS_WOL_PORT0_1) ||
+ ((wol_cap == TXGBE_DEVICE_CAPS_WOL_PORT0) &&
+ (hw->bus.func == 0)))
+ return true;
+ else
+ return false;
+}
+
+/**
+ * txgbe_probe - Device Initialization Routine
+ * @pdev: PCI device information struct
+ * @ent: entry in txgbe_pci_tbl
+ *
+ * Returns 0 on success, negative on failure
+ *
+ * txgbe_probe initializes an adapter identified by a pci_dev structure.
+ * The OS initialization, configuring of the adapter private structure,
+ * and a hardware reset occur.
+ **/
+static int txgbe_probe(struct pci_dev *pdev,
+ const struct pci_device_id __always_unused *ent)
+{
+ struct net_device *netdev;
+ struct txgbe_adapter *adapter = NULL;
+ struct txgbe_hw *hw = NULL;
+ static int cards_found;
+ int err, pci_using_dac, expected_gts;
+ u16 offset = 0;
+ u16 eeprom_verh = 0, eeprom_verl = 0;
+ u16 eeprom_cfg_blkh = 0, eeprom_cfg_blkl = 0;
+ u32 etrack_id = 0;
+ u16 build = 0, major = 0, patch = 0;
+ char *info_string, *i_s_var;
+ u8 part_str[TXGBE_PBANUM_LENGTH];
+ unsigned int indices = MAX_TX_QUEUES;
+
+ bool disable_dev = false;
+ netdev_features_t hw_features;
+
+ err = pci_enable_device_mem(pdev);
+ if (err)
+ return err;
+
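+ /* prefer 64-bit DMA and coherent masks; fall back to 32-bit
+ * masks when the platform rejects them
+ */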
+ if (!dma_set_mask(pci_dev_to_dev(pdev), DMA_BIT_MASK(64)) &&
+ !dma_set_coherent_mask(pci_dev_to_dev(pdev), DMA_BIT_MASK(64))) {
+ pci_using_dac = 1;
+ } else {
+ err = dma_set_mask(pci_dev_to_dev(pdev), DMA_BIT_MASK(32));
+ if (err) {
+ err = dma_set_coherent_mask(pci_dev_to_dev(pdev),
+ DMA_BIT_MASK(32));
+ if (err) {
+ dev_err(pci_dev_to_dev(pdev),
+ "No usable DMA configuration, aborting\n");
+ goto err_dma;
+ }
+ }
+ pci_using_dac = 0;
+ }
+
+ err = pci_request_selected_regions(pdev, pci_select_bars(pdev,
+ IORESOURCE_MEM), txgbe_driver_name);
+ if (err) {
+ dev_err(pci_dev_to_dev(pdev),
+ "pci_request_selected_regions failed 0x%x\n", err);
+ goto err_pci_reg;
+ }
+
+ hw = vmalloc(sizeof(struct txgbe_hw));
+ if (!hw) {
+ pr_info("Unable to allocate memory for early mac check\n");
+ } else {
+ hw->vendor_id = pdev->vendor;
+ hw->device_id = pdev->device;
+ vfree(hw);
+ }
+
+ pci_enable_pcie_error_reporting(pdev);
+ pci_set_master(pdev);
+ /* errata 16 */
+ pcie_capability_clear_and_set_word(pdev, PCI_EXP_DEVCTL,
+ PCI_EXP_DEVCTL_READRQ,
+ 0x1000);
+
+ netdev = alloc_etherdev_mq(sizeof(struct txgbe_adapter), indices);
+ if (!netdev) {
+ err = -ENOMEM;
+ goto err_alloc_etherdev;
+ }
+
+ SET_NETDEV_DEV(netdev, pci_dev_to_dev(pdev));
+
+ adapter = netdev_priv(netdev);
+ adapter->netdev = netdev;
+ adapter->pdev = pdev;
+ hw = &adapter->hw;
+ hw->back = adapter;
+ adapter->msg_enable = (1 << DEFAULT_DEBUG_LEVEL_SHIFT) - 1;
+
+ hw->hw_addr = ioremap(pci_resource_start(pdev, 0),
+ pci_resource_len(pdev, 0));
+ adapter->io_addr = hw->hw_addr;
+ if (!hw->hw_addr) {
+ err = -EIO;
+ goto err_ioremap;
+ }
+
+ txgbe_assign_netdev_ops(netdev);
+ strncpy(netdev->name, pci_name(pdev), sizeof(netdev->name) - 1);
+ adapter->bd_number = cards_found;
+
+ /* setup the private structure */
+ err = txgbe_sw_init(adapter);
+ if (err)
+ goto err_sw_init;
+
+ /*
+ * check_options must be called before setup_link to set up
+ * hw->fc completely
+ */
+ txgbe_check_options(adapter);
+ txgbe_bp_mode_setting(adapter);
+ TCALL(hw, mac.ops.set_lan_id);
+
+ /* check if flash load is done after hw power up */
+ err = txgbe_check_flash_load(hw, TXGBE_SPI_ILDR_STATUS_PERST);
+ if (err)
+ goto err_sw_init;
+ err = txgbe_check_flash_load(hw, TXGBE_SPI_ILDR_STATUS_PWRRST);
+ if (err)
+ goto err_sw_init;
+
+ /* reset_hw fills in the perm_addr as well */
+ hw->phy.reset_if_overtemp = true;
+ err = TCALL(hw, mac.ops.reset_hw);
+ hw->phy.reset_if_overtemp = false;
+ if (err == TXGBE_ERR_SFP_NOT_PRESENT) {
+ err = 0;
+ } else if (err == TXGBE_ERR_SFP_NOT_SUPPORTED) {
+		e_dev_err("failed to load because an unsupported SFP+ module type was detected.\n");
+		e_dev_err("Reload the driver after installing a supported module.\n");
+ goto err_sw_init;
+ } else if (err) {
+ e_dev_err("HW Init failed: %d\n", err);
+ goto err_sw_init;
+ }
+
+ netdev->features |= NETIF_F_SG |
+ NETIF_F_IP_CSUM;
+
+#ifdef NETIF_F_IPV6_CSUM
+ netdev->features |= NETIF_F_IPV6_CSUM;
+#endif
+
+ netdev->features |= NETIF_F_HW_VLAN_CTAG_TX |
+ NETIF_F_HW_VLAN_CTAG_RX;
+
+ netdev->features |= txgbe_tso_features();
+
+ if (adapter->flags2 & TXGBE_FLAG2_RSS_ENABLED)
+ netdev->features |= NETIF_F_RXHASH;
+
+ netdev->features |= NETIF_F_RXCSUM;
+
+ /* copy netdev features into list of user selectable features */
+ hw_features = netdev->hw_features;
+ hw_features |= netdev->features;
+
+ /* give us the option of enabling RSC/LRO later */
+ if (adapter->flags2 & TXGBE_FLAG2_RSC_CAPABLE)
+ hw_features |= NETIF_F_LRO;
+
+ /* set this bit last since it cannot be part of hw_features */
+ netdev->features |= NETIF_F_HW_VLAN_CTAG_FILTER;
+
+ netdev->features |= NETIF_F_NTUPLE;
+
+ adapter->flags |= TXGBE_FLAG_FDIR_PERFECT_CAPABLE;
+ hw_features |= NETIF_F_NTUPLE;
+ netdev->hw_features = hw_features;
+
+ netdev->vlan_features |= NETIF_F_SG |
+ NETIF_F_IP_CSUM |
+ NETIF_F_IPV6_CSUM |
+ NETIF_F_TSO |
+ NETIF_F_TSO6;
+
+ netdev->hw_enc_features |= NETIF_F_SG | NETIF_F_IP_CSUM |
+ TXGBE_GSO_PARTIAL_FEATURES | NETIF_F_TSO;
+ if (netdev->features & NETIF_F_LRO) {
+ if ((adapter->flags2 & TXGBE_FLAG2_RSC_CAPABLE) &&
+ ((adapter->rx_itr_setting == 1) ||
+ (adapter->rx_itr_setting > TXGBE_MIN_RSC_ITR))) {
+ adapter->flags2 |= TXGBE_FLAG2_RSC_ENABLED;
+ } else if (adapter->flags2 & TXGBE_FLAG2_RSC_CAPABLE) {
+ e_dev_info("InterruptThrottleRate set too high, "
+ "disabling RSC\n");
+ }
+ }
+
+ netdev->priv_flags |= IFF_UNICAST_FLT;
+ netdev->priv_flags |= IFF_SUPP_NOFCS;
+
+ netdev->min_mtu = ETH_MIN_MTU;
+ netdev->max_mtu = TXGBE_MAX_JUMBO_FRAME_SIZE - (ETH_HLEN + ETH_FCS_LEN);
+
+ if (pci_using_dac) {
+ netdev->features |= NETIF_F_HIGHDMA;
+ netdev->vlan_features |= NETIF_F_HIGHDMA;
+ }
+
+ /* make sure the EEPROM is good */
+ if (TCALL(hw, eeprom.ops.validate_checksum, NULL)) {
+ e_dev_err("The EEPROM Checksum Is Not Valid\n");
+ err = -EIO;
+ goto err_sw_init;
+ }
+
+ memcpy(netdev->dev_addr, hw->mac.perm_addr, netdev->addr_len);
+
+ if (!is_valid_ether_addr(netdev->dev_addr)) {
+ e_dev_err("invalid MAC address\n");
+ err = -EIO;
+ goto err_sw_init;
+ }
+
+ txgbe_mac_set_default_filter(adapter, hw->mac.perm_addr);
+
+ timer_setup(&adapter->service_timer, txgbe_service_timer, 0);
+
+ if (TXGBE_REMOVED(hw->hw_addr)) {
+ err = -EIO;
+ goto err_sw_init;
+ }
+ INIT_WORK(&adapter->service_task, txgbe_service_task);
+ set_bit(__TXGBE_SERVICE_INITED, &adapter->state);
+ clear_bit(__TXGBE_SERVICE_SCHED, &adapter->state);
+
+ err = txgbe_init_interrupt_scheme(adapter);
+ if (err)
+ goto err_sw_init;
+
+ /* WOL not supported for all devices */
+ adapter->wol = 0;
+ TCALL(hw, eeprom.ops.read,
+ hw->eeprom.sw_region_offset + TXGBE_DEVICE_CAPS,
+ &adapter->eeprom_cap);
+
+ if ((hw->subsystem_device_id & TXGBE_WOL_MASK) == TXGBE_WOL_SUP &&
+ hw->bus.lan_id == 0) {
+ adapter->wol = TXGBE_PSR_WKUP_CTL_MAG;
+ wr32(hw, TXGBE_PSR_WKUP_CTL, adapter->wol);
+ }
+ hw->wol_enabled = !!(adapter->wol);
+
+ device_set_wakeup_enable(pci_dev_to_dev(adapter->pdev), adapter->wol);
+
+ /*
+	 * Save off the EEPROM version number and Option ROM version which
+	 * together make a unique identifier for the EEPROM.
+ */
+ TCALL(hw, eeprom.ops.read,
+ hw->eeprom.sw_region_offset + TXGBE_EEPROM_VERSION_H,
+ &eeprom_verh);
+ TCALL(hw, eeprom.ops.read,
+ hw->eeprom.sw_region_offset + TXGBE_EEPROM_VERSION_L,
+ &eeprom_verl);
+ etrack_id = (eeprom_verh << 16) | eeprom_verl;
+
+ TCALL(hw, eeprom.ops.read,
+ hw->eeprom.sw_region_offset + TXGBE_ISCSI_BOOT_CONFIG, &offset);
+
+	/* Make sure the offset to the iSCSI boot config block is valid */
+	if (offset != 0x0 && offset != 0xffff) {
+ TCALL(hw, eeprom.ops.read, offset + 0x84, &eeprom_cfg_blkh);
+ TCALL(hw, eeprom.ops.read, offset + 0x83, &eeprom_cfg_blkl);
+
+		/* Only display the Option ROM version if it exists */
+ if (eeprom_cfg_blkl && eeprom_cfg_blkh) {
+ major = eeprom_cfg_blkl >> 8;
+ build = (eeprom_cfg_blkl << 8) | (eeprom_cfg_blkh >> 8);
+ patch = eeprom_cfg_blkh & 0x00ff;
+
+ snprintf(adapter->eeprom_id, sizeof(adapter->eeprom_id),
+ "0x%08x, %d.%d.%d", etrack_id, major, build,
+ patch);
+ } else {
+ snprintf(adapter->eeprom_id, sizeof(adapter->eeprom_id),
+ "0x%08x", etrack_id);
+ }
+ } else {
+ snprintf(adapter->eeprom_id, sizeof(adapter->eeprom_id),
+ "0x%08x", etrack_id);
+ }
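+	/*
+	 * Worked example (hypothetical EEPROM contents): with
+	 * eeprom_verh = 0x0001 and eeprom_verl = 0x0203, etrack_id is
+	 * 0x00010203; an Option ROM block of eeprom_cfg_blkl = 0x0102 and
+	 * eeprom_cfg_blkh = 0x0304 decodes to major = 1, build = 0x0203 (515)
+	 * and patch = 4, so the id string reads "0x00010203, 1.515.4".
+	 */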
+
+ /* reset the hardware with the new settings */
+ err = TCALL(hw, mac.ops.start_hw);
+ if (err == TXGBE_ERR_EEPROM_VERSION) {
+ /* We are running on a pre-production device, log a warning */
+ e_dev_warn("This device is a pre-production adapter/LOM. "
+ "Please be aware there may be issues associated "
+ "with your hardware. If you are experiencing "
+ "problems please contact your hardware "
+ "representative who provided you with this "
+ "hardware.\n");
+ } else if (err) {
+ e_dev_err("HW init failed\n");
+ goto err_register;
+ }
+
+ /* pick up the PCI bus settings for reporting later */
+ TCALL(hw, mac.ops.get_bus_info);
+
+ strcpy(netdev->name, "eth%d");
+ err = register_netdev(netdev);
+ if (err)
+ goto err_register;
+
+ pci_set_drvdata(pdev, adapter);
+ adapter->netdev_registered = true;
+
+	if ((hw->subsystem_device_id & TXGBE_NCSI_MASK) != TXGBE_NCSI_SUP)
+ /* power down the optics for SFP+ fiber */
+ TCALL(hw, mac.ops.disable_tx_laser);
+
+ /* carrier off reporting is important to ethtool even BEFORE open */
+ netif_carrier_off(netdev);
+ /* keep stopping all the transmit queues for older kernels */
+ netif_tx_stop_all_queues(netdev);
+
+ /* print all messages at the end so that we use our eth%d name */
+
+ /* calculate the expected PCIe bandwidth required for optimal
+ * performance. Note that some older parts will never have enough
+ * bandwidth due to being older generation PCIe parts. We clamp these
+ * parts to ensure that no warning is displayed, as this could confuse
+ * users otherwise. */
+
+ expected_gts = txgbe_enumerate_functions(adapter) * 10;
+
+ /* don't check link if we failed to enumerate functions */
+ if (expected_gts > 0)
+ txgbe_check_minimum_link(adapter, expected_gts);
+
+ if ((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP)
+		e_info(probe, "NCSI : supported\n");
+	else
+		e_info(probe, "NCSI : unsupported\n");
+
+ /* First try to read PBA as a string */
+ err = txgbe_read_pba_string(hw, part_str, TXGBE_PBANUM_LENGTH);
+	if (err)
+		strncpy(part_str, "Unknown", TXGBE_PBANUM_LENGTH);
+ if (txgbe_is_sfp(hw) && hw->phy.sfp_type != txgbe_sfp_type_not_present)
+ e_info(probe, "PHY: %d, SFP+: %d, PBA No: %s\n",
+ hw->phy.type, hw->phy.sfp_type, part_str);
+ else
+ e_info(probe, "PHY: %d, PBA No: %s\n",
+ hw->phy.type, part_str);
+
+	e_dev_info("%pM\n", netdev->dev_addr);
+
+#define INFO_STRING_LEN 255
+ info_string = kzalloc(INFO_STRING_LEN, GFP_KERNEL);
+ if (!info_string) {
+ e_err(probe, "allocation for info string failed\n");
+ goto no_info_string;
+ }
+ i_s_var = info_string;
+ i_s_var += sprintf(info_string, "Enabled Features: ");
+ i_s_var += sprintf(i_s_var, "RxQ: %d TxQ: %d ",
+ adapter->num_rx_queues, adapter->num_tx_queues);
+ if (adapter->flags & TXGBE_FLAG_FDIR_HASH_CAPABLE)
+ i_s_var += sprintf(i_s_var, "FdirHash ");
+ if (adapter->flags & TXGBE_FLAG_DCB_ENABLED)
+ i_s_var += sprintf(i_s_var, "DCB ");
+ if (adapter->flags & TXGBE_FLAG_TPH_ENABLED)
+ i_s_var += sprintf(i_s_var, "TPH ");
+ if (adapter->flags2 & TXGBE_FLAG2_RSC_ENABLED)
+ i_s_var += sprintf(i_s_var, "RSC ");
+ if (adapter->flags & TXGBE_FLAG_VXLAN_OFFLOAD_ENABLE)
+ i_s_var += sprintf(i_s_var, "vxlan_rx ");
+
+ BUG_ON(i_s_var > (info_string + INFO_STRING_LEN));
+ /* end features printing */
+ e_info(probe, "%s\n", info_string);
+ kfree(info_string);
+no_info_string:
+ /* firmware requires blank driver version */
+ TCALL(hw, mac.ops.set_fw_drv_ver, 0xFF, 0xFF, 0xFF, 0xFF);
+
+ /* add san mac addr to netdev */
+ txgbe_add_sanmac_netdev(netdev);
+
+ e_info(probe, "WangXun(R) 10 Gigabit Network Connection\n");
+ cards_found++;
+
+ /* setup link for SFP devices with MNG FW, else wait for TXGBE_UP */
+ if (txgbe_mng_present(hw) && txgbe_is_sfp(hw))
+ TCALL(hw, mac.ops.setup_link,
+ TXGBE_LINK_SPEED_10GB_FULL | TXGBE_LINK_SPEED_1GB_FULL,
+ true);
+
+ TCALL(hw, mac.ops.setup_eee,
+ (adapter->flags2 & TXGBE_FLAG2_EEE_CAPABLE) &&
+ (adapter->flags2 & TXGBE_FLAG2_EEE_ENABLED));
+
+ return 0;
+
+err_register:
+ txgbe_clear_interrupt_scheme(adapter);
+ txgbe_release_hw_control(adapter);
+err_sw_init:
+ adapter->flags2 &= ~TXGBE_FLAG2_SEARCH_FOR_SFP;
+ kfree(adapter->mac_table);
+ iounmap(adapter->io_addr);
+err_ioremap:
+ disable_dev = !test_and_set_bit(__TXGBE_DISABLED, &adapter->state);
+ free_netdev(netdev);
+err_alloc_etherdev:
+ pci_release_selected_regions(pdev,
+ pci_select_bars(pdev, IORESOURCE_MEM));
+err_pci_reg:
+err_dma:
+ if (!adapter || disable_dev)
+ pci_disable_device(pdev);
+ return err;
+}
+
+/**
+ * txgbe_remove - Device Removal Routine
+ * @pdev: PCI device information struct
+ *
+ * txgbe_remove is called by the PCI subsystem to alert the driver
+ * that it should release a PCI device. This could be caused by a
+ * Hot-Plug event, or because the driver is going to be removed from
+ * memory.
+ **/
+static void txgbe_remove(struct pci_dev *pdev)
+{
+ struct txgbe_adapter *adapter = pci_get_drvdata(pdev);
+ struct net_device *netdev;
+ bool disable_dev;
+
+ /* if !adapter then we already cleaned up in probe */
+ if (!adapter)
+ return;
+
+ netdev = adapter->netdev;
+ set_bit(__TXGBE_REMOVING, &adapter->state);
+ cancel_work_sync(&adapter->service_task);
+
+ /* remove the added san mac */
+ txgbe_del_sanmac_netdev(netdev);
+
+ if (adapter->netdev_registered) {
+ unregister_netdev(netdev);
+ adapter->netdev_registered = false;
+ }
+
+ txgbe_clear_interrupt_scheme(adapter);
+ txgbe_release_hw_control(adapter);
+
+ iounmap(adapter->io_addr);
+ pci_release_selected_regions(pdev,
+ pci_select_bars(pdev, IORESOURCE_MEM));
+
+ kfree(adapter->mac_table);
+ disable_dev = !test_and_set_bit(__TXGBE_DISABLED, &adapter->state);
+ free_netdev(netdev);
+
+ pci_disable_pcie_error_reporting(pdev);
+
+ if (disable_dev)
+ pci_disable_device(pdev);
+}
+
+static bool txgbe_check_cfg_remove(struct txgbe_hw *hw, struct pci_dev *pdev)
+{
+ u16 value;
+
+ pci_read_config_word(pdev, PCI_VENDOR_ID, &value);
+ if (value == TXGBE_FAILED_READ_CFG_WORD) {
+ txgbe_remove_adapter(hw);
+ return true;
+ }
+ return false;
+}
+
+u16 txgbe_read_pci_cfg_word(struct txgbe_hw *hw, u32 reg)
+{
+ struct txgbe_adapter *adapter = hw->back;
+ u16 value;
+
+ if (TXGBE_REMOVED(hw->hw_addr))
+ return TXGBE_FAILED_READ_CFG_WORD;
+ pci_read_config_word(adapter->pdev, reg, &value);
+ if (value == TXGBE_FAILED_READ_CFG_WORD &&
+ txgbe_check_cfg_remove(hw, adapter->pdev))
+ return TXGBE_FAILED_READ_CFG_WORD;
+ return value;
+}
+
+void txgbe_write_pci_cfg_word(struct txgbe_hw *hw, u32 reg, u16 value)
+{
+ struct txgbe_adapter *adapter = hw->back;
+
+ if (TXGBE_REMOVED(hw->hw_addr))
+ return;
+ pci_write_config_word(adapter->pdev, reg, value);
+}
+
+static struct pci_driver txgbe_driver = {
+ .name = txgbe_driver_name,
+ .id_table = txgbe_pci_tbl,
+ .probe = txgbe_probe,
+ .remove = txgbe_remove,
+#ifdef CONFIG_PM
+ .suspend = txgbe_suspend,
+ .resume = txgbe_resume,
+#endif
+ .shutdown = txgbe_shutdown,
+};
+
+/**
+ * txgbe_init_module - Driver Registration Routine
+ *
+ * txgbe_init_module is the first routine called when the driver is
+ * loaded. All it does is register with the PCI subsystem.
+ **/
+static int __init txgbe_init_module(void)
+{
+	int ret;
+
+	pr_info("%s - version %s\n", txgbe_driver_string, txgbe_driver_version);
+ pr_info("%s\n", txgbe_copyright);
+
+ txgbe_wq = create_singlethread_workqueue(txgbe_driver_name);
+ if (!txgbe_wq) {
+ pr_err("%s: Failed to create workqueue\n", txgbe_driver_name);
+ return -ENOMEM;
+ }
+
+ ret = pci_register_driver(&txgbe_driver);
+ return ret;
+}
+
+module_init(txgbe_init_module);
+
+/**
+ * txgbe_exit_module - Driver Exit Cleanup Routine
+ *
+ * txgbe_exit_module is called just before the driver is removed
+ * from memory.
+ **/
+static void __exit txgbe_exit_module(void)
+{
+ pci_unregister_driver(&txgbe_driver);
+	if (txgbe_wq)
+		destroy_workqueue(txgbe_wq);
+}
+
+module_exit(txgbe_exit_module);
+
+/* txgbe_main.c */
diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe_mbx.c b/drivers/net/ethernet/netswift/txgbe/txgbe_mbx.c
new file mode 100644
index 0000000000000..08c67fdccc161
--- /dev/null
+++ b/drivers/net/ethernet/netswift/txgbe/txgbe_mbx.c
@@ -0,0 +1,399 @@
+/*
+ * WangXun 10 Gigabit PCI Express Linux driver
+ * Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * based on ixgbe_mbx.c, Copyright(c) 1999 - 2017 Intel Corporation.
+ * Contact Information:
+ * Linux NICS <linux.nics(a)intel.com>
+ * e1000-devel Mailing List <e1000-devel(a)lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#include "txgbe.h"
+#include "txgbe_mbx.h"
+
+/**
+ * txgbe_read_mbx - Reads a message from the mailbox
+ * @hw: pointer to the HW structure
+ * @msg: The message buffer
+ * @size: Length of buffer
+ * @mbx_id: id of mailbox to read
+ *
+ * returns SUCCESS if it successfully read the message from the buffer
+ **/
+int txgbe_read_mbx(struct txgbe_hw *hw, u32 *msg, u16 size, u16 mbx_id)
+{
+ struct txgbe_mbx_info *mbx = &hw->mbx;
+ int err = TXGBE_ERR_MBX;
+
+ /* limit read to size of mailbox */
+ if (size > mbx->size)
+ size = mbx->size;
+
+ err = TCALL(hw, mbx.ops.read, msg, size, mbx_id);
+
+ return err;
+}
+
+/**
+ * txgbe_write_mbx - Write a message to the mailbox
+ * @hw: pointer to the HW structure
+ * @msg: The message buffer
+ * @size: Length of buffer
+ * @mbx_id: id of mailbox to write
+ *
+ * returns SUCCESS if it successfully copied message into the buffer
+ **/
+int txgbe_write_mbx(struct txgbe_hw *hw, u32 *msg, u16 size, u16 mbx_id)
+{
+ struct txgbe_mbx_info *mbx = &hw->mbx;
+ int err = 0;
+
+ if (size > mbx->size) {
+ err = TXGBE_ERR_MBX;
+ ERROR_REPORT2(TXGBE_ERROR_ARGUMENT,
+ "Invalid mailbox message size %d", size);
+	} else {
+		err = TCALL(hw, mbx.ops.write, msg, size, mbx_id);
+	}
+
+ return err;
+}
+
+/**
+ * txgbe_check_for_msg - checks to see if someone sent us mail
+ * @hw: pointer to the HW structure
+ * @mbx_id: id of mailbox to check
+ *
+ * returns SUCCESS if the Status bit was found or else ERR_MBX
+ **/
+int txgbe_check_for_msg(struct txgbe_hw *hw, u16 mbx_id)
+{
+ int err = TXGBE_ERR_MBX;
+
+ err = TCALL(hw, mbx.ops.check_for_msg, mbx_id);
+
+ return err;
+}
+
+/**
+ * txgbe_check_for_ack - checks to see if someone sent us ACK
+ * @hw: pointer to the HW structure
+ * @mbx_id: id of mailbox to check
+ *
+ * returns SUCCESS if the Status bit was found or else ERR_MBX
+ **/
+int txgbe_check_for_ack(struct txgbe_hw *hw, u16 mbx_id)
+{
+ int err = TXGBE_ERR_MBX;
+
+ err = TCALL(hw, mbx.ops.check_for_ack, mbx_id);
+
+ return err;
+}
+
+/**
+ * txgbe_check_for_rst - checks to see if other side has reset
+ * @hw: pointer to the HW structure
+ * @mbx_id: id of mailbox to check
+ *
+ * returns SUCCESS if the Status bit was found or else ERR_MBX
+ **/
+int txgbe_check_for_rst(struct txgbe_hw *hw, u16 mbx_id)
+{
+ struct txgbe_mbx_info *mbx = &hw->mbx;
+ int err = TXGBE_ERR_MBX;
+
+ if (mbx->ops.check_for_rst)
+ err = mbx->ops.check_for_rst(hw, mbx_id);
+
+ return err;
+}
+
+/**
+ * txgbe_poll_for_msg - Wait for message notification
+ * @hw: pointer to the HW structure
+ * @mbx_id: id of mailbox to write
+ *
+ * returns SUCCESS if it successfully received a message notification
+ **/
+int txgbe_poll_for_msg(struct txgbe_hw *hw, u16 mbx_id)
+{
+ struct txgbe_mbx_info *mbx = &hw->mbx;
+ int countdown = mbx->timeout;
+
+ if (!countdown || !mbx->ops.check_for_msg)
+ goto out;
+
+ while (countdown && TCALL(hw, mbx.ops.check_for_msg, mbx_id)) {
+ countdown--;
+ if (!countdown)
+ break;
+ udelay(mbx->udelay);
+ }
+
+ if (countdown == 0)
+ ERROR_REPORT2(TXGBE_ERROR_POLLING,
+			"Polling for VF%d mailbox message timed out", mbx_id);
+
+out:
+ return countdown ? 0 : TXGBE_ERR_MBX;
+}
+
+/**
+ * txgbe_poll_for_ack - Wait for message acknowledgement
+ * @hw: pointer to the HW structure
+ * @mbx_id: id of mailbox to write
+ *
+ * returns SUCCESS if it successfully received a message acknowledgement
+ **/
+int txgbe_poll_for_ack(struct txgbe_hw *hw, u16 mbx_id)
+{
+ struct txgbe_mbx_info *mbx = &hw->mbx;
+ int countdown = mbx->timeout;
+
+ if (!countdown || !mbx->ops.check_for_ack)
+ goto out;
+
+ while (countdown && TCALL(hw, mbx.ops.check_for_ack, mbx_id)) {
+ countdown--;
+ if (!countdown)
+ break;
+ udelay(mbx->udelay);
+ }
+
+ if (countdown == 0)
+ ERROR_REPORT2(TXGBE_ERROR_POLLING,
+			"Polling for VF%d mailbox ack timed out", mbx_id);
+
+out:
+ return countdown ? 0 : TXGBE_ERR_MBX;
+}
+
+int txgbe_check_for_bit_pf(struct txgbe_hw *hw, u32 mask, int index)
+{
+ u32 mbvficr = rd32(hw, TXGBE_MBVFICR(index));
+ int err = TXGBE_ERR_MBX;
+
+ if (mbvficr & mask) {
+ err = 0;
+ wr32(hw, TXGBE_MBVFICR(index), mask);
+ }
+
+ return err;
+}
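+/*
+ * Indexing sketch (illustrative): each TXGBE_MBVFICR register carries the
+ * request bits for 16 VFs in its low half and the corresponding ack bits in
+ * its high half, so for vf = 17 the callers below compute index = 17 >> 4 = 1
+ * and vf_bit = 17 % 16 = 1, i.e. bit 1 of TXGBE_MBVFICR(1) for a request and
+ * bit 17 for an ack.
+ */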
+
+/**
+ * txgbe_check_for_msg_pf - checks to see if the VF has sent mail
+ * @hw: pointer to the HW structure
+ * @vf: the VF index
+ *
+ * returns SUCCESS if the VF has set the Status bit or else ERR_MBX
+ **/
+int txgbe_check_for_msg_pf(struct txgbe_hw *hw, u16 vf)
+{
+ int err = TXGBE_ERR_MBX;
+ int index = TXGBE_MBVFICR_INDEX(vf);
+ u32 vf_bit = vf % 16;
+
+ if (!txgbe_check_for_bit_pf(hw, TXGBE_MBVFICR_VFREQ_VF1 << vf_bit,
+ index)) {
+ err = 0;
+ hw->mbx.stats.reqs++;
+ }
+
+ return err;
+}
+
+/**
+ * txgbe_check_for_ack_pf - checks to see if the VF has ACKed
+ * @hw: pointer to the HW structure
+ * @vf: the VF index
+ *
+ * returns SUCCESS if the VF has set the Status bit or else ERR_MBX
+ **/
+int txgbe_check_for_ack_pf(struct txgbe_hw *hw, u16 vf)
+{
+ int err = TXGBE_ERR_MBX;
+ int index = TXGBE_MBVFICR_INDEX(vf);
+ u32 vf_bit = vf % 16;
+
+ if (!txgbe_check_for_bit_pf(hw, TXGBE_MBVFICR_VFACK_VF1 << vf_bit,
+ index)) {
+ err = 0;
+ hw->mbx.stats.acks++;
+ }
+
+ return err;
+}
+
+/**
+ * txgbe_check_for_rst_pf - checks to see if the VF has reset
+ * @hw: pointer to the HW structure
+ * @vf: the VF index
+ *
+ * returns SUCCESS if the VF has set the Status bit or else ERR_MBX
+ **/
+int txgbe_check_for_rst_pf(struct txgbe_hw *hw, u16 vf)
+{
+ u32 reg_offset = (vf < 32) ? 0 : 1;
+ u32 vf_shift = vf % 32;
+ u32 vflre = 0;
+ int err = TXGBE_ERR_MBX;
+
+ vflre = rd32(hw, TXGBE_VFLRE(reg_offset));
+
+ if (vflre & (1 << vf_shift)) {
+ err = 0;
+ wr32(hw, TXGBE_VFLREC(reg_offset), (1 << vf_shift));
+ hw->mbx.stats.rsts++;
+ }
+
+ return err;
+}
+
+/**
+ * txgbe_obtain_mbx_lock_pf - obtain mailbox lock
+ * @hw: pointer to the HW structure
+ * @vf: the VF index
+ *
+ * return SUCCESS if we obtained the mailbox lock
+ **/
+int txgbe_obtain_mbx_lock_pf(struct txgbe_hw *hw, u16 vf)
+{
+ int err = TXGBE_ERR_MBX;
+ u32 mailbox;
+
+ /* Take ownership of the buffer */
+ wr32(hw, TXGBE_PXMAILBOX(vf), TXGBE_PXMAILBOX_PFU);
+
+	/* verify the PF now owns the mailbox buffer */
+ mailbox = rd32(hw, TXGBE_PXMAILBOX(vf));
+ if (mailbox & TXGBE_PXMAILBOX_PFU)
+ err = 0;
+ else
+ ERROR_REPORT2(TXGBE_ERROR_POLLING,
+			"Failed to obtain mailbox lock for VF%d", vf);
+
+ return err;
+}
+
+/**
+ * txgbe_write_mbx_pf - Places a message in the mailbox
+ * @hw: pointer to the HW structure
+ * @msg: The message buffer
+ * @size: Length of buffer
+ * @vf: the VF index
+ *
+ * returns SUCCESS if it successfully copied message into the buffer
+ **/
+int txgbe_write_mbx_pf(struct txgbe_hw *hw, u32 *msg, u16 size,
+ u16 vf)
+{
+ int err;
+ u16 i;
+
+ /* lock the mailbox to prevent pf/vf race condition */
+ err = txgbe_obtain_mbx_lock_pf(hw, vf);
+ if (err)
+ goto out_no_write;
+
+ /* flush msg and acks as we are overwriting the message buffer */
+ txgbe_check_for_msg_pf(hw, vf);
+ txgbe_check_for_ack_pf(hw, vf);
+
+ /* copy the caller specified message to the mailbox memory buffer */
+ for (i = 0; i < size; i++)
+ wr32a(hw, TXGBE_PXMBMEM(vf), i, msg[i]);
+
+ /* Interrupt VF to tell it a message has been sent and release buffer*/
+ /* set mirrored mailbox flags */
+ wr32a(hw, TXGBE_PXMBMEM(vf), TXGBE_VXMAILBOX_SIZE, TXGBE_PXMAILBOX_STS);
+ wr32(hw, TXGBE_PXMAILBOX(vf), TXGBE_PXMAILBOX_STS);
+
+ /* update stats */
+ hw->mbx.stats.msgs_tx++;
+
+out_no_write:
+	return err;
+}
+
+/**
+ * txgbe_read_mbx_pf - Read a message from the mailbox
+ * @hw: pointer to the HW structure
+ * @msg: The message buffer
+ * @size: Length of buffer
+ * @vf: the VF index
+ *
+ * This function copies a message from the mailbox buffer to the caller's
+ * memory buffer. The presumption is that the caller knows that there was
+ * a message due to a VF request so no polling for message is needed.
+ **/
+int txgbe_read_mbx_pf(struct txgbe_hw *hw, u32 *msg, u16 size,
+ u16 vf)
+{
+ int err;
+ u16 i;
+
+ /* lock the mailbox to prevent pf/vf race condition */
+ err = txgbe_obtain_mbx_lock_pf(hw, vf);
+ if (err)
+ goto out_no_read;
+
+ /* copy the message to the mailbox memory buffer */
+ for (i = 0; i < size; i++)
+ msg[i] = rd32a(hw, TXGBE_PXMBMEM(vf), i);
+
+ /* Acknowledge the message and release buffer */
+ /* set mirrored mailbox flags */
+ wr32a(hw, TXGBE_PXMBMEM(vf), TXGBE_VXMAILBOX_SIZE, TXGBE_PXMAILBOX_ACK);
+ wr32(hw, TXGBE_PXMAILBOX(vf), TXGBE_PXMAILBOX_ACK);
+
+ /* update stats */
+ hw->mbx.stats.msgs_rx++;
+
+out_no_read:
+ return err;
+}
+
+/**
+ * txgbe_init_mbx_params_pf - set initial values for pf mailbox
+ * @hw: pointer to the HW structure
+ *
+ * Initializes the hw->mbx struct to correct values for pf mailbox
+ */
+void txgbe_init_mbx_params_pf(struct txgbe_hw *hw)
+{
+ struct txgbe_mbx_info *mbx = &hw->mbx;
+
+ mbx->timeout = 0;
+ mbx->udelay = 0;
+
+ mbx->size = TXGBE_VXMAILBOX_SIZE;
+
+ mbx->ops.read = txgbe_read_mbx_pf;
+ mbx->ops.write = txgbe_write_mbx_pf;
+ mbx->ops.check_for_msg = txgbe_check_for_msg_pf;
+ mbx->ops.check_for_ack = txgbe_check_for_ack_pf;
+ mbx->ops.check_for_rst = txgbe_check_for_rst_pf;
+
+ mbx->stats.msgs_tx = 0;
+ mbx->stats.msgs_rx = 0;
+ mbx->stats.reqs = 0;
+ mbx->stats.acks = 0;
+ mbx->stats.rsts = 0;
+}
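+
+/*
+ * PF-side usage sketch (editor's illustration; the message payload is
+ * hypothetical): with the ops installed above, notifying VF 0 is simply
+ *
+ *	u32 msg[1] = { TXGBE_PF_CONTROL_MSG };
+ *
+ *	txgbe_write_mbx(&adapter->hw, msg, 1, 0);
+ *
+ * Note that timeout and udelay are left at 0 here, so the PF never busy-polls
+ * for a VF reply; it relies on the mailbox interrupt path instead.
+ */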
diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe_mbx.h b/drivers/net/ethernet/netswift/txgbe/txgbe_mbx.h
new file mode 100644
index 0000000000000..e412a5e546e10
--- /dev/null
+++ b/drivers/net/ethernet/netswift/txgbe/txgbe_mbx.h
@@ -0,0 +1,171 @@
+/*
+ * WangXun 10 Gigabit PCI Express Linux driver
+ * Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * based on ixgbe_mbx.h, Copyright(c) 1999 - 2017 Intel Corporation.
+ * Contact Information:
+ * Linux NICS <linux.nics(a)intel.com>
+ * e1000-devel Mailing List <e1000-devel(a)lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#ifndef _TXGBE_MBX_H_
+#define _TXGBE_MBX_H_
+
+#define TXGBE_VXMAILBOX_SIZE (16 - 1)
+
+/**
+ * VF Registers
+ **/
+#define TXGBE_VXMAILBOX 0x00600
+#define TXGBE_VXMAILBOX_REQ ((0x1) << 0) /* Request for PF Ready bit */
+#define TXGBE_VXMAILBOX_ACK ((0x1) << 1) /* Ack PF message received */
+#define TXGBE_VXMAILBOX_VFU ((0x1) << 2) /* VF owns the mailbox buffer */
+#define TXGBE_VXMAILBOX_PFU ((0x1) << 3) /* PF owns the mailbox buffer */
+#define TXGBE_VXMAILBOX_PFSTS ((0x1) << 4) /* PF wrote a message in the MB */
+#define TXGBE_VXMAILBOX_PFACK ((0x1) << 5) /* PF ack the previous VF msg */
+#define TXGBE_VXMAILBOX_RSTI ((0x1) << 6) /* PF has reset indication */
+#define TXGBE_VXMAILBOX_RSTD ((0x1) << 7) /* PF has indicated reset done */
+#define TXGBE_VXMAILBOX_R2C_BITS (TXGBE_VXMAILBOX_RSTD | \
+ TXGBE_VXMAILBOX_PFSTS | TXGBE_VXMAILBOX_PFACK)
+
+#define TXGBE_VXMBMEM 0x00C00 /* 16*4B */
+
+/**
+ * PF Registers
+ **/
+#define TXGBE_PXMAILBOX(i) (0x00600 + (4 * (i))) /* i=[0,63] */
+#define TXGBE_PXMAILBOX_STS ((0x1) << 0) /* Initiate message send to VF */
+#define TXGBE_PXMAILBOX_ACK ((0x1) << 1) /* Ack message recv'd from VF */
+#define TXGBE_PXMAILBOX_VFU ((0x1) << 2) /* VF owns the mailbox buffer */
+#define TXGBE_PXMAILBOX_PFU ((0x1) << 3) /* PF owns the mailbox buffer */
+#define TXGBE_PXMAILBOX_RVFU ((0x1) << 4) /* Reset VFU - used when VF stuck*/
+
+#define TXGBE_PXMBMEM(i) (0x5000 + (64 * (i))) /* i=[0,63] */
+
+#define TXGBE_VFLRP(i) (0x00490 + (4 * (i))) /* i=[0,1] */
+#define TXGBE_VFLRE(i) (0x004A0 + (4 * (i))) /* i=[0,1] */
+#define TXGBE_VFLREC(i) (0x004A8 + (4 * (i))) /* i=[0,1] */
+
+/* SR-IOV specific macros */
+#define TXGBE_MBVFICR(i) (0x00480 + (4 * (i))) /* i=[0,3] */
+#define TXGBE_MBVFICR_INDEX(vf) ((vf) >> 4)
+#define TXGBE_MBVFICR_VFREQ_MASK (0x0000FFFF) /* bits for VF messages */
+#define TXGBE_MBVFICR_VFREQ_VF1 (0x00000001) /* bit for VF 1 message */
+#define TXGBE_MBVFICR_VFACK_MASK (0xFFFF0000) /* bits for VF acks */
+#define TXGBE_MBVFICR_VFACK_VF1 (0x00010000) /* bit for VF 1 ack */
+
+/**
+ * Messages
+ **/
+/* If it's a TXGBE_VF_* msg then it originates in the VF and is sent to the
+ * PF. The reverse is true if it is TXGBE_PF_*.
+ * Message responses are the request value OR'd with one of the
+ * TXGBE_VT_MSGTYPE_* flags defined below.
+ */
+#define TXGBE_VT_MSGTYPE_ACK 0x80000000 /* Messages below or'd with
+ * this are the ACK */
+#define TXGBE_VT_MSGTYPE_NACK 0x40000000 /* Messages below or'd with
+ * this are the NACK */
+#define TXGBE_VT_MSGTYPE_CTS 0x20000000 /* Indicates that VF is still
+ * clear to send requests */
+#define TXGBE_VT_MSGINFO_SHIFT 16
+/* bits 23:16 are used for extra info for certain messages */
+#define TXGBE_VT_MSGINFO_MASK (0xFF << TXGBE_VT_MSGINFO_SHIFT)
+
+/* definitions to support mailbox API version negotiation */
+
+/*
+ * each element denotes a version of the API; existing numbers may not
+ * change; any additions must go at the end
+ */
+enum txgbe_pfvf_api_rev {
+ txgbe_mbox_api_null,
+ txgbe_mbox_api_10, /* API version 1.0, linux/freebsd VF driver */
+ txgbe_mbox_api_11, /* API version 1.1, linux/freebsd VF driver */
+ txgbe_mbox_api_12, /* API version 1.2, linux/freebsd VF driver */
+ txgbe_mbox_api_13, /* API version 1.3, linux/freebsd VF driver */
+ txgbe_mbox_api_20, /* API version 2.0, solaris Phase1 VF driver */
+ txgbe_mbox_api_unknown, /* indicates that API version is not known */
+};
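+
+/* Negotiation sketch (illustrative; this file defines the PF side, the VF
+ * driver initiates): the VF proposes a version and the PF answers with the
+ * request OR'd with ACK or NACK, e.g.
+ *
+ *	msg[0] = TXGBE_VF_API_NEGOTIATE;
+ *	msg[1] = txgbe_mbox_api_13;
+ *	(write, then read the reply)
+ *	if (msg[0] == (TXGBE_VF_API_NEGOTIATE | TXGBE_VT_MSGTYPE_ACK))
+ *		the PF accepted api version msg[1]
+ */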
+
+/* mailbox API, legacy requests */
+#define TXGBE_VF_RESET 0x01 /* VF requests reset */
+#define TXGBE_VF_SET_MAC_ADDR 0x02 /* VF requests PF to set MAC addr */
+#define TXGBE_VF_SET_MULTICAST 0x03 /* VF requests PF to set MC addr */
+#define TXGBE_VF_SET_VLAN 0x04 /* VF requests PF to set VLAN */
+
+/* mailbox API, version 1.0 VF requests */
+#define TXGBE_VF_SET_LPE 0x05 /* VF requests PF to set VMOLR.LPE */
+#define TXGBE_VF_SET_MACVLAN 0x06 /* VF requests PF for unicast filter */
+#define TXGBE_VF_API_NEGOTIATE 0x08 /* negotiate API version */
+
+/* mailbox API, version 1.1 VF requests */
+#define TXGBE_VF_GET_QUEUES 0x09 /* get queue configuration */
+
+/* mailbox API, version 1.2 VF requests */
+#define TXGBE_VF_GET_RETA 0x0a /* VF request for RETA */
+#define TXGBE_VF_GET_RSS_KEY 0x0b /* get RSS key */
+#define TXGBE_VF_UPDATE_XCAST_MODE 0x0c
+#define TXGBE_VF_BACKUP 0x8001 /* VF requests backup */
+
+/* mode choices for TXGBE_VF_UPDATE_XCAST_MODE */
+enum txgbevf_xcast_modes {
+ TXGBEVF_XCAST_MODE_NONE = 0,
+ TXGBEVF_XCAST_MODE_MULTI,
+ TXGBEVF_XCAST_MODE_ALLMULTI,
+ TXGBEVF_XCAST_MODE_PROMISC,
+};
+
+/* GET_QUEUES return data indices within the mailbox */
+#define TXGBE_VF_TX_QUEUES 1 /* number of Tx queues supported */
+#define TXGBE_VF_RX_QUEUES 2 /* number of Rx queues supported */
+#define TXGBE_VF_TRANS_VLAN 3 /* Indication of port vlan */
+#define TXGBE_VF_DEF_QUEUE 4 /* Default queue offset */
+
+/* length of permanent address message returned from PF */
+#define TXGBE_VF_PERMADDR_MSG_LEN 4
+/* word in permanent address message with the current multicast type */
+#define TXGBE_VF_MC_TYPE_WORD 3
+
+#define TXGBE_PF_CONTROL_MSG 0x0100 /* PF control message */
+
+/* mailbox API, version 2.0 VF requests */
+#define TXGBE_VF_API_NEGOTIATE 0x08 /* negotiate API version */
+#define TXGBE_VF_GET_QUEUES 0x09 /* get queue configuration */
+#define TXGBE_VF_ENABLE_MACADDR 0x0A /* enable MAC address */
+#define TXGBE_VF_DISABLE_MACADDR 0x0B /* disable MAC address */
+#define TXGBE_VF_GET_MACADDRS 0x0C /* get all configured MAC addrs */
+#define TXGBE_VF_SET_MCAST_PROMISC 0x0D /* enable multicast promiscuous */
+#define TXGBE_VF_GET_MTU 0x0E /* get bounds on MTU */
+#define TXGBE_VF_SET_MTU 0x0F /* set a specific MTU */
+
+/* mailbox API, version 2.0 PF requests */
+#define TXGBE_PF_TRANSPARENT_VLAN 0x0101 /* enable transparent vlan */
+
+#define TXGBE_VF_MBX_INIT_TIMEOUT 2000 /* number of retries on mailbox */
+#define TXGBE_VF_MBX_INIT_DELAY 500 /* microseconds between retries */
+
+int txgbe_read_mbx(struct txgbe_hw *, u32 *, u16, u16);
+int txgbe_write_mbx(struct txgbe_hw *, u32 *, u16, u16);
+int txgbe_read_posted_mbx(struct txgbe_hw *, u32 *, u16, u16);
+int txgbe_write_posted_mbx(struct txgbe_hw *, u32 *, u16, u16);
+int txgbe_check_for_msg(struct txgbe_hw *, u16);
+int txgbe_check_for_ack(struct txgbe_hw *, u16);
+int txgbe_check_for_rst(struct txgbe_hw *, u16);
+void txgbe_init_mbx_ops(struct txgbe_hw *hw);
+void txgbe_init_mbx_params_vf(struct txgbe_hw *);
+void txgbe_init_mbx_params_pf(struct txgbe_hw *);
+
+#endif /* _TXGBE_MBX_H_ */
diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe_mtd.c b/drivers/net/ethernet/netswift/txgbe/txgbe_mtd.c
new file mode 100644
index 0000000000000..5c29a28af0754
--- /dev/null
+++ b/drivers/net/ethernet/netswift/txgbe/txgbe_mtd.c
@@ -0,0 +1,1366 @@
+/*
+ * WangXun 10 Gigabit PCI Express Linux driver
+ * Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ */
+
+#include "txgbe.h"
+
+MTD_STATUS mtdHwXmdioWrite(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U16 dev,
+ IN MTD_U16 reg,
+ IN MTD_U16 value)
+{
+ MTD_STATUS result = MTD_OK;
+
+ if (devPtr->fmtdWriteMdio != NULL) {
+ if (devPtr->fmtdWriteMdio(devPtr, port, dev, reg, value) == MTD_FAIL) {
+ result = MTD_FAIL;
+ MTD_DBG_INFO("fmtdWriteMdio 0x%04X failed to port=%d, dev=%d, reg=0x%04X\n",
+ (unsigned)(value), (unsigned)port, (unsigned)dev, (unsigned)reg);
+ }
+ } else
+ result = MTD_FAIL;
+
+ return result;
+}
+
+MTD_STATUS mtdHwXmdioRead(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U16 dev,
+ IN MTD_U16 reg,
+ OUT MTD_U16 * data)
+{
+ MTD_STATUS result = MTD_OK;
+
+ if (devPtr->fmtdReadMdio != NULL) {
+ if (devPtr->fmtdReadMdio(devPtr, port, dev, reg, data) == MTD_FAIL) {
+ result = MTD_FAIL;
+ MTD_DBG_INFO("fmtdReadMdio failed from port=%d, dev=%d, reg=0x%04X\n",
+ (unsigned)port, (unsigned)dev, (unsigned)reg);
+ }
+ } else
+ result = MTD_FAIL;
+
+ return result;
+}
+
+/*
+ This macro calculates the mask for partial read/write of register's data.
+*/
+#define MTD_CALC_MASK(fieldOffset, fieldLen, mask) do {\
+ if ((fieldLen + fieldOffset) >= 16) \
+ mask = (0 - (1 << fieldOffset)); \
+ else \
+ mask = (((1 << (fieldLen + fieldOffset))) - (1 << fieldOffset));\
+ } while (0)
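+
+/*
+ * Worked example (illustrative): for fieldOffset = 5 and fieldLen = 4 the
+ * macro yields mask = (1 << 9) - (1 << 5) = 0x01E0, i.e. bits 8:5. When
+ * fieldLen + fieldOffset reaches 16 the first branch computes the mask as
+ * 0 - (1 << fieldOffset) instead, avoiding a 1 << 16 overflow on a 16-bit
+ * quantity.
+ */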
+
+MTD_STATUS mtdHwGetPhyRegField(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U16 dev,
+ IN MTD_U16 regAddr,
+ IN MTD_U8 fieldOffset,
+ IN MTD_U8 fieldLength,
+ OUT MTD_U16 * data)
+{
+ MTD_U16 tmpData;
+ MTD_STATUS retVal;
+
+ retVal = mtdHwXmdioRead(devPtr, port, dev, regAddr, &tmpData);
+
+ if (retVal != MTD_OK) {
+		MTD_DBG_ERROR("Failed to read register\n");
+ return MTD_FAIL;
+ }
+
+ mtdHwGetRegFieldFromWord(tmpData, fieldOffset, fieldLength, data);
+
+ MTD_DBG_INFO("fOff %d, fLen %d, data 0x%04X.\n", (int)fieldOffset,
+ (int)fieldLength, (int)*data);
+
+ return MTD_OK;
+}
+
+MTD_STATUS mtdHwSetPhyRegField(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U16 dev,
+ IN MTD_U16 regAddr,
+ IN MTD_U8 fieldOffset,
+ IN MTD_U8 fieldLength,
+ IN MTD_U16 data)
+{
+ MTD_U16 tmpData, newData;
+ MTD_STATUS retVal;
+
+ retVal = mtdHwXmdioRead(devPtr, port, dev, regAddr, &tmpData);
+ if (retVal != MTD_OK) {
+		MTD_DBG_ERROR("Failed to read register\n");
+ return MTD_FAIL;
+ }
+
+ mtdHwSetRegFieldToWord(tmpData, data, fieldOffset, fieldLength, &newData);
+
+ retVal = mtdHwXmdioWrite(devPtr, port, dev, regAddr, newData);
+
+ if (retVal != MTD_OK) {
+ MTD_DBG_ERROR("Failed to write register \n");
+		MTD_DBG_ERROR("Failed to write register\n");
+ }
+
+ MTD_DBG_INFO("fieldOff %d, fieldLen %d, data 0x%x.\n", fieldOffset,
+ fieldLength, data);
+
+ return MTD_OK;
+}
+
+MTD_STATUS mtdHwGetRegFieldFromWord(
+ IN MTD_U16 regData,
+ IN MTD_U8 fieldOffset,
+ IN MTD_U8 fieldLength,
+ OUT MTD_U16 *data)
+{
+ /* Bits mask to be read */
+ MTD_U16 mask;
+
+ MTD_CALC_MASK(fieldOffset, fieldLength, mask);
+
+ *data = (regData & mask) >> fieldOffset;
+
+ return MTD_OK;
+}
+
+MTD_STATUS mtdHwSetRegFieldToWord(
+ IN MTD_U16 regData,
+ IN MTD_U16 bitFieldData,
+ IN MTD_U8 fieldOffset,
+ IN MTD_U8 fieldLength,
+ OUT MTD_U16 *data)
+{
+ /* Bits mask to be read */
+ MTD_U16 mask;
+
+ MTD_CALC_MASK(fieldOffset, fieldLength, mask);
+
+ /* Set the desired bits to 0. */
+ regData &= ~mask;
+ /* Set the given data into the above reset bits.*/
+ regData |= ((bitFieldData << fieldOffset) & mask);
+
+ *data = regData;
+
+ return MTD_OK;
+}
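+
+/*
+ * Usage sketch (illustrative; the register image is hypothetical): the two
+ * word helpers above implement a read-modify-write on an already-fetched
+ * register value, e.g.
+ *
+ *	MTD_U16 word = 0x1234, field, out;
+ *
+ *	mtdHwGetRegFieldFromWord(word, 4, 4, &field);	(field = 0x3)
+ *	mtdHwSetRegFieldToWord(word, 0xA, 4, 4, &out);	(out = 0x12A4)
+ */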
+
+MTD_STATUS mtdWait(IN MTD_UINT x)
+{
+ msleep(x);
+ return MTD_OK;
+}
+
+/* internal device registers */
+MTD_STATUS mtdCheckDeviceCapabilities(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ OUT MTD_BOOL * phyHasMacsec,
+ OUT MTD_BOOL * phyHasCopperInterface,
+ OUT MTD_BOOL * isE20X0Device)
+{
+ MTD_U8 major, minor, inc, test;
+ MTD_U16 abilities;
+
+ *phyHasMacsec = MTD_TRUE;
+ *phyHasCopperInterface = MTD_TRUE;
+ *isE20X0Device = MTD_FALSE;
+
+ if (mtdGetFirmwareVersion(devPtr, port, &major, &minor, &inc, &test) == MTD_FAIL) {
+ /* firmware not running will produce this case */
+ major = minor = inc = test = 0;
+ }
+
+ if (major == 0 && minor == 0 && inc == 0 && test == 0) {
+ /* no code loaded into internal processor */
+ /* have to read it from the device itself the hard way */
+ MTD_U16 reg2, reg3;
+ MTD_U16 index, index2;
+ MTD_U16 temp;
+ MTD_U16 bit16thru23[8];
+
+ /* save these registers */
+		/* ATTEMPT(mtdHwXmdioRead(devPtr,port,MTD_REG_CCCR9,&reg1)); some revs can't read this register reliably */
+		ATTEMPT(mtdHwXmdioRead(devPtr, port, 31, 0xF0F0, &reg2));
+		ATTEMPT(mtdHwXmdioRead(devPtr, port, 31, 0xF0F5, &reg3));
+
+ /* clear these bit indications */
+ for (index = 0; index < 8; index++) {
+ bit16thru23[index] = 0;
+ }
+
+ ATTEMPT(mtdHwXmdioWrite(devPtr, port, 31, 0xF05E, 0x0300)); /* force clock on */
+ mtdWait(1);
+ ATTEMPT(mtdHwXmdioWrite(devPtr, port, 31, 0xF0F0, 0x0102)); /* set access */
+ mtdWait(1);
+
+ ATTEMPT(mtdHwXmdioWrite(devPtr, port, 31, 0xF0F5, 0x06D3)); /* sequence needed */
+ mtdWait(1);
+ ATTEMPT(mtdHwXmdioWrite(devPtr, port, 31, 0xF0F5, 0x0593));
+ mtdWait(1);
+ ATTEMPT(mtdHwXmdioWrite(devPtr, port, 31, 0xF0F5, 0x0513));
+ mtdWait(1);
+
+ index = 0;
+ index2 = 0;
+ while (index < 24) {
+ ATTEMPT(mtdHwXmdioWrite(devPtr, port, 31, 0xF0F5, 0x0413));
+ mtdWait(1);
+ ATTEMPT(mtdHwXmdioWrite(devPtr, port, 31, 0xF0F5, 0x0513));
+ mtdWait(1);
+
+ if (index >= 16) {
+ ATTEMPT(mtdHwXmdioRead(devPtr, port, 31, 0xF0F5, &bit16thru23[index2++]));
+ } else {
+ ATTEMPT(mtdHwXmdioRead(devPtr, port, 31, 0xF0F5, &temp));
+ }
+ mtdWait(1);
+ index++;
+ }
+
+ if (((bit16thru23[0] >> 11) & 1) | ((bit16thru23[1] >> 11) & 1)) {
+ *phyHasMacsec = MTD_FALSE;
+ }
+ if (((bit16thru23[4] >> 11) & 1) | ((bit16thru23[5] >> 11) & 1)) {
+ *phyHasCopperInterface = MTD_FALSE;
+ }
+
+ if (((bit16thru23[6] >> 11) & 1) | ((bit16thru23[7] >> 11) & 1)) {
+ *isE20X0Device = MTD_TRUE;
+ }
+
+ ATTEMPT(mtdHwXmdioWrite(devPtr, port, 31, 0xF0F5, 0x0413));
+ mtdWait(1);
+ ATTEMPT(mtdHwXmdioWrite(devPtr, port, 31, 0xF0F5, 0x0493));
+ mtdWait(1);
+ ATTEMPT(mtdHwXmdioWrite(devPtr, port, 31, 0xF0F5, 0x0413));
+ mtdWait(1);
+ ATTEMPT(mtdHwXmdioWrite(devPtr, port, 31, 0xF0F5, 0x0513));
+ mtdWait(1);
+
+ /* restore the registers */
+ /* ATTEMPT(mtdHwXmdioWrite(devPtr,port,MTD_REG_CCCR9,reg1)); Some revs can't read this register reliably */
+ ATTEMPT(mtdHwXmdioWrite(devPtr, port, 31, 0xF05E, 0x5440)); /* set back to reset value */
+ ATTEMPT(mtdHwXmdioWrite(devPtr, port, 31, 0xF0F0, reg2));
+ ATTEMPT(mtdHwXmdioWrite(devPtr, port, 31, 0xF0F5, reg3));
+
+ } else {
+ /* should just read it from the firmware status register */
+ ATTEMPT(mtdHwXmdioRead(devPtr, port, MTD_T_UNIT_PMA_PMD, MTD_TUNIT_XG_EXT_STATUS, &abilities));
+ if (abilities & (1 << 12)) {
+ *phyHasMacsec = MTD_FALSE;
+ }
+
+ if (abilities & (1 << 13)) {
+ *phyHasCopperInterface = MTD_FALSE;
+ }
+
+ if (abilities & (1 << 14)) {
+ *isE20X0Device = MTD_TRUE;
+ }
+
+ }
+
+ return MTD_OK;
+}
+
+MTD_STATUS mtdIsPhyReadyAfterReset(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ OUT MTD_BOOL * phyReady)
+{
+ MTD_U16 val;
+
+ *phyReady = MTD_FALSE;
+
+ ATTEMPT(mtdHwGetPhyRegField(devPtr, port, MTD_T_UNIT_PMA_PMD, MTD_TUNIT_IEEE_PMA_CTRL1, 15, 1, &val));
+
+ if (val) {
+ /* if still in reset return '0' (could be coming up, or disabled by download mode) */
+ *phyReady = MTD_FALSE;
+ } else {
+ /* if Phy is in normal operation */
+ *phyReady = MTD_TRUE;
+ }
+
+ return MTD_OK;
+}
+
+MTD_STATUS mtdSoftwareReset(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U16 timeoutMs)
+{
+ MTD_U16 counter;
+	MTD_BOOL phyReady;
+
+	/* bit self clears when done */
+ ATTEMPT(mtdHwSetPhyRegField(devPtr, port, MTD_T_UNIT_PMA_PMD, MTD_TUNIT_IEEE_PMA_CTRL1, 15, 1, 1));
+
+ if (timeoutMs) {
+ counter = 0;
+ ATTEMPT(mtdIsPhyReadyAfterReset(devPtr, port, &phyReady));
+ while (phyReady == MTD_FALSE && counter <= timeoutMs) {
+ ATTEMPT(mtdWait(1));
+ ATTEMPT(mtdIsPhyReadyAfterReset(devPtr, port, &phyReady));
+ counter++;
+ }
+
+ if (counter < timeoutMs) {
+ return MTD_OK;
+ } else {
+ /* timed out without becoming ready */
+ return MTD_FAIL;
+ }
+ } else {
+ return MTD_OK;
+ }
+}
+
+MTD_STATUS mtdIsPhyReadyAfterHardwareReset(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ OUT MTD_BOOL *phyReady)
+{
+ MTD_U16 val;
+
+ *phyReady = MTD_FALSE;
+
+ ATTEMPT(mtdHwGetPhyRegField(devPtr, port, MTD_C_UNIT_GENERAL, MTD_CUNIT_PORT_CTRL, 14, 1, &val));
+
+ if (val) {
+ /* if still in reset return '0' (could be coming up, or disabled by download mode) */
+ *phyReady = MTD_FALSE;
+ } else {
+ /* if Phy is in normal operation */
+ *phyReady = MTD_TRUE;
+ }
+ return MTD_OK;
+}
+
+MTD_STATUS mtdHardwareReset(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U16 timeoutMs)
+{
+ MTD_U16 counter;
+ MTD_BOOL phyReady;
+
+ /* bit self clears when done */
+ ATTEMPT(mtdHwSetPhyRegField(devPtr, port, MTD_C_UNIT_GENERAL, MTD_CUNIT_PORT_CTRL, 14, 1, 1));
+
+ if (timeoutMs) {
+ counter = 0;
+ ATTEMPT(mtdIsPhyReadyAfterHardwareReset(devPtr, port, &phyReady));
+ while (phyReady == MTD_FALSE && counter <= timeoutMs) {
+ ATTEMPT(mtdWait(1));
+ ATTEMPT(mtdIsPhyReadyAfterHardwareReset(devPtr, port, &phyReady));
+ counter++;
+ }
+ if (counter < timeoutMs)
+ return MTD_OK;
+ else
+ return MTD_FAIL; /* timed out without becoming ready */
+ } else {
+ return MTD_OK;
+ }
+}
+
+/****************************************************************************/
+/*******************************************************************
+ 802.3 Clause 28 and Clause 45
+ Autoneg Related Control & Status
+ *******************************************************************/
+/*******************************************************************
+ Enabling speeds for autonegotiation
+ Reading speeds enabled for autonegotiation
+ Set/get pause advertisement for autonegotiation
+ Other Autoneg-related Control and Status (restart, disable/enable,
+ force master/slave/auto, checking for autoneg resolution, etc.)
+ *******************************************************************/
+
+#define MTD_7_0010_SPEED_BIT_LENGTH 4
+#define MTD_7_0010_SPEED_BIT_POS 5
+#define MTD_7_8000_SPEED_BIT_LENGTH 2
+#define MTD_7_8000_SPEED_BIT_POS 8
+#define MTD_7_0020_SPEED_BIT_LENGTH 1 /* for 88X32X0 family and 88X33X0 family */
+#define MTD_7_0020_SPEED_BIT_POS 12
+#define MTD_7_0020_SPEED_BIT_LENGTH2 2 /* for 88X33X0 family A0 revision 2.5/5G */
+#define MTD_7_0020_SPEED_BIT_POS2 7
+
+/* Bit defines for speed bits */
+#define MTD_FORCED_SPEEDS_BIT_MASK (MTD_SPEED_10M_HD_AN_DIS | MTD_SPEED_10M_FD_AN_DIS | \
+ MTD_SPEED_100M_HD_AN_DIS | MTD_SPEED_100M_FD_AN_DIS)
+#define MTD_LOWER_BITS_MASK 0x000F /* bits in base page */
+#define MTD_GIG_SPEED_POS 4
+#define MTD_XGIG_SPEED_POS 6
+#define MTD_2P5G_SPEED_POS 11
+#define MTD_5G_SPEED_POS 12
+#define MTD_GET_1000BT_BITS(_speedBits) ((_speedBits & (MTD_SPEED_1GIG_HD | MTD_SPEED_1GIG_FD)) \
+ >> MTD_GIG_SPEED_POS) /* 1000BT bits */
+#define MTD_GET_10GBT_BIT(_speedBits) ((_speedBits & MTD_SPEED_10GIG_FD) \
+ >> MTD_XGIG_SPEED_POS) /* 10GBT bit setting */
+#define MTD_GET_2P5GBT_BIT(_speedBits) ((_speedBits & MTD_SPEED_2P5GIG_FD) \
+ >> MTD_2P5G_SPEED_POS) /* 2.5GBT bit setting */
+#define MTD_GET_5GBT_BIT(_speedBits) ((_speedBits & MTD_SPEED_5GIG_FD) \
+ >> MTD_5G_SPEED_POS) /* 5GBT bit setting */
+
+MTD_STATUS mtdEnableSpeeds(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U16 speed_bits,
+ IN MTD_BOOL anRestart)
+{
+ MTD_BOOL speedForced;
+ MTD_U16 dummy;
+ MTD_U16 tempRegValue;
+
+ if (speed_bits & MTD_FORCED_SPEEDS_BIT_MASK) {
+ /* tried to force the speed, this function is for autonegotiation control */
+ return MTD_FAIL;
+ }
+
+ if (MTD_IS_X32X0_BASE(devPtr->deviceId) && ((speed_bits & MTD_SPEED_2P5GIG_FD) ||
+ (speed_bits & MTD_SPEED_5GIG_FD))) {
+ return MTD_FAIL; /* tried to advertise 2.5G/5G on a 88X32X0 chipset */
+ }
+
+ if (MTD_IS_X33X0_BASE(devPtr->deviceId)) {
+ const MTD_U16 chipRev = (devPtr->deviceId & 0xf); /* get the chip revision */
+
+ if (chipRev == 9 || chipRev == 5 || chipRev == 1 || /* Z2 chip revisions */
+ chipRev == 8 || chipRev == 4 || chipRev == 0) /* Z1 chip revisions */ {
+ /* this is an X33X0 or E20X0 Z2/Z1 device and not supported (not compatible with A0) */
+ return MTD_FAIL;
+ }
+ }
+
+ /* Enable AN and set speed back to power-on default in case previously forced
+ Only do it if forced, to avoid an extra/unnecessary soft reset */
+ ATTEMPT(mtdGetForcedSpeed(devPtr, port, &speedForced, &dummy));
+ if (speedForced) {
+ ATTEMPT(mtdUndoForcedSpeed(devPtr, port, MTD_FALSE));
+ }
+
+ if (speed_bits == MTD_ADV_NONE) {
+ /* Set all speeds to be disabled
+ Take care of bits in 7.0010 (advertisement register, 10BT and 100BT bits) */
+ ATTEMPT(mtdHwSetPhyRegField(devPtr, port, 7, 0x0010,\
+ MTD_7_0010_SPEED_BIT_POS, MTD_7_0010_SPEED_BIT_LENGTH, \
+ 0));
+
+ /* Take care of speed bits in 7.8000 (1000BASE-T speed bits) */
+ ATTEMPT(mtdHwSetPhyRegField(devPtr, port, 7, 0x8000,\
+ MTD_7_8000_SPEED_BIT_POS, MTD_7_8000_SPEED_BIT_LENGTH, \
+ 0));
+
+ /* Now take care of bit in 7.0020 (10GBASE-T) */
+ ATTEMPT(mtdHwSetPhyRegField(devPtr, port, 7, 0x0020,\
+ MTD_7_0020_SPEED_BIT_POS, MTD_7_0020_SPEED_BIT_LENGTH, 0));
+
+ if (MTD_IS_X33X0_BASE(devPtr->deviceId)) {
+ /* Now take care of bits in 7.0020 (2.5G, 5G speed bits) */
+ ATTEMPT(mtdHwSetPhyRegField(devPtr, port, 7, 0x0020,\
+ MTD_7_0020_SPEED_BIT_POS2, MTD_7_0020_SPEED_BIT_LENGTH2, 0));
+ }
+ } else {
+ /* Take care of bits in 7.0010 (advertisement register, 10BT and 100BT bits) */
+ ATTEMPT(mtdHwSetPhyRegField(devPtr, port, 7, 0x0010,\
+ MTD_7_0010_SPEED_BIT_POS, MTD_7_0010_SPEED_BIT_LENGTH, \
+ (speed_bits & MTD_LOWER_BITS_MASK)));
+
+ /* Take care of speed bits in 7.8000 (1000BASE-T speed bits) */
+ ATTEMPT(mtdHwSetPhyRegField(devPtr, port, 7, 0x8000,\
+ MTD_7_8000_SPEED_BIT_POS, MTD_7_8000_SPEED_BIT_LENGTH, \
+ MTD_GET_1000BT_BITS(speed_bits)));
+
+ /* Now take care of bits in 7.0020 (10GBASE-T first) */
+ ATTEMPT(mtdHwXmdioRead(devPtr, port, 7, 0x0020, &tempRegValue));
+ ATTEMPT(mtdHwSetRegFieldToWord(tempRegValue, MTD_GET_10GBT_BIT(speed_bits),\
+ MTD_7_0020_SPEED_BIT_POS, MTD_7_0020_SPEED_BIT_LENGTH, \
+ &tempRegValue));
+
+ if (MTD_IS_X33X0_BASE(devPtr->deviceId)) {
+ /* Now take care of 2.5G bit in 7.0020 */
+ ATTEMPT(mtdHwSetRegFieldToWord(tempRegValue, MTD_GET_2P5GBT_BIT(speed_bits),\
+ 7, 1, \
+ &tempRegValue));
+
+ /* Now take care of 5G bit in 7.0020 */
+ ATTEMPT(mtdHwSetRegFieldToWord(tempRegValue, MTD_GET_5GBT_BIT(speed_bits),\
+ 8, 1, \
+ &tempRegValue));
+ }
+
+ /* Now write result back to 7.0020 */
+ ATTEMPT(mtdHwXmdioWrite(devPtr, port, 7, 0x0020, tempRegValue));
+
+ if (MTD_GET_10GBT_BIT(speed_bits) ||
+ MTD_GET_2P5GBT_BIT(speed_bits) ||
+ MTD_GET_5GBT_BIT(speed_bits)) {
+ /* Set XNP on if any bit that required it was set */
+ ATTEMPT(mtdHwSetPhyRegField(devPtr, port, 7, 0, 13, 1, 1));
+ }
+ }
+
+ if (anRestart) {
+ return ((MTD_STATUS)(mtdAutonegEnable(devPtr, port) ||
+ mtdAutonegRestart(devPtr, port)));
+ }
+
+ return MTD_OK;
+}
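+
+/*
+ * Usage sketch (illustrative): to advertise 1G and 10G full duplex and kick
+ * off negotiation in one call:
+ *
+ *	mtdEnableSpeeds(devPtr, port,
+ *			MTD_SPEED_1GIG_FD | MTD_SPEED_10GIG_FD, MTD_TRUE);
+ *
+ * Passing any MTD_*_AN_DIS forced-speed bit here fails by design; forcing is
+ * a separate path, undone via mtdUndoForcedSpeed() below.
+ */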
+
+MTD_STATUS mtdUndoForcedSpeed(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_BOOL anRestart)
+{
+ ATTEMPT(mtdHwSetPhyRegField(devPtr, port, MTD_T_UNIT_PMA_PMD, MTD_TUNIT_IEEE_PMA_CTRL1, 13, 1, 1));
+ ATTEMPT(mtdHwSetPhyRegField(devPtr, port, MTD_T_UNIT_PMA_PMD, MTD_TUNIT_IEEE_PMA_CTRL1, 6, 1, 1));
+
+ /* when speed bits are changed, T unit sw reset is required, wait until phy is ready */
+ ATTEMPT(mtdSoftwareReset(devPtr, port, 1000));
+
+ if (anRestart) {
+ return ((MTD_STATUS)(mtdAutonegEnable(devPtr, port) ||
+ mtdAutonegRestart(devPtr, port)));
+ }
+
+ return MTD_OK;
+}
+
+MTD_STATUS mtdGetForcedSpeed(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ OUT MTD_BOOL *speedIsForced,
+ OUT MTD_U16 *forcedSpeed)
+{
+ MTD_U16 val, bit0, bit1, forcedSpeedBits, duplexBit;
+ MTD_BOOL anDisabled;
+
+ *speedIsForced = MTD_FALSE;
+ *forcedSpeed = MTD_ADV_NONE;
+
+ /* check if 7.0.12 is 0 or 1 (disabled or enabled) */
+ ATTEMPT(mtdHwGetPhyRegField(devPtr, port, 7, 0, 12, 1, &val));
+
+	anDisabled = val ? MTD_FALSE : MTD_TRUE;
+
+ if (anDisabled) {
+ /* autoneg is disabled, see if it's forced to one of the speeds that work without AN */
+ ATTEMPT(mtdHwGetPhyRegField(devPtr, port, MTD_T_UNIT_PMA_PMD, MTD_TUNIT_IEEE_PMA_CTRL1, 6, 1, &bit0));
+ ATTEMPT(mtdHwGetPhyRegField(devPtr, port, MTD_T_UNIT_PMA_PMD, MTD_TUNIT_IEEE_PMA_CTRL1, 13, 1, &bit1));
+
+ /* now read the duplex bit setting */
+ ATTEMPT(mtdHwGetPhyRegField(devPtr, port, 7, 0x8000, 4, 1, &duplexBit));
+
+		forcedSpeedBits = bit0 | (bit1 << 1);
+
+ if (forcedSpeedBits == 0) {
+ /* it's set to 10BT */
+ if (duplexBit) {
+ *speedIsForced = MTD_TRUE;
+ *forcedSpeed = MTD_SPEED_10M_FD_AN_DIS;
+ } else {
+ *speedIsForced = MTD_TRUE;
+ *forcedSpeed = MTD_SPEED_10M_HD_AN_DIS;
+ }
+ } else if (forcedSpeedBits == 2) {
+ /* it's set to 100BT */
+ if (duplexBit) {
+ *speedIsForced = MTD_TRUE;
+ *forcedSpeed = MTD_SPEED_100M_FD_AN_DIS;
+ } else {
+ *speedIsForced = MTD_TRUE;
+ *forcedSpeed = MTD_SPEED_100M_HD_AN_DIS;
+ }
+ }
+ /* else it's set to 1000BT or 10GBT which require AN to work */
+ }
+
+ return MTD_OK;
+}
+
+MTD_STATUS mtdAutonegRestart(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port)
+{
+ /* set 7.0.9, restart AN */
+ return (mtdHwSetPhyRegField(devPtr, port, 7, 0,
+ 9, 1, 1));
+}
+
+MTD_STATUS mtdAutonegEnable(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port)
+{
+ /* set 7.0.12=1, enable AN */
+ return (mtdHwSetPhyRegField(devPtr, port, 7, 0,
+ 12, 1, 1));
+}
+
+/******************************************************************************
+ MTD_STATUS mtdAutonegIsSpeedDuplexResolutionDone
+ (
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ OUT MTD_BOOL *anSpeedResolutionDone
+ );
+
+ Inputs:
+ devPtr - pointer to MTD_DEV initialized by mtdLoadDriver() call
+ port - MDIO port address, 0-31
+
+ Outputs:
+ anSpeedResolutionDone - one of the following
+ MTD_TRUE if speed/duplex is resolved
+ MTD_FALSE if speed/duplex is not resolved
+
+ Returns:
+ MTD_OK or MTD_FAIL, if query was successful or not
+
+ Description:
+ Queries register 3.8008.11 Speed/Duplex resolved to see if autonegotiation
+ is resolved or in progress. See note below. This function is only to be
+ called if autonegotiation is enabled and speed is not forced.
+
+ anSpeedResolutionDone being MTD_TRUE, only indicates if AN has determined
+ the speed and duplex bits in 3.8008, which will indicate what registers
+ to read later for AN resolution after AN has completed.
+
+ Side effects:
+ None
+
+ Notes/Warnings:
+ If autonegotiation is disabled or speed is forced, this function returns
+ MTD_TRUE.
+
+******************************************************************************/
+MTD_STATUS mtdAutonegIsSpeedDuplexResolutionDone(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ OUT MTD_BOOL *anSpeedResolutionDone)
+{
+ MTD_U16 val;
+
+ /* read speed/duplex resolution done bit in 3.8008 bit 11 */
+ if (mtdHwGetPhyRegField(devPtr, port,
+ 3, 0x8008, 11, 1, &val) == MTD_FAIL) {
+ *anSpeedResolutionDone = MTD_FALSE;
+ return MTD_FAIL;
+ }
+
+	*anSpeedResolutionDone = val ? MTD_TRUE : MTD_FALSE;
+
+ return MTD_OK;
+}
+
+MTD_STATUS mtdGetAutonegSpeedDuplexResolution(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ OUT MTD_U16 *speedResolution)
+{
+ MTD_U16 val, speed, speed2, duplex;
+ MTD_BOOL resDone;
+
+ *speedResolution = MTD_ADV_NONE;
+
+ /* check if AN is enabled */
+ ATTEMPT(mtdHwGetPhyRegField(devPtr, port, \
+ 7, 0, 12, 1, &val));
+
+ if (val) {
+ /* an is enabled, check if speed is resolved */
+ ATTEMPT(mtdAutonegIsSpeedDuplexResolutionDone(devPtr, port, &resDone));
+
+ if (resDone) {
+ ATTEMPT(mtdHwGetPhyRegField(devPtr, port, \
+ 3, 0x8008, 14, 2, &speed));
+
+ ATTEMPT(mtdHwGetPhyRegField(devPtr, port, \
+ 3, 0x8008, 13, 1, &duplex));
+
+ switch (speed) {
+ case MTD_CU_SPEED_10_MBPS:
+ if (duplex) {
+ *speedResolution = MTD_SPEED_10M_FD;
+ } else {
+ *speedResolution = MTD_SPEED_10M_HD;
+ }
+ break;
+ case MTD_CU_SPEED_100_MBPS:
+ if (duplex) {
+ *speedResolution = MTD_SPEED_100M_FD;
+ } else {
+ *speedResolution = MTD_SPEED_100M_HD;
+ }
+ break;
+ case MTD_CU_SPEED_1000_MBPS:
+ if (duplex) {
+ *speedResolution = MTD_SPEED_1GIG_FD;
+ } else {
+ *speedResolution = MTD_SPEED_1GIG_HD;
+ }
+ break;
+ case MTD_CU_SPEED_10_GBPS: /* also MTD_CU_SPEED_NBT */
+ if (MTD_IS_X32X0_BASE(devPtr->deviceId)) {
+ *speedResolution = MTD_SPEED_10GIG_FD; /* 10G has only full duplex, ignore duplex bit */
+ } else {
+ ATTEMPT(mtdHwGetPhyRegField(devPtr, port, \
+ 3, 0x8008, 2, 2, &speed2));
+
+ switch (speed2) {
+ case MTD_CU_SPEED_NBT_10G:
+ *speedResolution = MTD_SPEED_10GIG_FD;
+ break;
+
+ case MTD_CU_SPEED_NBT_5G:
+ *speedResolution = MTD_SPEED_5GIG_FD;
+ break;
+
+ case MTD_CU_SPEED_NBT_2P5G:
+ *speedResolution = MTD_SPEED_2P5GIG_FD;
+ break;
+
+ default:
+ /* this is an error */
+ return MTD_FAIL;
+ }
+ }
+ break;
+ default:
+ /* this is an error */
+ return MTD_FAIL;
+ }
+
+ }
+
+ }
+
+ return MTD_OK;
+}
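+
+/*
+ * Polling sketch (illustrative): a link monitor would wait for resolution
+ * before trusting the result, e.g.
+ *
+ *	MTD_BOOL done;
+ *	MTD_U16 speed = MTD_ADV_NONE;
+ *
+ *	if (mtdAutonegIsSpeedDuplexResolutionDone(devPtr, port, &done) == MTD_OK
+ *	    && done == MTD_TRUE)
+ *		mtdGetAutonegSpeedDuplexResolution(devPtr, port, &speed);
+ *
+ * speed stays MTD_ADV_NONE while negotiation is still in progress.
+ */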
+
+MTD_STATUS mtdSetPauseAdvertisement(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U32 pauseType,
+ IN MTD_BOOL anRestart)
+{
+ /* sets/clears bits 11, 10 (A6,A5 in the tech bit field of 7.16) */
+ ATTEMPT(mtdHwSetPhyRegField(devPtr, port, 7, 0x0010, \
+ 10, 2, (MTD_U16)pauseType));
+
+ if (anRestart) {
+ return ((MTD_STATUS)(mtdAutonegEnable(devPtr, port) ||
+ mtdAutonegRestart(devPtr, port)));
+ }
+
+ return MTD_OK;
+}
+
+/******************************************************************************
+ MTD_STATUS mtdAutonegIsCompleted
+ (
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ OUT MTD_BOOL *anStatusReady
+ );
+
+ Inputs:
+ devPtr - pointer to MTD_DEV initialized by mtdLoadDriver() call
+ port - MDIO port address, 0-31
+
+ Outputs:
+ anStatusReady - one of the following
+ MTD_TRUE if AN status registers are available to be read (7.1, 7.33, 7.32769, etc.)
+ MTD_FALSE if AN is not completed and AN status registers may contain old data
+
+ Returns:
+ MTD_OK or MTD_FAIL, if query was successful or not
+
+ Description:
+ Checks 7.1.5 for 1. If 1, returns MTD_TRUE. If not, returns MTD_FALSE. Many
+ autonegotiation status registers are not valid unless AN has completed
+ meaning 7.1.5 = 1.
+
+ Side effects:
+ None
+
+ Notes/Warnings:
+ Call this function before reading 7.33 or 7.32769 to check for master/slave
+ resolution or other negotiated parameters which are negotiated during
+ autonegotiation like fast retrain, fast retrain type, etc.
+
+******************************************************************************/
+MTD_STATUS mtdAutonegIsCompleted(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ OUT MTD_BOOL *anStatusReady)
+{
+ MTD_U16 val;
+
+ /* read an completed, 7.1.5 bit */
+ if (mtdHwGetPhyRegField(devPtr, port,
+ 7, 1, 5, 1, &val) == MTD_FAIL) {
+ *anStatusReady = MTD_FALSE;
+ return MTD_FAIL;
+ }
+
+	*anStatusReady = val ? MTD_TRUE : MTD_FALSE;
+
+ return MTD_OK;
+}
+
+MTD_STATUS mtdGetLPAdvertisedPause(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ OUT MTD_U8 *pauseBits)
+{
+ MTD_U16 val;
+ MTD_BOOL anStatusReady;
+
+ /* Make sure AN is complete */
+ ATTEMPT(mtdAutonegIsCompleted(devPtr, port, &anStatusReady));
+
+ if (anStatusReady == MTD_FALSE) {
+ *pauseBits = MTD_CLEAR_PAUSE;
+ return MTD_FAIL;
+ }
+
+ /* get bits 11, 10 (A6,A5 in the tech bit field of 7.19) */
+ if (mtdHwGetPhyRegField(devPtr, port, 7, 19,
+ 10, 2, &val) == MTD_FAIL) {
+ *pauseBits = MTD_CLEAR_PAUSE;
+ return MTD_FAIL;
+ }
+
+ *pauseBits = (MTD_U8)val;
+
+ return MTD_OK;
+}
+
+/*******************************************************************
+ Firmware Version
+ *******************************************************************/
+/****************************************************************************/
+MTD_STATUS mtdGetFirmwareVersion(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ OUT MTD_U8 *major,
+ OUT MTD_U8 *minor,
+ OUT MTD_U8 *inc,
+ OUT MTD_U8 *test)
+{
+ MTD_U16 reg_49169, reg_49170;
+
+	ATTEMPT(mtdHwXmdioRead(devPtr, port, 1, 49169, &reg_49169));
+
+ *major = (reg_49169 & 0xFF00) >> 8;
+ *minor = (reg_49169 & 0x00FF);
+
+	ATTEMPT(mtdHwXmdioRead(devPtr, port, 1, 49170, &reg_49170));
+
+ *inc = (reg_49170 & 0xFF00) >> 8;
+ *test = (reg_49170 & 0x00FF);
+
+ /* firmware is not running if all 0's */
+ if (!(*major || *minor || *inc || *test)) {
+ return MTD_FAIL;
+ }
+ return MTD_OK;
+}
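+
+/* Example (sketch): log the running application firmware version; all-zero
+   values (reported as MTD_FAIL) mean the firmware is not loaded/running.
+
+   MTD_U8 major, minor, inc, test;
+
+   if (mtdGetFirmwareVersion(devPtr, port, &major, &minor, &inc, &test) == MTD_OK) {
+       MTD_DBG_INFO("PHY firmware %u.%u.%u.%u\n", major, minor, inc, test);
+   }
+*/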
+
+
+MTD_STATUS mtdGetPhyRevision(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ OUT MTD_DEVICE_ID * phyRev,
+ OUT MTD_U8 *numPorts,
+ OUT MTD_U8 *thisPort)
+{
+ MTD_U16 temp = 0, tryCounter, temp2, baseType, reportedHwRev;
+ MTD_U16 revision = 0, numports, thisport, readyBit, fwNumports, fwThisport;
+ MTD_BOOL registerExists, regReady, hasMacsec, hasCopper, isE20X0Device;
+ MTD_U8 major, minor, inc, test;
+
+ *phyRev = MTD_REV_UNKNOWN; /* in case we have any failed ATTEMPT below, will return unknown */
+ *numPorts = 0;
+ *thisPort = 0;
+
+ /* first check base type of device, get reported rev and port info */
+ ATTEMPT(mtdHwXmdioRead(devPtr, port, 3, 0xD00D, &temp));
+ baseType = ((temp & 0xFC00) >> 6);
+ reportedHwRev = (temp & 0x000F);
+ numports = ((temp & 0x0380) >> 7) + 1;
+ thisport = ((temp & 0x0070) >> 4);
+
+ /* find out if device has macsec/ptp, copper unit or is an E20X0-type device */
+ ATTEMPT(mtdCheckDeviceCapabilities(devPtr, port, &hasMacsec, &hasCopper, &isE20X0Device));
+
+ /* check if internal processor firmware is up and running, and if so, easier to get info */
+ if (mtdGetFirmwareVersion(devPtr, port, &major, &minor, &inc, &test) == MTD_FAIL) {
+ major = minor = inc = test = 0; /* this is expected if firmware is not loaded/running */
+ }
+
+ if (major == 0 && minor == 0 && inc == 0 && test == 0) {
+ /* no firmware running, have to verify device revision */
+ if (MTD_IS_X32X0_BASE(baseType)) {
+ /* A0 and Z2 report the same revision, need to check which is which */
+ if (reportedHwRev == 1) {
+ /* need to figure out if it's A0 or Z2 */
+ /* remove internal reset */
+ ATTEMPT(mtdHwSetPhyRegField(devPtr, port, 3, 0xD801, 5, 1, 1));
+
+ /* wait until it's ready */
+ regReady = MTD_FALSE;
+ tryCounter = 0;
+ while (regReady == MTD_FALSE && tryCounter++ < 10) {
+ ATTEMPT(mtdWait(1)); /* timeout is set to 10 ms */
+ ATTEMPT(mtdHwGetPhyRegField(devPtr, port, 3, 0xD007, 6, 1, &readyBit));
+ if (readyBit == 1) {
+ regReady = MTD_TRUE;
+ }
+ }
+
+ if (regReady == MTD_FALSE) {
+ /* timed out, can't tell for sure what rev this is */
+ *numPorts = 0;
+ *thisPort = 0;
+ *phyRev = MTD_REV_UNKNOWN;
+ return MTD_FAIL;
+ }
+
+ /* perform test */
+ registerExists = MTD_FALSE;
+ ATTEMPT(mtdHwXmdioRead(devPtr, port, 3, 0x8EC6, &temp));
+ ATTEMPT(mtdHwXmdioWrite(devPtr, port, 3, 0x8EC6, 0xA5A5));
+ ATTEMPT(mtdHwXmdioRead(devPtr, port, 3, 0x8EC6, &temp2));
+
+ /* put back internal reset */
+ ATTEMPT(mtdHwSetPhyRegField(devPtr, port, 3, 0xD801, 5, 1, 0));
+
+ if (temp == 0 && temp2 == 0xA5A5) {
+ registerExists = MTD_TRUE;
+ }
+
+ if (registerExists == MTD_TRUE) {
+ revision = 2; /* this is actually QA0 */
+ } else {
+ revision = reportedHwRev; /* this is a QZ2 */
+ }
+
+ } else {
+ /* it's not A0 or Z2, use what's reported by the hardware */
+ revision = reportedHwRev;
+ }
+ } else if (MTD_IS_X33X0_BASE(baseType)) {
+ /* all 33X0 devices report correct revision */
+ revision = reportedHwRev;
+ }
+
+ /* have to use what's reported by the hardware */
+ *numPorts = (MTD_U8)numports;
+ *thisPort = (MTD_U8)thisport;
+ } else {
+ /* there is firmware loaded/running in internal processor */
+ /* can get device revision reported by firmware */
+ ATTEMPT(mtdHwXmdioRead(devPtr, port, MTD_T_UNIT_PMA_PMD, MTD_TUNIT_PHY_REV_INFO_REG, &temp));
+ ATTEMPT(mtdHwGetRegFieldFromWord(temp, 0, 4, &revision));
+ ATTEMPT(mtdHwGetRegFieldFromWord(temp, 4, 3, &fwNumports));
+ ATTEMPT(mtdHwGetRegFieldFromWord(temp, 7, 3, &fwThisport));
+ if (fwNumports == numports && fwThisport == thisport) {
+ *numPorts = (MTD_U8)numports;
+ *thisPort = (MTD_U8)thisport;
+ } else {
+ *phyRev = MTD_REV_UNKNOWN;
+ *numPorts = 0;
+ *thisPort = 0;
+ return MTD_FAIL; /* firmware and hardware are reporting different values */
+ }
+ }
+
+ /* now have correct information to build up the MTD_DEVICE_ID */
+ if (MTD_IS_X32X0_BASE(baseType)) {
+ temp = MTD_X32X0_BASE;
+ } else if (MTD_IS_X33X0_BASE(baseType)) {
+ temp = MTD_X33X0_BASE;
+ } else {
+ *phyRev = MTD_REV_UNKNOWN;
+ *numPorts = 0;
+ *thisPort = 0;
+ return MTD_FAIL;
+ }
+
+ if (hasMacsec) {
+ temp |= MTD_MACSEC_CAPABLE;
+ }
+
+ if (hasCopper) {
+ temp |= MTD_COPPER_CAPABLE;
+ }
+
+ if (MTD_IS_X33X0_BASE(baseType) && isE20X0Device) {
+ temp |= MTD_E20X0_DEVICE;
+ }
+
+ temp |= (revision & 0xF);
+
+ *phyRev = (MTD_DEVICE_ID)temp;
+
+ /* make sure we got a good one */
+ if (mtdIsPhyRevisionValid(*phyRev) == MTD_OK) {
+ return MTD_OK;
+ } else {
+ return MTD_FAIL; /* unknown or unsupported, if recognized but unsupported, value is still valid */
+ }
+}
+
+MTD_STATUS mtdIsPhyRevisionValid(IN MTD_DEVICE_ID phyRev)
+{
+ switch (phyRev) {
+ /* list must match MTD_DEVICE_ID */
+ case MTD_REV_3240P_Z2:
+ case MTD_REV_3240P_A0:
+ case MTD_REV_3240P_A1:
+ case MTD_REV_3220P_Z2:
+ case MTD_REV_3220P_A0:
+
+ case MTD_REV_3240_Z2:
+ case MTD_REV_3240_A0:
+ case MTD_REV_3240_A1:
+ case MTD_REV_3220_Z2:
+ case MTD_REV_3220_A0:
+
+ case MTD_REV_3310P_A0:
+ case MTD_REV_3320P_A0:
+ case MTD_REV_3340P_A0:
+ case MTD_REV_3310_A0:
+ case MTD_REV_3320_A0:
+ case MTD_REV_3340_A0:
+
+ case MTD_REV_E2010P_A0:
+ case MTD_REV_E2020P_A0:
+ case MTD_REV_E2040P_A0:
+ case MTD_REV_E2010_A0:
+ case MTD_REV_E2020_A0:
+ case MTD_REV_E2040_A0:
+
+ case MTD_REV_2340P_A1:
+ case MTD_REV_2320P_A0:
+ case MTD_REV_2340_A1:
+ case MTD_REV_2320_A0:
+ return MTD_OK;
+
+ /* unsupported PHYs */
+ case MTD_REV_3310P_Z1:
+ case MTD_REV_3320P_Z1:
+ case MTD_REV_3340P_Z1:
+ case MTD_REV_3310_Z1:
+ case MTD_REV_3320_Z1:
+ case MTD_REV_3340_Z1:
+
+ case MTD_REV_3310P_Z2:
+ case MTD_REV_3320P_Z2:
+ case MTD_REV_3340P_Z2:
+ case MTD_REV_3310_Z2:
+ case MTD_REV_3320_Z2:
+ case MTD_REV_3340_Z2:
+
+
+ case MTD_REV_E2010P_Z2:
+ case MTD_REV_E2020P_Z2:
+ case MTD_REV_E2040P_Z2:
+ case MTD_REV_E2010_Z2:
+ case MTD_REV_E2020_Z2:
+ case MTD_REV_E2040_Z2:
+ default:
+ return MTD_FAIL; /* is either MTD_REV_UNKNOWN or not in the above list */
+ }
+}
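+
+/* Example (sketch): decode the device id returned by mtdGetPhyRevision()
+   with the capability macros from txgbe_mtd.h.
+
+   MTD_DEVICE_ID phyRev;
+   MTD_U8 numPorts, thisPort;
+
+   if (mtdGetPhyRevision(devPtr, port, &phyRev, &numPorts, &thisPort) == MTD_OK) {
+       if (MTD_IS_X33X0_BASE(phyRev) && MTD_IS_MACSEC_CAPABLE(phyRev)) {
+           // an 88X33X0-based part with the MACsec/PTP module
+       }
+   }
+*/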
+
+/* mtdCunit.c */
+MTD_STATUS mtdCunitSwReset(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port)
+{
+ return mtdHwSetPhyRegField(devPtr, port, MTD_C_UNIT_GENERAL, MTD_CUNIT_PORT_CTRL, 15, 1, 1);
+}
+
+/* mtdHxunit.c */
+MTD_STATUS mtdRerunSerdesAutoInitializationUseAutoMode(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port)
+{
+ MTD_U16 temp, temp2, temp3;
+ MTD_U16 waitCounter;
+
+ ATTEMPT(mtdHwXmdioRead(devPtr, port, MTD_T_UNIT_AN, MTD_SERDES_CTRL_STATUS, &temp));
+
+ ATTEMPT(mtdHwSetRegFieldToWord(temp, 3, 14, 2, &temp2)); /* execute bits and disable bits set */
+
+ ATTEMPT(mtdHwXmdioWrite(devPtr, port, MTD_T_UNIT_AN, MTD_SERDES_CTRL_STATUS, temp2));
+
+ /* wait for it to be done */
+ waitCounter = 0;
+ ATTEMPT(mtdHwXmdioRead(devPtr, port, MTD_T_UNIT_AN, MTD_SERDES_CTRL_STATUS, &temp3));
+ while ((temp3 & 0x8000) && (waitCounter < 100)) {
+ ATTEMPT(mtdWait(1));
+ ATTEMPT(mtdHwXmdioRead(devPtr, port, MTD_T_UNIT_AN, MTD_SERDES_CTRL_STATUS, &temp3));
+ waitCounter++;
+ }
+
+	/* if the speed changed, let it stay; that's the speed the serdes ended up being initialized to */
+ if (waitCounter >= 100) {
+ return MTD_FAIL; /* execute timed out */
+ }
+
+ return MTD_OK;
+}
+
+
+/* mtdHunit.c */
+/******************************************************************************
+ Mac Interface functions
+******************************************************************************/
+
+MTD_STATUS mtdSetMacInterfaceControl(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U16 macType,
+ IN MTD_BOOL macIfPowerDown,
+ IN MTD_U16 macIfSnoopSel,
+ IN MTD_U16 macIfActiveLaneSelect,
+ IN MTD_U16 macLinkDownSpeed,
+ IN MTD_U16 macMaxIfSpeed, /* 33X0/E20X0 devices only */
+ IN MTD_BOOL doSwReset,
+ IN MTD_BOOL rerunSerdesInitialization)
+{
+ MTD_U16 cunitPortCtrl, cunitModeConfig;
+
+ /* do range checking on parameters */
+	if (macType > MTD_MAC_LEAVE_UNCHANGED) {
+ return MTD_FAIL;
+ }
+
+ if ((macIfSnoopSel > MTD_MAC_SNOOP_LEAVE_UNCHANGED) ||
+ (macIfSnoopSel == 1)) {
+ return MTD_FAIL;
+ }
+
+ if (macIfActiveLaneSelect > 1) {
+ return MTD_FAIL;
+ }
+
+ if (macLinkDownSpeed > MTD_MAC_SPEED_LEAVE_UNCHANGED) {
+ return MTD_FAIL;
+ }
+
+ if (!(macMaxIfSpeed == MTD_MAX_MAC_SPEED_10G ||
+ macMaxIfSpeed == MTD_MAX_MAC_SPEED_5G ||
+ macMaxIfSpeed == MTD_MAX_MAC_SPEED_2P5G ||
+ macMaxIfSpeed == MTD_MAX_MAC_SPEED_LEAVE_UNCHANGED ||
+ macMaxIfSpeed == MTD_MAX_MAC_SPEED_NOT_APPLICABLE)) {
+ return MTD_FAIL;
+ }
+
+
+ ATTEMPT(mtdHwXmdioRead(devPtr, port, MTD_C_UNIT_GENERAL, MTD_CUNIT_PORT_CTRL, &cunitPortCtrl));
+ ATTEMPT(mtdHwXmdioRead(devPtr, port, MTD_C_UNIT_GENERAL, MTD_CUNIT_MODE_CONFIG, &cunitModeConfig));
+
+	/* Because writes of some of these bits don't show up in the register on a
+	 * read until after the software reset, we can't do repeated
+	 * read-modify-writes to the same register or we will lose those changes.
+	 *
+	 * This approach also cuts down on IO and speeds up the code.
+	 */
+
+ if (macType < MTD_MAC_LEAVE_UNCHANGED) {
+ ATTEMPT(mtdHwSetRegFieldToWord(cunitPortCtrl, macType, 0, 3, &cunitPortCtrl));
+ }
+
+ ATTEMPT(mtdHwSetRegFieldToWord(cunitModeConfig, (MTD_U16)macIfPowerDown, 3, 1, &cunitModeConfig));
+
+ if (macIfSnoopSel < MTD_MAC_SNOOP_LEAVE_UNCHANGED) {
+ ATTEMPT(mtdHwSetRegFieldToWord(cunitModeConfig, macIfSnoopSel, 8, 2, &cunitModeConfig));
+ }
+
+ ATTEMPT(mtdHwSetRegFieldToWord(cunitModeConfig, macIfActiveLaneSelect, 10, 1, &cunitModeConfig));
+
+ if (macLinkDownSpeed < MTD_MAC_SPEED_LEAVE_UNCHANGED) {
+ ATTEMPT(mtdHwSetRegFieldToWord(cunitModeConfig, macLinkDownSpeed, 6, 2, &cunitModeConfig));
+ }
+
+ /* Now write changed values */
+ ATTEMPT(mtdHwXmdioWrite(devPtr, port, MTD_C_UNIT_GENERAL, MTD_CUNIT_PORT_CTRL, cunitPortCtrl));
+ ATTEMPT(mtdHwXmdioWrite(devPtr, port, MTD_C_UNIT_GENERAL, MTD_CUNIT_MODE_CONFIG, cunitModeConfig));
+
+ if (MTD_IS_X33X0_BASE(devPtr->deviceId)) {
+ if (macMaxIfSpeed != MTD_MAX_MAC_SPEED_LEAVE_UNCHANGED) {
+ ATTEMPT(mtdHwSetPhyRegField(devPtr, port, 31, 0xF0A8, 0, 2, macMaxIfSpeed));
+ }
+ }
+
+ if (doSwReset == MTD_TRUE) {
+ ATTEMPT(mtdCunitSwReset(devPtr, port));
+
+ if (macLinkDownSpeed < MTD_MAC_SPEED_LEAVE_UNCHANGED) {
+ ATTEMPT(mtdCunitSwReset(devPtr, port)); /* need 2x for changes to macLinkDownSpeed */
+ }
+
+ if (rerunSerdesInitialization == MTD_TRUE) {
+ ATTEMPT(mtdRerunSerdesAutoInitializationUseAutoMode(devPtr, port));
+ }
+ }
+
+ return MTD_OK;
+}
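+
+/* Example (sketch; the parameter choices below are illustrative only):
+   select XFI with SGMII AN enabled and let the function perform the
+   software reset and serdes re-initialization itself.
+
+   ATTEMPT(mtdSetMacInterfaceControl(devPtr, port,
+           MTD_MAC_TYPE_XFI_SGMII_AN_EN,       // macType
+           MTD_FALSE,                          // macIfPowerDown
+           MTD_MAC_SNOOP_LEAVE_UNCHANGED,      // macIfSnoopSel
+           0,                                  // macIfActiveLaneSelect
+           MTD_MAC_SPEED_LEAVE_UNCHANGED,      // macLinkDownSpeed
+           MTD_MAX_MAC_SPEED_LEAVE_UNCHANGED,  // macMaxIfSpeed
+           MTD_TRUE,                           // doSwReset
+           MTD_TRUE));                         // rerunSerdesInitialization
+*/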
+
+
+/*******************************************************************************
+* mtdSemCreate
+*
+* DESCRIPTION:
+* Create semaphore.
+*
+* INPUTS:
+* state - beginning state of the semaphore, either MTD_SEM_EMPTY or MTD_SEM_FULL
+*
+* OUTPUTS:
+* None
+*
+* RETURNS:
+* MTD_SEM if success. Otherwise, NULL
+*
+* COMMENTS:
+* None
+*
+*******************************************************************************/
+MTD_SEM mtdSemCreate(
+ IN MTD_DEV * dev,
+ IN MTD_SEM_BEGIN_STATE state)
+{
+ if (dev->semCreate)
+ return dev->semCreate(state);
+
+ return 1; /* should return any value other than 0 to let it keep going */
+}
+
+MTD_STATUS mtdLoadDriver(
+ IN FMTD_READ_MDIO readMdio,
+ IN FMTD_WRITE_MDIO writeMdio,
+ IN MTD_BOOL macsecIndirectAccess,
+ IN FMTD_SEM_CREATE semCreate,
+ IN FMTD_SEM_DELETE semDelete,
+ IN FMTD_SEM_TAKE semTake,
+ IN FMTD_SEM_GIVE semGive,
+ IN MTD_U16 anyPort,
+ OUT MTD_DEV * dev)
+{
+ MTD_U16 data;
+
+ MTD_DBG_INFO("mtdLoadDriver Called.\n");
+
+ /* Check for parameters validity */
+ if (dev == NULL) {
+ MTD_DBG_ERROR("MTD_DEV pointer is NULL.\n");
+ return MTD_API_ERR_DEV;
+ }
+
+ /* The initialization was already done. */
+ if (dev->devEnabled) {
+ MTD_DBG_ERROR("Device Driver already loaded.\n");
+ return MTD_API_ERR_DEV_ALREADY_EXIST;
+ }
+
+ /* Make sure mtdWait() was implemented */
+ if (mtdWait(1) == MTD_FAIL) {
+ MTD_DBG_ERROR("mtdWait() not implemented.\n");
+ return MTD_FAIL;
+ }
+
+ dev->fmtdReadMdio = readMdio;
+ dev->fmtdWriteMdio = writeMdio;
+
+ dev->semCreate = semCreate;
+ dev->semDelete = semDelete;
+ dev->semTake = semTake;
+ dev->semGive = semGive;
+ dev->macsecIndirectAccess = macsecIndirectAccess; /* 88X33X0 and later force direct access */
+
+ /* try to read 1.0 */
+ if ((mtdHwXmdioRead(dev, anyPort, 1, 0, &data)) != MTD_OK) {
+		MTD_DBG_ERROR("Reading from reg %x failed.\n", 0);
+ return MTD_API_FAIL_READ_REG;
+ }
+
+	MTD_DBG_INFO("mtdLoadDriver: MDIO read check passed.\n");
+
+ /* Initialize the MACsec Register Access semaphore. */
+ dev->multiAddrSem = mtdSemCreate(dev, MTD_SEM_FULL);
+ if (dev->multiAddrSem == 0) {
+ MTD_DBG_ERROR("semCreate Failed.\n");
+ return MTD_API_FAIL_SEM_CREATE;
+ }
+
+ if (dev->msec_ctrl.msec_rev == MTD_MSEC_REV_FPGA) {
+ dev->deviceId = MTD_REV_3310P_Z2; /* verification: change if needed */
+ dev->numPorts = 1; /* verification: change if needed */
+ dev->thisPort = 0;
+ } else {
+ /* After everything else is done, can fill in the device id */
+ if ((mtdGetPhyRevision(dev, anyPort,
+ &(dev->deviceId),
+ &(dev->numPorts),
+ &(dev->thisPort))) != MTD_OK) {
+ MTD_DBG_ERROR("mtdGetPhyRevision Failed.\n");
+ return MTD_FAIL;
+ }
+ }
+
+ if (MTD_IS_X33X0_BASE(dev->deviceId)) {
+ dev->macsecIndirectAccess = MTD_FALSE; /* bug was fixed in 88X33X0 and later revisions, go direct */
+ }
+
+ dev->devEnabled = MTD_TRUE;
+
+	MTD_DBG_INFO("mtdLoadDriver successful.\n");
+
+ return MTD_OK;
+}
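+
+/* Example (sketch): minimal bring-up. hostMdioRead/hostMdioWrite are
+   hypothetical host callbacks matching FMTD_READ_MDIO/FMTD_WRITE_MDIO;
+   the semaphore hooks are optional and passed as NULL here.
+
+   MTD_DEV mtdDev = { 0 };
+
+   if (mtdLoadDriver(hostMdioRead, hostMdioWrite, MTD_TRUE,
+                     NULL, NULL, NULL, NULL, 0, &mtdDev) != MTD_OK) {
+       // bring-up failed; mtdDev must not be used
+   }
+*/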
diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe_mtd.h b/drivers/net/ethernet/netswift/txgbe/txgbe_mtd.h
new file mode 100644
index 0000000000000..1c5daae94a547
--- /dev/null
+++ b/drivers/net/ethernet/netswift/txgbe/txgbe_mtd.h
@@ -0,0 +1,1540 @@
+/*
+ * WangXun 10 Gigabit PCI Express Linux driver
+ * Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ */
+
+#ifndef _TXGBE_MTD_H_
+#define _TXGBE_MTD_H_
+
+#define C_LINKAGE 1 /* set to 1 if C compile/linkage on C files is desired with C++ */
+
+#if C_LINKAGE
+#if defined __cplusplus
+ extern "C" {
+#endif
+#endif
+
+/* general */
+
+#undef IN
+#define IN
+#undef OUT
+#define OUT
+#undef INOUT
+#define INOUT
+
+#ifndef NULL
+#define NULL ((void *)0)
+#endif
+
+typedef void MTD_VOID;
+typedef char MTD_8;
+typedef short MTD_16;
+typedef long MTD_32;
+typedef long long MTD_64;
+
+typedef unsigned char MTD_U8;
+typedef unsigned short MTD_U16;
+typedef unsigned long MTD_U32;
+typedef unsigned int MTD_UINT;
+typedef int MTD_INT;
+typedef signed short MTD_S16;
+
+typedef unsigned long long MTD_U64;
+
+typedef enum {
+ MTD_FALSE = 0,
+ MTD_TRUE = 1
+} MTD_BOOL;
+
+#define MTD_CONVERT_BOOL_TO_UINT(boolVar, uintVar) \
+ {(boolVar) ? (uintVar = 1) : (uintVar = 0); }
+#define MTD_CONVERT_UINT_TO_BOOL(uintVar, boolVar) \
+ {(uintVar) ? (boolVar = MTD_TRUE) : (boolVar = MTD_FALSE); }
+#define MTD_GET_BOOL_AS_BIT(boolVar) ((boolVar) ? 1 : 0)
+#define MTD_GET_BIT_AS_BOOL(uintVar) ((uintVar) ? MTD_TRUE : MTD_FALSE)
+
+typedef void (*MTD_VOIDFUNCPTR) (void); /* ptr to function returning void */
+typedef MTD_U32 (*MTD_INTFUNCPTR) (void); /* ptr to function returning int */
+
+typedef MTD_U32 MTD_STATUS;
+
+/* Defines for semaphore support */
+typedef MTD_U32 MTD_SEM;
+
+typedef enum {
+ MTD_SEM_EMPTY,
+ MTD_SEM_FULL
+} MTD_SEM_BEGIN_STATE;
+
+typedef MTD_SEM (*FMTD_SEM_CREATE)(MTD_SEM_BEGIN_STATE state);
+typedef MTD_STATUS (*FMTD_SEM_DELETE)(MTD_SEM semId);
+typedef MTD_STATUS (*FMTD_SEM_TAKE)(MTD_SEM semId, MTD_U32 timOut);
+typedef MTD_STATUS (*FMTD_SEM_GIVE)(MTD_SEM semId);
+
+/* Defines for mtdLoadDriver() mtdUnloadDriver() and all API functions which need MTD_DEV */
+typedef struct _MTD_DEV MTD_DEV;
+typedef MTD_DEV * MTD_DEV_PTR;
+
+typedef MTD_STATUS (*FMTD_READ_MDIO)(
+ MTD_DEV *dev,
+ MTD_U16 port,
+ MTD_U16 mmd,
+ MTD_U16 reg,
+ MTD_U16 *value);
+typedef MTD_STATUS (*FMTD_WRITE_MDIO)(
+ MTD_DEV *dev,
+ MTD_U16 port,
+ MTD_U16 mmd,
+ MTD_U16 reg,
+ MTD_U16 value);
+
+/* MTD_DEVICE_ID format: */
+/* Bits 15:13 reserved */
+/* Bit 12: 1-> E20X0 device with max speed of 5G and no fiber interface */
+/* Bit 11: 1-> Macsec Capable (Macsec/PTP module included) */
+/* Bit 10: 1-> Copper Capable (T unit interface included) */
+/* Bits 9:4 0x18 -> X32X0 base, 0x1A -> X33X0 base */
+/* Bits 3:0 revision/number of ports indication, see list */
+/* Following defines are for building MTD_DEVICE_ID */
+#define MTD_E20X0_DEVICE (1<<12) /* whether this is an E20X0 device group */
+#define MTD_MACSEC_CAPABLE (1<<11) /* whether the device has a Macsec/PTP module */
+#define MTD_COPPER_CAPABLE (1<<10) /* whether the device has a copper (T unit) module */
+#define MTD_X32X0_BASE (0x18<<4) /* whether the device uses X32X0 firmware base */
+#define MTD_X33X0_BASE (0x1A<<4) /* whether the device uses X33X0 firmware base */
+
+/* Following macros are to test MTD_DEVICE_ID for various features */
+#define MTD_IS_E20X0_DEVICE(mTdrevId) ((MTD_BOOL)(mTdrevId & MTD_E20X0_DEVICE))
+#define MTD_IS_MACSEC_CAPABLE(mTdrevId) ((MTD_BOOL)(mTdrevId & MTD_MACSEC_CAPABLE))
+#define MTD_IS_COPPER_CAPABLE(mTdrevId) ((MTD_BOOL)(mTdrevId & MTD_COPPER_CAPABLE))
+#define MTD_IS_X32X0_BASE(mTdrevId) ((MTD_BOOL)((mTdrevId & (0x3F<<4)) == MTD_X32X0_BASE))
+#define MTD_IS_X33X0_BASE(mTdrevId) ((MTD_BOOL)((mTdrevId & (0x3F<<4)) == MTD_X33X0_BASE))
+
+#define MTD_X33X0BASE_SINGLE_PORTA0 0xA
+#define MTD_X33X0BASE_DUAL_PORTA0 0x6
+#define MTD_X33X0BASE_QUAD_PORTA0 0x2
+
+/* WARNING: If you add/modify this list, you must also modify mtdIsPhyRevisionValid() */
+typedef enum {
+ MTD_REV_UNKNOWN = 0,
+ MTD_REV_3240P_Z2 = (MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X32X0_BASE | 0x1),
+ MTD_REV_3240P_A0 = (MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X32X0_BASE | 0x2),
+ MTD_REV_3240P_A1 = (MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X32X0_BASE | 0x3),
+ MTD_REV_3220P_Z2 = (MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X32X0_BASE | 0x4),
+ MTD_REV_3220P_A0 = (MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X32X0_BASE | 0x5),
+ MTD_REV_3240_Z2 = (MTD_COPPER_CAPABLE | MTD_X32X0_BASE | 0x1),
+ MTD_REV_3240_A0 = (MTD_COPPER_CAPABLE | MTD_X32X0_BASE | 0x2),
+ MTD_REV_3240_A1 = (MTD_COPPER_CAPABLE | MTD_X32X0_BASE | 0x3),
+ MTD_REV_3220_Z2 = (MTD_COPPER_CAPABLE | MTD_X32X0_BASE | 0x4),
+ MTD_REV_3220_A0 = (MTD_COPPER_CAPABLE | MTD_X32X0_BASE | 0x5),
+
+ MTD_REV_3310P_Z1 = (MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | 0x8), /* 88X33X0 Z1 not supported starting with version 1.2 of API */
+ MTD_REV_3320P_Z1 = (MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | 0x4),
+ MTD_REV_3340P_Z1 = (MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | 0x0),
+ MTD_REV_3310_Z1 = (MTD_COPPER_CAPABLE | MTD_X33X0_BASE | 0x8),
+ MTD_REV_3320_Z1 = (MTD_COPPER_CAPABLE | MTD_X33X0_BASE | 0x4),
+ MTD_REV_3340_Z1 = (MTD_COPPER_CAPABLE | MTD_X33X0_BASE | 0x0),
+
+ MTD_REV_3310P_Z2 = (MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | 0x9), /* 88X33X0 Z2 not supported starting with version 1.2 of API */
+ MTD_REV_3320P_Z2 = (MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | 0x5),
+ MTD_REV_3340P_Z2 = (MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | 0x1),
+ MTD_REV_3310_Z2 = (MTD_COPPER_CAPABLE | MTD_X33X0_BASE | 0x9),
+ MTD_REV_3320_Z2 = (MTD_COPPER_CAPABLE | MTD_X33X0_BASE | 0x5),
+ MTD_REV_3340_Z2 = (MTD_COPPER_CAPABLE | MTD_X33X0_BASE | 0x1),
+
+ MTD_REV_E2010P_Z2 = (MTD_E20X0_DEVICE | MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | 0x9), /* E20X0 Z2 not supported starting with version 1.2 of API */
+ MTD_REV_E2020P_Z2 = (MTD_E20X0_DEVICE | MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | 0x5),
+ MTD_REV_E2040P_Z2 = (MTD_E20X0_DEVICE | MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | 0x1),
+ MTD_REV_E2010_Z2 = (MTD_E20X0_DEVICE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | 0x9),
+ MTD_REV_E2020_Z2 = (MTD_E20X0_DEVICE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | 0x5),
+ MTD_REV_E2040_Z2 = (MTD_E20X0_DEVICE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | 0x1),
+
+
+ MTD_REV_3310P_A0 = (MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | MTD_X33X0BASE_SINGLE_PORTA0),
+ MTD_REV_3320P_A0 = (MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | MTD_X33X0BASE_DUAL_PORTA0),
+ MTD_REV_3340P_A0 = (MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | MTD_X33X0BASE_QUAD_PORTA0),
+ MTD_REV_3310_A0 = (MTD_COPPER_CAPABLE | MTD_X33X0_BASE | MTD_X33X0BASE_SINGLE_PORTA0),
+ MTD_REV_3320_A0 = (MTD_COPPER_CAPABLE | MTD_X33X0_BASE | MTD_X33X0BASE_DUAL_PORTA0),
+ MTD_REV_3340_A0 = (MTD_COPPER_CAPABLE | MTD_X33X0_BASE | MTD_X33X0BASE_QUAD_PORTA0),
+
+ MTD_REV_E2010P_A0 = (MTD_E20X0_DEVICE | MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | MTD_X33X0BASE_SINGLE_PORTA0),
+ MTD_REV_E2020P_A0 = (MTD_E20X0_DEVICE | MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | MTD_X33X0BASE_DUAL_PORTA0),
+ MTD_REV_E2040P_A0 = (MTD_E20X0_DEVICE | MTD_MACSEC_CAPABLE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | MTD_X33X0BASE_QUAD_PORTA0),
+ MTD_REV_E2010_A0 = (MTD_E20X0_DEVICE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | MTD_X33X0BASE_SINGLE_PORTA0),
+ MTD_REV_E2020_A0 = (MTD_E20X0_DEVICE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | MTD_X33X0BASE_DUAL_PORTA0),
+ MTD_REV_E2040_A0 = (MTD_E20X0_DEVICE | MTD_COPPER_CAPABLE | MTD_X33X0_BASE | MTD_X33X0BASE_QUAD_PORTA0),
+
+ MTD_REV_2340P_A1 = (MTD_MACSEC_CAPABLE | MTD_X32X0_BASE | 0x3),
+ MTD_REV_2320P_A0 = (MTD_MACSEC_CAPABLE | MTD_X32X0_BASE | 0x5),
+ MTD_REV_2340_A1 = (MTD_X32X0_BASE | 0x3),
+ MTD_REV_2320_A0 = (MTD_X32X0_BASE | 0x5)
+} MTD_DEVICE_ID;
+
+typedef enum {
+ MTD_MSEC_REV_Z0A,
+ MTD_MSEC_REV_Y0A,
+ MTD_MSEC_REV_A0B,
+ MTD_MSEC_REV_FPGA,
+ MTD_MSEC_REV_UNKNOWN = -1
+} MTD_MSEC_REV;
+
+/* compatible for USB test */
+typedef struct _MTD_MSEC_CTRL {
+ MTD_32 dev_num; /* indicates the device number (0 if only one) when multiple devices are present on SVB.*/
+ MTD_32 port_num; /* Indicates which port (0 to 4) is requesting CPU */
+ MTD_U16 prev_addr; /* < Prev write address */
+ MTD_U16 prev_dataL; /* < Prev dataL value */
+ MTD_MSEC_REV msec_rev; /* revision */
+} MTD_MSEC_CTRL;
+
+struct _MTD_DEV {
+ MTD_DEVICE_ID deviceId; /* type of device and capabilities */
+ MTD_BOOL devEnabled; /* whether mtdLoadDriver() called successfully */
+ MTD_U8 numPorts; /* number of ports per device */
+ MTD_U8 thisPort; /* relative port number on this device starting with 0 (not MDIO address) */
+ MTD_SEM multiAddrSem;
+
+ FMTD_READ_MDIO fmtdReadMdio;
+ FMTD_WRITE_MDIO fmtdWriteMdio;
+
+	FMTD_SEM_CREATE semCreate; /* create semaphore */
+	FMTD_SEM_DELETE semDelete; /* delete the semaphore */
+	FMTD_SEM_TAKE semTake; /* try to get a semaphore */
+	FMTD_SEM_GIVE semGive; /* return semaphore */
+
+ MTD_U8 macsecIndirectAccess; /* if MTD_TRUE use internal processor to access Macsec */
+ MTD_MSEC_CTRL msec_ctrl; /* structure use for internal verification */
+
+ void *appData; /* application specific data, anything the host wants to pass to the low layer */
+};
+
+#define MTD_OK 0 /* Operation succeeded */
+#define MTD_FAIL 1 /* Operation failed */
+#define MTD_PENDING 2 /* Pending */
+
+/* bit definition */
+#define MTD_BIT_0 0x0001
+#define MTD_BIT_1 0x0002
+#define MTD_BIT_2 0x0004
+#define MTD_BIT_3 0x0008
+#define MTD_BIT_4 0x0010
+#define MTD_BIT_5 0x0020
+#define MTD_BIT_6 0x0040
+#define MTD_BIT_7 0x0080
+#define MTD_BIT_8 0x0100
+#define MTD_BIT_9 0x0200
+#define MTD_BIT_10 0x0400
+#define MTD_BIT_11 0x0800
+#define MTD_BIT_12 0x1000
+#define MTD_BIT_13 0x2000
+#define MTD_BIT_14 0x4000
+#define MTD_BIT_15 0x8000
+
+#define MTD_DBG_ERROR(...)
+#define MTD_DBG_INFO(...)
+#define MTD_DBG_CRITIC_INFO(...)
+
+
+#define MTD_API_MAJOR_VERSION 2
+#define MTD_API_MINOR_VERSION 0
+
+/* This macro is handy for calling a function when you want to test the
+   return value and return MTD_FAIL if the function returned MTD_FAIL,
+   otherwise continue */
+#define ATTEMPT(xFuncToTry) do { if ((xFuncToTry) == MTD_FAIL) { return MTD_FAIL; } } while (0)
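+
+/* Example (sketch): because ATTEMPT() issues a return, it may only be used
+   inside functions that themselves return MTD_STATUS. readTwoRegs() is a
+   hypothetical helper:
+
+   MTD_STATUS readTwoRegs(MTD_DEV_PTR devPtr, MTD_U16 port, MTD_U16 *a, MTD_U16 *b)
+   {
+       ATTEMPT(mtdHwXmdioRead(devPtr, port, 1, 0x0000, a));
+       ATTEMPT(mtdHwXmdioRead(devPtr, port, 1, 0x0001, b));
+       return MTD_OK;
+   }
+*/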
+
+/* These defines are used for some registers which represent the copper
+ speed as a 2-bit binary number */
+#define MTD_CU_SPEED_10_MBPS 0 /* copper is 10BASE-T */
+#define MTD_CU_SPEED_100_MBPS 1 /* copper is 100BASE-TX */
+#define MTD_CU_SPEED_1000_MBPS 2 /* copper is 1000BASE-T */
+#define MTD_CU_SPEED_10_GBPS 3 /* copper is 10GBASE-T */
+
+/* for 88X33X0 family: */
+#define MTD_CU_SPEED_NBT 3 /* copper is NBASE-T */
+#define MTD_CU_SPEED_NBT_10G 0 /* copper is 10GBASE-T */
+#define MTD_CU_SPEED_NBT_5G 2 /* copper is 5GBASE-T */
+#define MTD_CU_SPEED_NBT_2P5G 1 /* copper is 2.5GBASE-T */
+
+#define MTD_ADV_NONE 0x0000 /* No speeds to be advertised */
+#define MTD_SPEED_10M_HD 0x0001 /* 10BT half-duplex */
+#define MTD_SPEED_10M_FD 0x0002 /* 10BT full-duplex */
+#define MTD_SPEED_100M_HD 0x0004 /* 100BASE-TX half-duplex */
+#define MTD_SPEED_100M_FD 0x0008 /* 100BASE-TX full-duplex */
+#define MTD_SPEED_1GIG_HD 0x0010 /* 1000BASE-T half-duplex */
+#define MTD_SPEED_1GIG_FD 0x0020 /* 1000BASE-T full-duplex */
+#define MTD_SPEED_10GIG_FD 0x0040 /* 10GBASE-T full-duplex */
+#define MTD_SPEED_2P5GIG_FD 0x0800 /* 2.5GBASE-T full-duplex, 88X33X0/88E20X0 family only */
+#define MTD_SPEED_5GIG_FD 0x1000 /* 5GBASE-T full-duplex, 88X33X0/88E20X0 family only */
+#define MTD_SPEED_ALL (MTD_SPEED_10M_HD | \
+ MTD_SPEED_10M_FD | \
+ MTD_SPEED_100M_HD | \
+ MTD_SPEED_100M_FD | \
+ MTD_SPEED_1GIG_HD | \
+ MTD_SPEED_1GIG_FD | \
+ MTD_SPEED_10GIG_FD)
+#define MTD_SPEED_ALL_33X0 (MTD_SPEED_10M_HD | \
+ MTD_SPEED_10M_FD | \
+ MTD_SPEED_100M_HD | \
+ MTD_SPEED_100M_FD | \
+ MTD_SPEED_1GIG_HD | \
+ MTD_SPEED_1GIG_FD | \
+ MTD_SPEED_10GIG_FD | \
+ MTD_SPEED_2P5GIG_FD |\
+ MTD_SPEED_5GIG_FD)
+
+/* these bits are for forcing the speed and disabling autonegotiation */
+#define MTD_SPEED_10M_HD_AN_DIS 0x0080 /* Speed forced to 10BT half-duplex */
+#define MTD_SPEED_10M_FD_AN_DIS 0x0100 /* Speed forced to 10BT full-duplex */
+#define MTD_SPEED_100M_HD_AN_DIS 0x0200 /* Speed forced to 100BT half-duplex */
+#define MTD_SPEED_100M_FD_AN_DIS 0x0400 /* Speed forced to 100BT full-duplex */
+
+/* this value is returned for the speed when the link status is checked and the speed has been */
+/* forced to one speed but the link is up at a different speed. it indicates an error. */
+#define MTD_SPEED_MISMATCH 0x8000 /* Speed is forced to one speed, but status indicates another */
+
+
+/* for macType */
+#define MTD_MAC_TYPE_RXAUI_SGMII_AN_EN (0x0) /* X32X0/X33x0, but not E20x0 */
+#define MTD_MAC_TYPE_RXAUI_SGMII_AN_DIS (0x1) /* X32x0/X3340/X3320, but not X3310/E20x0 */
+#define MTD_MAC_TYPE_XAUI_RATE_ADAPT (0x1) /* X3310,E2010 only */
+#define MTD_MAC_TYPE_RXAUI_RATE_ADAPT (0x2)
+#define MTD_MAC_TYPE_XAUI (0x3) /* X3310,E2010 only */
+#define MTD_MAC_TYPE_XFI_SGMII_AN_EN (0x4) /* XFI at 10G, X33x0/E20x0 also use 5GBASE-R/2500BASE-X */
+#define MTD_MAC_TYPE_XFI_SGMII_AN_DIS (0x5) /* XFI at 10G, X33x0/E20x0 also use 5GBASE-R/2500BASE-X */
+#define MTD_MAC_TYPE_XFI_RATE_ADAPT (0x6)
+#define MTD_MAC_TYPE_USXGMII (0x7) /* X33x0 only */
+#define MTD_MAC_LEAVE_UNCHANGED (0x8) /* use this option to not touch these bits */
+
+/* for macIfSnoopSel */
+#define MTD_MAC_SNOOP_FROM_NETWORK (0x2)
+#define MTD_MAC_SNOOP_FROM_HOST (0x3)
+#define MTD_MAC_SNOOP_OFF (0x0)
+#define MTD_MAC_SNOOP_LEAVE_UNCHANGED (0x4) /* use this option to not touch these bits */
+/* for macLinkDownSpeed */
+#define MTD_MAC_SPEED_10_MBPS MTD_CU_SPEED_10_MBPS
+#define MTD_MAC_SPEED_100_MBPS MTD_CU_SPEED_100_MBPS
+#define MTD_MAC_SPEED_1000_MBPS MTD_CU_SPEED_1000_MBPS
+#define MTD_MAC_SPEED_10_GBPS MTD_CU_SPEED_10_GBPS
+#define MTD_MAC_SPEED_LEAVE_UNCHANGED (0x4)
+/* X33X0/E20X0 devices only for macMaxIfSpeed */
+#define MTD_MAX_MAC_SPEED_10G (0)
+#define MTD_MAX_MAC_SPEED_5G (2)
+#define MTD_MAX_MAC_SPEED_2P5G (3)
+#define MTD_MAX_MAC_SPEED_LEAVE_UNCHANGED (4)
+#define MTD_MAX_MAC_SPEED_NOT_APPLICABLE (4) /* 32X0 devices can pass this */
+
+/* 88X3240/3220 Device Number Definitions */
+#define MTD_T_UNIT_PMA_PMD 1
+#define MTD_T_UNIT_PCS_CU 3
+#define MTD_X_UNIT 3
+#define MTD_H_UNIT 4
+#define MTD_T_UNIT_AN 7
+#define MTD_XFI_DSP 30
+#define MTD_C_UNIT_GENERAL 31
+#define MTD_M_UNIT 31
+
+/* 88X3240/3220 Device Number Definitions Host Redundant Mode */
+#define MTD_BASER_LANE_0 MTD_H_UNIT
+#define MTD_BASER_LANE_1 MTD_X_UNIT
+
+/* 88X3240/3220 T Unit Registers MMD 1 */
+#define MTD_TUNIT_IEEE_PMA_CTRL1 0x0000 /* do not enclose in parentheses */
+#define MTD_TUNIT_XG_EXT_STATUS 0xC001 /* do not enclose in parentheses */
+#define MTD_TUNIT_PHY_REV_INFO_REG 0xC04E /* do not enclose in parentheses */
+
+/* control/status for serdes initialization */
+#define MTD_SERDES_CTRL_STATUS 0x800F /* do not enclose in parentheses */
+/* 88X3240/3220 C Unit Registers MMD 31 */
+#define MTD_CUNIT_MODE_CONFIG 0xF000 /* do not enclose in parentheses */
+#define MTD_CUNIT_PORT_CTRL 0xF001 /* do not enclose in parentheses */
+
+#define MTD_API_FAIL_SEM_CREATE (0x18<<24) /*semCreate Failed. */
+#define MTD_API_FAIL_SEM_DELETE (0x19<<24) /*semDelete Failed. */
+#define MTD_API_FAIL_READ_REG (0x16<<16) /*Reading from phy reg failed. */
+#define MTD_API_ERR_DEV (0x3c<<16) /*driver structure is NULL. */
+#define MTD_API_ERR_DEV_ALREADY_EXIST (0x3e<<16) /*Device Driver already loaded. */
+
+
+#define MTD_CLEAR_PAUSE 0 /* clears both pause bits */
+#define MTD_SYM_PAUSE 1 /* for symmetric pause only */
+#define MTD_ASYM_PAUSE 2 /* for asymmetric pause only */
+#define MTD_SYM_ASYM_PAUSE 3 /* for both */
+
+
+/*******************************************************************************
+ mtdLoadDriver
+
+ DESCRIPTION:
+ Marvell X32X0 Driver Initialization Routine.
+ This is the first routine that needs to be called by system software.
+ It takes parameters from system software and returns a pointer (*dev)
+ to a data structure which includes information related to this Marvell Phy
+ device. This pointer (*dev) is then used for all the API functions.
+ The following is the job performed by this routine:
+ 1. store MDIO read/write function into the given MTD_DEV structure
+ 2. run any device specific initialization routine
+ 3. create semaphore if required
+ 4. Initialize the deviceId
+
+
+ INPUTS:
+ readMdio - pointer to host's function to do MDIO read
+ writeMdio - pointer to host's function to do MDIO write
+ macsecIndirectAccess - MTD_TRUE to access MacSec through T-unit processor
+ MTD_FALSE to do direct register access
+ semCreate - pointer to host's function to create a semaphore, NULL
+ if not used
+ semDelete - pointer to host's function to delete a semaphore, NULL
+ if not used
+ semTake - pointer to host's function to take a semaphore, NULL
+ if not used
+ semGive - pointer to host's function to give a semaphore, NULL
+ if not used
+ anyPort - port address of any port for this device
+
+ OUTPUTS:
+ dev - pointer to a structure that holds device information, to be used for each API call.
+
+ RETURNS:
+ MTD_OK - on success
+ MTD_FAIL - on error
+
+ COMMENTS:
+ mtdUnloadDriver is also provided to do driver cleanup.
+
+ An MTD_DEV is required for each type of X32X0 device in the system. For
+ example, if there are 16 ports of X3240 and 4 ports of X3220,
+ two MTD_DEV are required, and one call to mtdLoadDriver() must
+ be made with one of the X3240 ports, and one with one of the X3220
+ ports.
+*******************************************************************************/
+MTD_STATUS mtdLoadDriver
+(
+ IN FMTD_READ_MDIO readMdio,
+ IN FMTD_WRITE_MDIO writeMdio,
+ IN MTD_BOOL macsecIndirectAccess,
+ IN FMTD_SEM_CREATE semCreate,
+ IN FMTD_SEM_DELETE semDelete,
+ IN FMTD_SEM_TAKE semTake,
+ IN FMTD_SEM_GIVE semGive,
+ IN MTD_U16 anyPort,
+ OUT MTD_DEV * dev
+);
+
+/******************************************************************************
+MTD_STATUS mtdHwXmdioWrite
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U16 dev,
+ IN MTD_U16 reg,
+ IN MTD_U16 value
+);
+
+ Inputs:
+ devPtr - pointer to MTD_DEV initialized by mtdLoadDriver() call
+ port - MDIO port address, 0-31
+ dev - MMD device address, 0-31
+ reg - MMD register address
+ value - data to write
+
+ Outputs:
+ None
+
+ Returns:
+ MTD_OK - wrote successfully
+ MTD_FAIL - an error occurred
+
+ Description:
+ Writes a 16-bit word to the MDIO
+ Address is in format X.Y.Z, where X selects the MDIO port (0-31), Y selects
+ the MMD/Device (0-31), and Z selects the register.
+
+ Side effects:
+ None
+
+ Notes/Warnings:
+ None
+
+******************************************************************************/
+MTD_STATUS mtdHwXmdioWrite
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U16 dev,
+ IN MTD_U16 reg,
+ IN MTD_U16 value
+);
+
+/******************************************************************************
+ MTD_STATUS mtdHwXmdioRead
+ (
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U16 dev,
+ IN MTD_U16 reg,
+ OUT MTD_U16 *data
+ );
+
+ Inputs:
+ devPtr - pointer to MTD_DEV initialized by mtdLoadDriver() call
+ port - MDIO port address, 0-31
+ dev - MMD device address, 0-31
+ reg - MMD register address
+
+ Outputs:
+ data - Returns 16 bit word from the MDIO
+
+ Returns:
+ MTD_OK - read successful
+ MTD_FAIL - read was unsuccessful
+
+ Description:
+ Reads a 16-bit word from the MDIO
+ Address is in format X.Y.Z, where X selects the MDIO port (0-31), Y selects the
+ MMD/Device (0-31), and Z selects the register.
+
+ Side effects:
+ None
+
+ Notes/Warnings:
+ None
+
+******************************************************************************/
+MTD_STATUS mtdHwXmdioRead
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U16 dev,
+ IN MTD_U16 reg,
+ OUT MTD_U16 *data
+);
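+
+/* Example (sketch): whole-register read-modify-write with the raw MDIO
+   accessors; register 1.0 and the low-power bit are used for illustration.
+
+   MTD_U16 val;
+
+   ATTEMPT(mtdHwXmdioRead(devPtr, port, 1, 0, &val));
+   val |= MTD_BIT_11;                              // e.g. set 1.0.11 (low power)
+   ATTEMPT(mtdHwXmdioWrite(devPtr, port, 1, 0, val));
+*/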
+
+
+/*******************************************************************************
+ MTD_STATUS mtdHwGetPhyRegField
+ (
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U16 dev,
+ IN MTD_U16 regAddr,
+ IN MTD_U8 fieldOffset,
+ IN MTD_U8 fieldLength,
+ OUT MTD_U16 *data
+ );
+
+ Inputs:
+ devPtr - pointer to MTD_DEV initialized by mtdLoadDriver() call
+ port - The port number, 0-31
+ dev - The MMD device, 0-31
+ regAddr - The register's address
+ fieldOffset - The field start bit index. (0 - 15)
+ fieldLength - Number of bits to read
+
+ Outputs:
+ data - The read register field
+
+ Returns:
+ MTD_OK on success, or
+ MTD_FAIL - on error
+
+ Description:
+ This function reads a specified field from a port's phy register.
+ It first reads the register, then returns the specified bit
+ field from what was read.
+
+ Side effects:
+ None
+
+ Notes/Warnings:
+    The sum of the fieldOffset and fieldLength parameters must be less than
+    or equal to 16.
+
+ Reading a register with latched bits may clear the latched bits.
+ Use with caution for registers with latched bits.
+
+ To operate on several bits within a register which has latched bits
+ before reading the register again, first read the register with
+ mtdHwXmdioRead() to get the register value, then operate on the
+ register data repeatedly using mtdHwGetRegFieldFromWord() to
+ take apart the bit fields without re-reading the register again.
+
+ This approach should also be used to reduce IO to the PHY when reading
+ multiple bit fields (do a single read, then grab different fields
+ from the register by using mtdHwGetRegFieldFromWord() repeatedly).
+
+*******************************************************************************/
+MTD_STATUS mtdHwGetPhyRegField
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U16 dev,
+ IN MTD_U16 regAddr,
+ IN MTD_U8 fieldOffset,
+ IN MTD_U8 fieldLength,
+ OUT MTD_U16 *data
+);
+
+/*******************************************************************************
+ MTD_STATUS mtdHwSetPhyRegField
+ (
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U16 dev,
+ IN MTD_U16 regAddr,
+ IN MTD_U8 fieldOffset,
+ IN MTD_U8 fieldLength,
+ IN MTD_U16 data
+ );
+
+ Inputs:
+ devPtr - pointer to MTD_DEV initialized by mtdLoadDriver() call
+ port - The port number, 0-31
+ dev - The MMD device, 0-31
+ regAddr - The register's address
+ fieldOffset - The field start bit index. (0 - 15)
+ fieldLength - Number of bits to write
+ data - Data to be written.
+
+ Outputs:
+ None.
+
+ Returns:
+ MTD_OK on success, or
+ MTD_FAIL - on error
+
+ Description:
+    This function writes to the specified field in a port's phy register.
+
+ Side effects:
+ None
+
+ Notes/Warnings:
+    The sum of the fieldOffset and fieldLength parameters must be less than
+    or equal to 16.
+
+*******************************************************************************/
+MTD_STATUS mtdHwSetPhyRegField
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U16 dev,
+ IN MTD_U16 regAddr,
+ IN MTD_U8 fieldOffset,
+ IN MTD_U8 fieldLength,
+ IN MTD_U16 data
+);
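+
+/* Example (sketch): the field accessors do the masking/shifting for you.
+   Here the 2-bit field at bits 7:6 of 31.F001 is read and written back
+   unchanged; the addresses are illustrative.
+
+   MTD_U16 field;
+
+   ATTEMPT(mtdHwGetPhyRegField(devPtr, port, 31, MTD_CUNIT_PORT_CTRL, 6, 2, &field));
+   ATTEMPT(mtdHwSetPhyRegField(devPtr, port, 31, MTD_CUNIT_PORT_CTRL, 6, 2, field));
+*/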
+
+/*******************************************************************************
+ MTD_STATUS mtdHwGetRegFieldFromWord
+ (
+ IN MTD_U16 regData,
+ IN MTD_U8 fieldOffset,
+ IN MTD_U8 fieldLength,
+ OUT MTD_U16 *data
+ );
+
+ Inputs:
+ regData - The data previously read from the register
+ fieldOffset - The field start bit index. (0 - 15)
+ fieldLength - Number of bits to read
+
+ Outputs:
+ data - The data from the associated bit field
+
+ Returns:
+ MTD_OK always
+
+ Description:
+ This function grabs a value from a bitfield within a word. It could
+ be used to get the value of a bitfield within a word which was previously
+ read from the PHY.
+
+ Side effects:
+ None
+
+ Notes/Warnings:
+    The sum of the fieldOffset and fieldLength parameters must be less than
+    or equal to 16.
+
+    This function acts on the data passed in; it does no hardware access.
+
+ This function is useful if you want to do 1 register access and then
+ get different bit fields without doing another register access either
+ because there are latched bits in the register to avoid another read,
+ or to keep hardware IO down to improve performance/throughput.
+
+ Example:
+
+ MTD_U16 aword, nibble1, nibble2;
+
+    mtdHwXmdioRead(devPtr,0,3,MTD_TUNIT_IEEE_PCS_CTRL1,&aword); // Read 3.0 from port 0
+ mtdHwGetRegFieldFromWord(aword,0,4,&nibble1); // grab first nibble
+ mtdHwGetRegFieldFromWord(aword,4,4,&nibble2); // grab second nibble
+
+*******************************************************************************/
+MTD_STATUS mtdHwGetRegFieldFromWord
+(
+ IN MTD_U16 regData,
+ IN MTD_U8 fieldOffset,
+ IN MTD_U8 fieldLength,
+ OUT MTD_U16 *data
+);
+
+/*******************************************************************************
+ MTD_STATUS mtdHwSetRegFieldToWord
+ (
+ IN MTD_U16 regData,
+ IN MTD_U16 bitFieldData,
+ IN MTD_U8 fieldOffset,
+ IN MTD_U8 fieldLength,
+ OUT MTD_U16 *data
+ );
+
+ Inputs:
+ regData - original word to modify
+ bitFieldData - The data to set the register field to
+ (must be <= largest value for that bit field,
+ no range checking is done by this function)
+ fieldOffset - The field start bit index. (0 - 15)
+ fieldLength - Number of bits to write to regData
+
+ Outputs:
+ data - The new/modified regData with the bitfield changed
+
+ Returns:
+ MTD_OK always
+
+ Description:
+    This function writes a value to a bitfield within a word.
+
+ Side effects:
+ None
+
+ Notes/Warnings:
+    The sum of the fieldOffset and fieldLength parameters must be less than
+    or equal to 16.
+
+    This function acts on the data passed in; it does no hardware access.
+
+    This function is useful to reduce IO if several bit fields of a register
+    that has been read are to be changed before writing it back.
+
+ MTD_U16 aword;
+
+    mtdHwXmdioRead(devPtr,0,3,MTD_TUNIT_IEEE_PCS_CTRL1,&aword); // Read 3.0 from port 0
+ mtdHwSetRegFieldToWord(aword,2,0,4,&aword); // Change first nibble to 2
+ mtdHwSetRegFieldToWord(aword,3,4,4,&aword); // Change second nibble to 3
+
+*******************************************************************************/
+MTD_STATUS mtdHwSetRegFieldToWord
+(
+ IN MTD_U16 regData,
+ IN MTD_U16 bitFieldData,
+ IN MTD_U8 fieldOffset,
+ IN MTD_U8 fieldLength,
+ OUT MTD_U16 *data
+);
+
+
+/******************************************************************************
+MTD_STATUS mtdWait
+(
+    IN MTD_UINT x
+);
+
+ Inputs:
+ x - number of milliseconds to wait
+
+ Outputs:
+ None
+
+ Returns:
+ MTD_OK if wait was successful, MTD_FAIL otherwise
+
+ Description:
+ Waits X milliseconds
+
+ Side effects:
+ None
+
+ Notes/Warnings:
+ None
+
+******************************************************************************/
+MTD_STATUS mtdWait
+(
+ IN MTD_UINT x
+);
+
+/******************************************************************************
+MTD_STATUS mtdSoftwareReset
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U16 timeoutMs
+);
+
+ Inputs:
+ devPtr - pointer to MTD_DEV initialized by mtdLoadDriver() call
+ port - MDIO port address, 0-31
+ timeoutMs - 0 will not wait for reset to complete, otherwise
+                waits 'timeoutMs' milliseconds for reset to complete
+
+ Outputs:
+ None
+
+ Returns:
+ MTD_OK or MTD_FAIL if IO error or timed out
+
+ Description:
+    Issues a software reset (1.0.15 <= 1) command. Resets the firmware and
+    hardware state machines, returning non-retain bits to their hardware
+    reset values; retain bits keep their values through the reset.
+
+ If timeoutMs is 0, returns immediately. If timeoutMs is non-zero,
+ waits up to 'timeoutMs' milliseconds looking for the reset to complete
+ before returning. Returns MTD_FAIL if times out.
+
+ Side effects:
+ All "retain" bits keep their values through this reset. Non-"retain"-type
+ bits are returned to their hardware reset values following this reset.
+ See the Datasheet for a list of retain bits.
+
+ Notes/Warnings:
+ Use mtdIsPhyReadyAfterReset() to see if the software reset is complete
+ before issuing any other MDIO commands following this reset or pass
+ in non-zero timeoutMs to have this function do it for you.
+
+ This is a T unit software reset only. It may only be issued if the T
+ unit is ready (1.0.15 is 0) and the T unit is not in low power mode.
+
+******************************************************************************/
+MTD_STATUS mtdSoftwareReset
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U16 timeoutMs
+);
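+
+/* Example (sketch): issue a T unit software reset and wait up to 500 ms
+   for it to complete.
+
+   if (mtdSoftwareReset(devPtr, port, 500) == MTD_FAIL) {
+       // IO error or the reset did not complete within the timeout
+   }
+*/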
+
+MTD_STATUS mtdHardwareReset
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U16 timeoutMs
+);
+
+/******************************************************************************
+ MTD_STATUS mtdSetMacInterfaceControl
+ (
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U16 macType,
+ IN MTD_BOOL macIfPowerDown,
+ IN MTD_U16 macIfSnoopSel,
+ IN MTD_U16 macIfActiveLaneSelect,
+ IN MTD_U16 macLinkDownSpeed,
+ IN MTD_U16 macMaxIfSpeed, - 33X0/E20X0 devices only -
+ IN MTD_BOOL doSwReset,
+ IN MTD_BOOL rerunSerdesInitialization
+ );
+
+ Inputs:
+ devPtr - pointer to MTD_DEV initialized by mtdLoadDriver() call
+ port - port number, 0-31
+ macType - the type of MAC interface being used (the hardware interface). One of the following:
+ MTD_MAC_TYPE_RXAUI_SGMII_AN_EN - selects RXAUI with SGMII AN enabled
+ MTD_MAC_TYPE_RXAUI_SGMII_AN_DIS - selects RXAUI with SGMII AN disabled (not valid on X3310)
+ MTD_MAC_TYPE_XAUI_RATE_ADAPT - selects XAUI with rate matching (only valid on X3310)
+ MTD_MAC_TYPE_RXAUI_RATE_ADAPT - selects RXAUI with rate matching
+ MTD_MAC_TYPE_XAUI - selects XAUI (only valid on X3310)
+ MTD_MAC_TYPE_XFI_SGMII_AN_EN - selects XFI with SGMII AN enabled
+ MTD_MAC_TYPE_XFI_SGMII_AN_DIS - selects XFI with SGMII AN disabled
+ MTD_MAC_TYPE_XFI_RATE_ADAPT - selects XFI with rate matching
+ MTD_MAC_TYPE_USXGMII - selects USXGMII
+ MTD_MAC_LEAVE_UNCHANGED - option to leave this parameter unchanged/as it is
+ macIfPowerDown - MTD_TRUE if the host interface is always to be powered up
+ MTD_FALSE if the host interface can be powered down under
+ certain circumstances (see datasheet)
+ macIfSnoopSel - If snooping is requested on the other lane, selects the source
+ MTD_MAC_SNOOP_FROM_NETWORK - source of snooped data is to come from the network
+ MTD_MAC_SNOOP_FROM_HOST - source of snooped data is to come from the host
+ MTD_MAC_SNOOP_OFF - snooping is to be turned off
+ MTD_MAC_SNOOP_LEAVE_UNCHANGED - option to leave this parameter unchanged/as it is
+ macIfActiveLaneSelect - For redundant host mode, this selects the active lane. 0 or 1
+ only. 0 selects 0 as the active lane and 1 as the standby. 1 selects the other way.
+ macLinkDownSpeed - The speed the mac interface should run when the media side is
+ link down. One of the following:
+ MTD_MAC_SPEED_10_MBPS
+ MTD_MAC_SPEED_100_MBPS
+ MTD_MAC_SPEED_1000_MBPS
+ MTD_MAC_SPEED_10_GBPS
+ MTD_MAC_SPEED_LEAVE_UNCHANGED
+ macMaxIfSpeed - For X33X0/E20X0 devices only. Can be used to limit the Mac interface speed
+ MTD_MAX_MAC_SPEED_10G
+ MTD_MAX_MAC_SPEED_5G
+ MTD_MAX_MAC_SPEED_2P5G
+ MTD_MAX_MAC_SPEED_LEAVE_UNCHANGED
+ MTD_MAX_MAC_SPEED_NOT_APPLICABLE (for 32X0 devices pass this)
+ doSwReset - MTD_TRUE if a software reset (31.F001.15) should be done after these changes
+ have been made, or MTD_FALSE otherwise. See note below.
+        rerunSerdesInitialization - MTD_TRUE will attempt to reset the H unit serdes
+            if any parameter that is likely to change the speed of the serdes
+            interface (such as macLinkDownSpeed or macType) was changed (this needs
+            to be done AFTER the soft reset, so if doSwReset is passed as MTD_FALSE,
+            the host must later call mtdRerunSerdesAutoInitializationUseAutoMode()
+            to re-init the serdes).
+
+
+ Outputs:
+ None
+
+ Returns:
+ MTD_OK or MTD_FAIL if a bad parameter was passed, or an IO error occurs.
+
+ Description:
+ Changes the above parameters as indicated in 31.F000 and 31.F001 and
+ optionally does a software reset afterwards for those bits which require a
+ software reset to take effect.
+
+ Side effects:
+ None
+
+ Notes/Warnings:
+ These bits are actually in the C unit, but pertain to the host interface
+    control, so the API call was placed here.
+
+ Changes to the MAC type (31.F001.2:0) do not take effect until a software
+ reset is performed on the port.
+
+ Changes to macLinkDownSpeed (31.F001.7:6) require 2 software resets to
+ take effect. This function will do 2 resets if doSwReset is MTD_TRUE
+ and macLinkDownSpeed is being changed.
+
+ IMPORTANT: the readback reads back the last written value following
+ a software reset. Writes followed by reads without an intervening
+ software reset will read back the old bit value for all those bits
+    requiring a software reset.
+
+ Because of this, read-modify-writes to different bitfields must have an
+ intervening software reset to pick up the latest value before doing
+ another read-modify-write to the register, otherwise the bitfield
+ may lose the value.
+
+ Suggest always setting doSwReset to MTD_TRUE to avoid problems of
+ possibly losing changes.
+
+******************************************************************************/
+MTD_STATUS mtdSetMacInterfaceControl
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U16 macType,
+ IN MTD_BOOL macIfPowerDown,
+ IN MTD_U16 macIfSnoopSel,
+ IN MTD_U16 macIfActiveLaneSelect,
+ IN MTD_U16 macLinkDownSpeed,
+ IN MTD_U16 macMaxIfSpeed, /* 33X0/E20X0 devices only */
+ IN MTD_BOOL doSwReset,
+ IN MTD_BOOL rerunSerdesInitialization
+);
+
+/******************************************************************************
+ MTD_STATUS mtdEnableSpeeds
+ (
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U16 speed_bits,
+ IN MTD_BOOL anRestart
+ );
+
+ Inputs:
+ devPtr - pointer to MTD_DEV initialized by mtdLoadDriver() call
+ port - MDIO port address, 0-31
+ speed_bits - speeds to be advertised during auto-negotiation. One or more
+ of the following (bits logically OR together):
+ MTD_ADV_NONE (no bits set)
+ MTD_SPEED_10M_HD
+ MTD_SPEED_10M_FD
+ MTD_SPEED_100M_HD
+ MTD_SPEED_100M_FD
+ MTD_SPEED_1GIG_HD
+ MTD_SPEED_1GIG_FD
+ MTD_SPEED_10GIG_FD
+ MTD_SPEED_2P5GIG_FD (88X33X0/88E20X0 family only)
+ MTD_SPEED_5GIG_FD (88X33X0/88E20X0 family only)
+ MTD_SPEED_ALL
+ MTD_SPEED_ALL_33X0 (88X33X0/88E20X0 family only)
+
+ anRestart - this takes the value of MTD_TRUE or MTD_FALSE and indicates
+ if auto-negotiation should be restarted following the speed
+ enable change. If this is MTD_FALSE, the change will not
+ take effect until AN is restarted in some other way (link
+ drop, toggle low power, toggle AN enable, toggle soft reset).
+
+ If this is MTD_TRUE and AN has been disabled, it will be
+ enabled before being restarted.
+
+ Outputs:
+ None
+
+ Returns:
+ MTD_OK if action was successfully taken, MTD_FAIL if not. Also returns
+ MTD_FAIL if try to force the speed or try to advertise a speed not supported
+ on this PHY.
+
+ Description:
+ This function allows the user to select the speeds to be advertised to the
+ link partner during auto-negotiation.
+
+ First, this function enables auto-negotiation and XNPs by calling
+ mtdUndoForcedSpeed().
+
+ The function takes in a 16 bit value and sets the appropriate bits in MMD
+ 7 to have those speeds advertised.
+
+ The function also checks if the input parameter is MTD_ADV_NONE, in which case
+ all speeds are disabled effectively disabling the phy from training
+ (but not disabling auto-negotiation).
+
+ If anRestart is MTD_TRUE, an auto-negotiation restart is issued making the change
+ immediate. If anRestart is MTD_FALSE, the change will not take effect until the
+ next time auto-negotiation restarts.
+
+ Side effects:
+ Setting speed in 1.0 to 10GBASE-T has the effect of enabling XNPs in 7.0 and
+ enabling auto-negotiation in 7.0.
+
+ Notes/Warnings:
+
+ Example:
+ To train the highest speed matching the far end among
+ either 1000BASE-T Full-duplex or 10GBASE-T:
+ mtdEnableSpeeds(devPtr,port,MTD_SPEED_1GIG_FD | MTD_SPEED_10GIG_FD, MTD_TRUE);
+
+ To allow only 10GBASE-T to train:
+ mtdEnableSpeeds(devPtr,port,MTD_SPEED_10GIG_FD, MTD_TRUE);
+
+ To disable all speeds (but AN will still be running, just advertising no
+ speeds)
+ mtdEnableSpeeds(devPtr,port,MTD_ADV_NONE, MTD_TRUE);
+
+ This function is not to be used to disable autonegotiation and force the speed
+ to 10BASE-T or 100BASE-TX. Use mtdForceSpeed() for this.
+
+ 88X33X0 Z1/Z2 and E20X0 Z2 are not supported starting with API version 1.2.
+ Version 1.2 and later require A0 revision of these devices.
+
+******************************************************************************/
+MTD_STATUS mtdEnableSpeeds
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U16 speed_bits,
+ IN MTD_BOOL anRestart
+);
+
+MTD_STATUS mtdGetAutonegSpeedDuplexResolution
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ OUT MTD_U16 *speedResolution
+);
+
+MTD_STATUS mtdAutonegIsSpeedDuplexResolutionDone
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ OUT MTD_BOOL *anSpeedResolutionDone
+);
+
+/****************************************************************************/
+/*******************************************************************
+ Firmware Version
+ *******************************************************************/
+/****************************************************************************/
+
+/******************************************************************************
+MTD_STATUS mtdGetFirmwareVersion
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ OUT MTD_U8 *major,
+ OUT MTD_U8 *minor,
+ OUT MTD_U8 *inc,
+ OUT MTD_U8 *test
+);
+
+ Inputs:
+ devPtr - pointer to MTD_DEV initialized by mtdLoadDriver() call
+ port - MDIO port address, 0-31
+
+ Outputs:
+ major - major version, X.Y.Z.W, the X
+ minor - minor version, X.Y.Z.W, the Y
+ inc - incremental version, X.Y.Z.W, the Z
+ test - test version, X.Y.Z.W, the W, should be 0 for released code,
+            non-zero indicates non-released code
+
+ Returns:
+ MTD_FAIL if version can't be queried or firmware is in download mode
+ (meaning all version numbers are 0), MTD_OK otherwise
+
+ Description:
+ This function reads the firmware version number and stores it in the
+ pointers passed in by the user.
+
+ Side effects:
+ None
+
+ Notes/Warnings:
+ This function returns all 0's if the phy is in download mode. The phy
+ application code must have started and be ready before issuing this
+ command.
+
+******************************************************************************/
+MTD_STATUS mtdGetFirmwareVersion
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ OUT MTD_U8 *major,
+ OUT MTD_U8 *minor,
+ OUT MTD_U8 *inc,
+ OUT MTD_U8 *test
+);
+
+/******************************************************************************
+MTD_STATUS mtdSetPauseAdvertisement
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+    IN MTD_U32 pauseType,
+ IN MTD_BOOL anRestart
+);
+
+ Inputs:
+ devPtr - pointer to MTD_DEV initialized by mtdLoadDriver() call
+ port - MDIO port address, 0-31
+ pauseType - one of the following:
+ MTD_SYM_PAUSE,
+ MTD_ASYM_PAUSE,
+ MTD_SYM_ASYM_PAUSE or
+ MTD_CLEAR_PAUSE.
+ anRestart - this takes the value of MTD_TRUE or MTD_FALSE and indicates
+ if auto-negotiation should be restarted following the speed
+ enable change. If this is MTD_FALSE, the change will not
+ take effect until AN is restarted in some other way (link
+ drop, toggle low power, toggle AN enable, toggle soft reset).
+
+ If this is MTD_TRUE and AN has been disabled, it will be
+ enabled before being restarted.
+
+ Outputs:
+ None
+
+ Returns:
+ MTD_OK or MTD_FAIL, if action was successful or failed
+
+ Description:
+ This function sets the asymmetric and symmetric pause bits in the technology
+ ability field in the AN Advertisement register and optionally restarts
+ auto-negotiation to use the new values. This selects what type of pause
+ is to be advertised to the far end MAC during auto-negotiation. If
+ auto-negotiation is restarted, it is enabled first.
+
+ Sets entire 2-bit field to the value passed in pauseType.
+
+ To clear both bits, pass in MTD_CLEAR_PAUSE.
+
+ Side effects:
+ None
+
+ Notes/Warnings:
+    The new pause advertisement will not take effect unless auto-negotiation
+    is restarted.
+
+******************************************************************************/
+MTD_STATUS mtdSetPauseAdvertisement
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_U32 pauseType,
+ IN MTD_BOOL anRestart
+);
+
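+/* Illustrative usage sketch: advertise both symmetric and asymmetric pause
+ * and restart AN so the new advertisement takes effect immediately:
+ *
+ *	if (mtdSetPauseAdvertisement(devPtr, port, MTD_SYM_ASYM_PAUSE,
+ *			MTD_TRUE) == MTD_FAIL)
+ *		... handle the error ...
+ */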
+
+/******************************************************************************
+MTD_STATUS mtdGetLPAdvertisedPause
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ OUT MTD_U8 *pauseBits
+);
+
+ Inputs:
+ devPtr - pointer to MTD_DEV initialized by mtdLoadDriver() call
+ port - MDIO port address, 0-31
+
+ Outputs:
+    pauseBits - setting of the link partner's pause bits based on the bit
+                definitions above in mtdSetPauseAdvertisement()
+
+ Returns:
+    MTD_OK or MTD_FAIL, based on whether the query succeeded or failed. Returns
+    MTD_FAIL and sets pauseBits to MTD_CLEAR_PAUSE if AN is not complete.
+
+ Description:
+ This function reads 7.19 (LP Base page ability) and returns the advertised
+ pause setting that was received from the link partner.
+
+ Side effects:
+ None
+
+ Notes/Warnings:
+ The user must make sure auto-negotiation has completed by calling
+ mtdAutonegIsCompleted() prior to calling this function.
+
+******************************************************************************/
+MTD_STATUS mtdGetLPAdvertisedPause
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ OUT MTD_U8 *pauseBits
+);
+
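+/* Illustrative usage sketch; the warning above calls for
+ * mtdAutonegIsCompleted(), whose prototype is not shown here, so
+ * mtdAutonegIsSpeedDuplexResolutionDone() from above stands in as the guard:
+ *
+ *	MTD_BOOL anDone = MTD_FALSE;
+ *	MTD_U8 lpPause;
+ *
+ *	if (mtdAutonegIsSpeedDuplexResolutionDone(devPtr, port,
+ *			&anDone) == MTD_OK && anDone &&
+ *	    mtdGetLPAdvertisedPause(devPtr, port, &lpPause) == MTD_OK)
+ *		... lpPause holds the link partner's pause bits ...
+ */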
+
+
+/******************************************************************************
+MTD_STATUS mtdGetPhyRevision
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ OUT MTD_DEVICE_ID *phyRev,
+ OUT MTD_U8 *numPorts,
+ OUT MTD_U8 *thisPort
+);
+
+ Inputs:
+ devPtr - pointer to MTD_DEV initialized by mtdLoadDriver() call
+ port - MDIO port address, 0-31
+
+ Outputs:
+ phyRev - revision of this chip, see MTD_DEVICE_ID definition for
+ a list of chip revisions with different options
+ numPorts - number of ports on this chip (see note below)
+ thisPort - this port number 0-1, or 0-4
+
+ Returns:
+ MTD_OK if query was successful, MTD_FAIL if not.
+
+ Will return MTD_FAIL on an unsupported PHY (but will attempt to
+ return correct version). See below for a list of unsupported PHYs.
+
+ Description:
+ Determines the PHY revision and returns the value in phyRev.
+ See definition of MTD_DEVICE_ID for a list of available
+ devices and capabilities.
+
+ Side effects:
+ None.
+
+ Notes/Warnings:
+    The phyRev can be used to determine the PHY revision, the number
+    of ports, which port this is from the PHY's perspective
+    (0-based indexing 0...3 or 0..2) and what capabilities
+    the PHY has.
+
+ If phyRev is MTD_REV_UNKNOWN, numPorts and thisPort will be returned
+ as 0 and the function will return MTD_FAIL.
+
+ If T-unit is in download mode, thisPort will be returned as 0.
+
+    88X33X0 Z1/Z2 and E20X0 Z2 are not supported starting with API
+    version 1.2.
+
+******************************************************************************/
+MTD_STATUS mtdGetPhyRevision
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ OUT MTD_DEVICE_ID *phyRev,
+ OUT MTD_U8 *numPorts,
+ OUT MTD_U8 *thisPort
+);
+
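+/* Illustrative usage sketch, validating the result with
+ * mtdIsPhyRevisionValid() (declared at the end of this header):
+ *
+ *	MTD_DEVICE_ID phyRev;
+ *	MTD_U8 numPorts, thisPort;
+ *
+ *	if (mtdGetPhyRevision(devPtr, port,
+ *			&phyRev, &numPorts, &thisPort) == MTD_OK &&
+ *	    mtdIsPhyRevisionValid(phyRev) == MTD_OK)
+ *		... numPorts and thisPort can be trusted here ...
+ */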
+
+
+/*****************************************************************************
+MTD_STATUS mtdGetForcedSpeed
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ OUT MTD_BOOL *speedIsForced,
+ OUT MTD_U16 *forcedSpeed
+);
+
+ Inputs:
+ devPtr - pointer to MTD_DEV initialized by mtdLoadDriver() call
+ port - MDIO port address, 0-31
+
+ Outputs:
+    speedIsForced - MTD_TRUE if AN is disabled (7.0.12 == 0) AND
+        the speed in 1.0.13/6 is set to 10BT or 100BT (speeds which do
+        not require AN to train).
+ forcedSpeed - one of the following if speedIsForced is MTD_TRUE
+ MTD_SPEED_10M_HD_AN_DIS - speed forced to 10BT half-duplex
+ MTD_SPEED_10M_FD_AN_DIS - speed forced to 10BT full-duplex
+ MTD_SPEED_100M_HD_AN_DIS - speed forced to 100BT half-duplex
+ MTD_SPEED_100M_FD_AN_DIS - speed forced to 100BT full-duplex
+
+ Returns:
+ MTD_OK if the query was successful, or MTD_FAIL if not
+
+ Description:
+ Checks if AN is disabled (7.0.12=0) and if the speed select in
+ register 1.0.13 and 1.0.6 is set to either 10BT or 100BT speeds. If
+ all of this is true, returns MTD_TRUE in speedIsForced along with
+    the speed/duplex setting in forcedSpeed. If any of this is
+    false (AN is enabled, or the speed is set to 1000BT or 10GBT), then
+    speedIsForced is returned MTD_FALSE and the forcedSpeed value
+    is invalid.
+
+ Notes/Warnings:
+ None.
+
+******************************************************************************/
+MTD_STATUS mtdGetForcedSpeed
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ OUT MTD_BOOL *speedIsForced,
+ OUT MTD_U16 *forcedSpeed
+);
+
+
+/*****************************************************************************
+MTD_STATUS mtdUndoForcedSpeed
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_BOOL anRestart
+);
+
+ Inputs:
+ devPtr - pointer to MTD_DEV initialized by mtdLoadDriver() call
+ port - MDIO port address, 0-31
+ anRestart - this takes the value of MTD_TRUE or MTD_FALSE and indicates
+ if auto-negotiation should be restarted following the speed
+ enable change. If this is MTD_FALSE, the change will not
+ take effect until AN is restarted in some other way (link
+ drop, toggle low power, toggle AN enable, toggle soft reset).
+
+ If this is MTD_TRUE and AN has been disabled, it will be
+ enabled before being restarted.
+
+ Outputs:
+ None
+
+ Returns:
+ MTD_OK if the change was successful, or MTD_FAIL if not
+
+ Description:
+ Sets the speed bits in 1.0 back to the power-on default of 11b
+ (10GBASE-T). Enables auto-negotiation.
+
+    Does a software reset of the T unit and waits until it is complete before
+ enabling AN and returning.
+
+ Notes/Warnings:
+ None.
+
+******************************************************************************/
+MTD_STATUS mtdUndoForcedSpeed
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port,
+ IN MTD_BOOL anRestart
+);
+
+
+/******************************************************************************
+ MTD_STATUS mtdAutonegEnable
+ (
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port
+ );
+
+ Inputs:
+ devPtr - pointer to MTD_DEV initialized by mtdLoadDriver() call
+ port - MDIO port address, 0-31
+
+ Outputs:
+ None
+
+ Returns:
+ MTD_OK or MTD_FAIL, if action was successful or not
+
+ Description:
+ Re-enables auto-negotiation.
+
+ Side effects:
+    None.
+
+ Notes/Warnings:
+    Restarting auto-negotiation will have no effect if AN is disabled.
+
+******************************************************************************/
+MTD_STATUS mtdAutonegEnable
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port
+);
+
+
+
+/******************************************************************************
+ MTD_STATUS mtdAutonegRestart
+ (
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port
+ );
+
+ Inputs:
+ devPtr - pointer to MTD_DEV initialized by mtdLoadDriver() call
+ port - MDIO port address, 0-31
+
+ Outputs:
+ None
+
+ Returns:
+ MTD_OK or MTD_FAIL, depending on if action was successful
+
+ Description:
+ Restarts auto-negotiation. The bit is self-clearing. If the link is up,
+ the link will drop and auto-negotiation will start again.
+
+ Side effects:
+ None.
+
+ Notes/Warnings:
+ Restarting auto-negotiation will have no effect if auto-negotiation is
+ disabled.
+
+ This function is important as it is necessary to restart auto-negotiation
+ after changing many auto-negotiation settings before the changes will take
+ effect.
+
+******************************************************************************/
+MTD_STATUS mtdAutonegRestart
+(
+ IN MTD_DEV_PTR devPtr,
+ IN MTD_U16 port
+);
+
+
+
+/******************************************************************************
+MTD_STATUS mtdIsPhyRevisionValid
+(
+ IN MTD_DEVICE_ID phyRev
+);
+
+
+ Inputs:
+ phyRev - a revision id to be checked against MTD_DEVICE_ID type
+
+ Outputs:
+ None
+
+ Returns:
+ MTD_OK if phyRev is a valid revision, MTD_FAIL otherwise
+
+ Description:
+ Takes phyRev and returns MTD_OK if it is one of the MTD_DEVICE_ID
+ type, otherwise returns MTD_FAIL.
+
+ Side effects:
+ None.
+
+ Notes/Warnings:
+ None
+
+******************************************************************************/
+MTD_STATUS mtdIsPhyRevisionValid
+(
+ IN MTD_DEVICE_ID phyRev
+);
+
+#if C_LINKAGE
+#if defined __cplusplus
+}
+#endif
+#endif
+
+#endif /* _TXGBE_MTD_H_ */
diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe_param.c b/drivers/net/ethernet/netswift/txgbe/txgbe_param.c
new file mode 100644
index 0000000000000..214993fb1a9b9
--- /dev/null
+++ b/drivers/net/ethernet/netswift/txgbe/txgbe_param.c
@@ -0,0 +1,1191 @@
+/*
+ * WangXun 10 Gigabit PCI Express Linux driver
+ * Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * based on ixgbe_param.c, Copyright(c) 1999 - 2017 Intel Corporation.
+ * Contact Information:
+ * Linux NICS <linux.nics(a)intel.com>
+ * e1000-devel Mailing List <e1000-devel(a)lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+
+#include <linux/types.h>
+#include <linux/module.h>
+
+#include "txgbe.h"
+
+/* This is the only thing that needs to be changed to adjust the
+ * maximum number of ports that the driver can manage.
+ */
+#define TXGBE_MAX_NIC 32
+#define OPTION_UNSET -1
+#define OPTION_DISABLED 0
+#define OPTION_ENABLED 1
+
+#define STRINGIFY(foo) #foo /* magic for getting defines into strings */
+#define XSTRINGIFY(bar) STRINGIFY(bar)
+
+#define TXGBE_PARAM_INIT { [0 ... TXGBE_MAX_NIC] = OPTION_UNSET }
+
+#define TXGBE_PARAM(X, desc) \
+ static int X[TXGBE_MAX_NIC+1] = TXGBE_PARAM_INIT; \
+ static unsigned int num_##X; \
+ module_param_array(X, int, &num_##X, 0); \
+ MODULE_PARM_DESC(X, desc);
+
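+/* For reference, each TXGBE_PARAM(X, ...) above expands to a per-board
+ * array X[] plus a count num_X, so one value can be passed per NIC on the
+ * module command line. A sketch with two boards (option names as defined
+ * below):
+ *
+ *	modprobe txgbe IntMode=2,1 RSS=4,4
+ *
+ * gives board 0 MSI-X and board 1 MSI, each with 4 RSS queues; boards left
+ * unset stay at OPTION_UNSET (-1) and fall back to the driver defaults.
+ */
+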
+/* ffe_main (KR/KX4/KX/SFI)
+ *
+ * Valid Range: 0-60
+ *
+ * Default Value: 27
+ */
+TXGBE_PARAM(FFE_MAIN,
+	    "TX_EQ MAIN (0 - 60)");
+#define TXGBE_DEFAULT_FFE_MAIN 27
+
+/* ffe_pre
+ *
+ * Valid Range: 0-60
+ *
+ * Default Value: 8
+ */
+
+TXGBE_PARAM(FFE_PRE,
+	    "TX_EQ PRE (0 - 60)");
+#define TXGBE_DEFAULT_FFE_PRE 8
+
+/* ffe_post
+ *
+ * Valid Range: 0-60
+ *
+ * Default Value: 44
+ */
+
+TXGBE_PARAM(FFE_POST,
+	    "TX_EQ POST (0 - 60)");
+#define TXGBE_DEFAULT_FFE_POST 44
+
+/* ffe_set
+ *
+ * Valid Range: 0-4
+ *
+ * Default Value: 0
+ */
+
+TXGBE_PARAM(FFE_SET,
+ "TX_EQ SET must choose to take effect (0 = NULL, 1 = sfi, 2 = kr, 3 = kx4, 4 = kx)");
+#define TXGBE_DEFAULT_FFE_SET 0
+
+/* backplane_mode
+ *
+ * Valid Range: 0-4
+ * - 0 - NULL
+ * - 1 - sfi
+ * - 2 - kr
+ * - 3 - kx4
+ * - 4 - kx
+ *
+ * Default Value: 0
+ */
+
+TXGBE_PARAM(backplane_mode,
+ "Backplane Mode Support(0 = NULL, 1 = sfi, 2 = kr, 3 = kx4, 4 = kx)");
+
+#define TXGBE_BP_NULL 0
+#define TXGBE_BP_SFI 1
+#define TXGBE_BP_KR 2
+#define TXGBE_BP_KX4 3
+#define TXGBE_BP_KX 4
+#define TXGBE_DEFAULT_BP_MODE TXGBE_BP_NULL
+
+/* backplane_auto
+ *
+ * Valid Range: 0-1
+ * - 0 - NO AUTO
+ * - 1 - AUTO
+ * Default Value: 0
+ */
+
+TXGBE_PARAM(backplane_auto,
+ "Backplane AUTO mode (0 = NO AUTO, 1 = AUTO)");
+
+#define TXGBE_BP_NAUTO 0
+#define TXGBE_BP_AUTO 1
+#define TXGBE_DEFAULT_BP_AUTO -1
+
+/* vf_alloc_mode (VF Alloc Mode)
+ *
+ * Valid Range: 0-2
+ * - 0 - 2 * 64
+ * - 1 - 4 * 32
+ * - 2 - 8 * 16
+ *
+ * Default Value: 2
+ */
+
+TXGBE_PARAM(vf_alloc_mode,
+ "Change VF Alloc Mode (0 = 2*64, 1 = 4*32, 2 = 8*16)");
+
+#define TXGBE_2Q 0
+#define TXGBE_4Q 1
+#define TXGBE_8Q 2
+#define TXGBE_DEFAULT_NUMQ TXGBE_2Q
+
+/* IntMode (Interrupt Mode)
+ *
+ * Valid Range: 0-2
+ * - 0 - Legacy Interrupt
+ * - 1 - MSI Interrupt
+ * - 2 - MSI-X Interrupt(s)
+ *
+ * Default Value: 2
+ */
+
+TXGBE_PARAM(InterruptType,
+ "Change Interrupt Mode (0=Legacy, 1=MSI, 2=MSI-X), "
+ "default IntMode (deprecated)");
+
+TXGBE_PARAM(IntMode,
+ "Change Interrupt Mode (0=Legacy, 1=MSI, 2=MSI-X), "
+ "default 2");
+
+#define TXGBE_INT_LEGACY 0
+#define TXGBE_INT_MSI 1
+#define TXGBE_INT_MSIX 2
+#define TXGBE_DEFAULT_INT TXGBE_INT_MSIX
+
+/* MQ - Multiple Queue enable/disable
+ *
+ * Valid Range: 0, 1
+ * - 0 - disables MQ
+ * - 1 - enables MQ
+ *
+ * Default Value: 1
+ */
+
+TXGBE_PARAM(MQ,
+ "Disable or enable Multiple Queues, default 1");
+
+/* RSS - Receive-Side Scaling (RSS) Descriptor Queues
+ *
+ * Valid Range: 0-64
+ * - 0 - enables RSS and sets the Desc. Q's to min(64, num_online_cpus()).
+ * - 1-64 - enables RSS and sets the Desc. Q's to the specified value.
+ *
+ * Default Value: 0
+ */
+
+TXGBE_PARAM(RSS,
+ "Number of Receive-Side Scaling Descriptor Queues, "
+ "default 0=number of cpus");
+
+/* VMDQ - Virtual Machine Device Queues (VMDQ)
+ *
+ * Valid Range: 0-16
+ *  - 0/1 - Disables VMDQ by allocating only a single queue.
+ *  - 2-16 - enables VMDQ and sets the Desc. Q's to the specified value.
+ *
+ * Default Value: 1
+ */
+
+#define TXGBE_DEFAULT_NUM_VMDQ 8
+
+TXGBE_PARAM(VMDQ,
+ "Number of Virtual Machine Device Queues: 0/1 = disable, "
+ "2-16 enable (default=" XSTRINGIFY(TXGBE_DEFAULT_NUM_VMDQ) ")");
+
+/* Interrupt Throttle Rate (interrupts/sec)
+ *
+ * Valid Range: 980-500000 (0=off, 1=dynamic)
+ *
+ * Default Value: 1
+ */
+#define DEFAULT_ITR 1
+TXGBE_PARAM(InterruptThrottleRate,
+ "Maximum interrupts per second, per vector, "
+ "(0,1,980-500000), default 1");
+
+#define MAX_ITR TXGBE_MAX_INT_RATE
+#define MIN_ITR TXGBE_MIN_INT_RATE
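+
+/* The validated rate is converted in txgbe_check_options() below to the
+ * value programmed into the adapter as (1000000 / itr) << 2: microseconds
+ * between interrupts, shifted left because the low two bits are reserved
+ * as control bits. For example, InterruptThrottleRate=10000 stores
+ * (1000000 / 10000) << 2 = 400, i.e. a 100 us interval.
+ */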
+
+/* LLIPort (Low Latency Interrupt TCP Port)
+ *
+ * Valid Range: 0 - 65535
+ *
+ * Default Value: 0 (disabled)
+ */
+TXGBE_PARAM(LLIPort,
+ "Low Latency Interrupt TCP Port (0-65535)");
+
+#define DEFAULT_LLIPORT 0
+#define MAX_LLIPORT 0xFFFF
+#define MIN_LLIPORT 0
+
+/* LLISize (Low Latency Interrupt on Packet Size)
+ *
+ * Valid Range: 0 - 1500
+ *
+ * Default Value: 0 (disabled)
+ */
+
+TXGBE_PARAM(LLISize,
+ "Low Latency Interrupt on Packet Size (0-1500)");
+
+#define DEFAULT_LLISIZE 0
+#define MAX_LLISIZE 1500
+#define MIN_LLISIZE 0
+
+/* LLIEType (Low Latency Interrupt Ethernet Type)
+ *
+ * Valid Range: 0 - 0x8fff
+ *
+ * Default Value: 0 (disabled)
+ */
+
+TXGBE_PARAM(LLIEType,
+ "Low Latency Interrupt Ethernet Protocol Type");
+
+#define DEFAULT_LLIETYPE 0
+#define MAX_LLIETYPE 0x8fff
+#define MIN_LLIETYPE 0
+
+/* LLIVLANP (Low Latency Interrupt on VLAN priority threshold)
+ *
+ * Valid Range: 0 - 7
+ *
+ * Default Value: 0 (disabled)
+ */
+
+TXGBE_PARAM(LLIVLANP,
+ "Low Latency Interrupt on VLAN priority threshold");
+
+#define DEFAULT_LLIVLANP 0
+#define MAX_LLIVLANP 7
+#define MIN_LLIVLANP 0
+
+/* Flow Director packet buffer allocation level
+ *
+ * Valid Range: 1-3
+ * 1 = 8k hash/2k perfect,
+ * 2 = 16k hash/4k perfect,
+ * 3 = 32k hash/8k perfect
+ *
+ * Default Value: 0
+ */
+
+TXGBE_PARAM(FdirPballoc,
+ "Flow Director packet buffer allocation level:\n"
+ "\t\t\t1 = 8k hash filters or 2k perfect filters\n"
+ "\t\t\t2 = 16k hash filters or 4k perfect filters\n"
+ "\t\t\t3 = 32k hash filters or 8k perfect filters");
+
+#define TXGBE_DEFAULT_FDIR_PBALLOC TXGBE_FDIR_PBALLOC_64K
+
+/* Software ATR packet sample rate
+ *
+ * Valid Range: 0-255 0 = off, 1-255 = rate of Tx packet inspection
+ *
+ * Default Value: 20
+ */
+
+TXGBE_PARAM(AtrSampleRate,
+ "Software ATR Tx packet sample rate");
+
+#define TXGBE_MAX_ATR_SAMPLE_RATE 255
+#define TXGBE_MIN_ATR_SAMPLE_RATE 1
+#define TXGBE_ATR_SAMPLE_RATE_OFF 0
+#define TXGBE_DEFAULT_ATR_SAMPLE_RATE 20
+
+/* Enable/disable Large Receive Offload
+ *
+ * Valid Values: 0(off), 1(on)
+ *
+ * Default Value: 1
+ */
+
+TXGBE_PARAM(LRO,
+ "Large Receive Offload (0,1), default 1 = on");
+
+/* Enable/disable support for untested SFP+ modules on adapters
+ *
+ * Valid Values: 0(Disable), 1(Enable)
+ *
+ * Default Value: 0
+ */
+
+TXGBE_PARAM(allow_unsupported_sfp,
+ "Allow unsupported and untested "
+ "SFP+ modules on adapters, default 0 = Disable");
+
+/* Enable/disable support for DMA coalescing
+ *
+ * Valid Values: 0(off), 41 - 10000(on)
+ *
+ * Default Value: 0
+ */
+
+TXGBE_PARAM(dmac_watchdog,
+ "DMA coalescing watchdog in microseconds (0,41-10000),"
+ "default 0 = off");
+
+/* Enable/disable support for VXLAN rx checksum offload
+ *
+ * Valid Values: 0(Disable), 1(Enable)
+ *
+ * Default Value: 1 on hardware that supports it
+ */
+
+TXGBE_PARAM(vxlan_rx,
+ "VXLAN receive checksum offload (0,1), default 1 = Enable");
+
+/* Rx buffer mode
+ *
+ * Valid Range: 0-1 0 = no header split, 1 = hdr split
+ *
+ * Default Value: 0
+ */
+
+TXGBE_PARAM(RxBufferMode,
+ "0=(default)no header split\n"
+ "\t\t\t1=hdr split for recognized packet\n");
+
+#define TXGBE_RXBUFMODE_NO_HEADER_SPLIT 0
+#define TXGBE_RXBUFMODE_HEADER_SPLIT 1
+#define TXGBE_DEFAULT_RXBUFMODE TXGBE_RXBUFMODE_NO_HEADER_SPLIT
+
+/* Cloud Switch mode
+ *
+ * Valid Range: 0-1 0 = disable Cloud Switch, 1 = enable Cloud Switch
+ *
+ * Default Value: 0
+ */
+
+TXGBE_PARAM(CloudSwitch,
+ "Cloud Switch (0,1), default 0 = disable, 1 = enable");
+
+struct txgbe_option {
+ enum { enable_option, range_option, list_option } type;
+ const char *name;
+ const char *err;
+ const char *msg;
+ int def;
+ union {
+ struct { /* range_option info */
+ int min;
+ int max;
+ } r;
+ struct { /* list_option info */
+ int nr;
+ const struct txgbe_opt_list {
+ int i;
+ char *str;
+ } *p;
+ } l;
+ } arg;
+};
+
+static int txgbe_validate_option(u32 *value,
+ struct txgbe_option *opt)
+{
+ int val = (int)*value;
+
+ if (val == OPTION_UNSET) {
+ txgbe_info("txgbe: Invalid %s specified (%d), %s\n",
+ opt->name, val, opt->err);
+ *value = (u32)opt->def;
+ return 0;
+ }
+
+ switch (opt->type) {
+ case enable_option:
+ switch (val) {
+ case OPTION_ENABLED:
+ txgbe_info("txgbe: %s Enabled\n", opt->name);
+ return 0;
+ case OPTION_DISABLED:
+ txgbe_info("txgbe: %s Disabled\n", opt->name);
+ return 0;
+ }
+ break;
+ case range_option:
+ if ((val >= opt->arg.r.min && val <= opt->arg.r.max) ||
+ val == opt->def) {
+ if (opt->msg)
+ txgbe_info("txgbe: %s set to %d, %s\n",
+ opt->name, val, opt->msg);
+ else
+ txgbe_info("txgbe: %s set to %d\n",
+ opt->name, val);
+ return 0;
+ }
+ break;
+ case list_option: {
+ int i;
+ const struct txgbe_opt_list *ent;
+
+ for (i = 0; i < opt->arg.l.nr; i++) {
+ ent = &opt->arg.l.p[i];
+ if (val == ent->i) {
+ if (ent->str[0] != '\0')
+ txgbe_info("%s\n", ent->str);
+ return 0;
+ }
+ }
+ }
+ break;
+ default:
+ BUG_ON(1);
+ }
+
+ txgbe_info("txgbe: Invalid %s specified (%d), %s\n",
+ opt->name, val, opt->err);
+ *value = (u32)opt->def;
+ return -1;
+}
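+
+/* Minimal usage sketch (hypothetical option, mirroring the blocks in
+ * txgbe_check_options() below); an out-of-range value is logged, replaced
+ * with the default, and -1 is returned:
+ *
+ *	static struct txgbe_option opt = {
+ *		.type = range_option,
+ *		.name = "Example",
+ *		.err  = "using default of 1",
+ *		.def  = 1,
+ *		.arg  = { .r = { .min = 0, .max = 8 } }
+ *	};
+ *	u32 val = 100;
+ *
+ *	txgbe_validate_option(&val, &opt);   val is now 1
+ */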
+
+/**
+ * txgbe_check_options - Range Checking for Command Line Parameters
+ * @adapter: board private structure
+ *
+ * This routine checks all command line parameters for valid user
+ * input. If an invalid value is given, or if no user specified
+ * value exists, a default value is used. The final value is stored
+ * in a variable in the adapter structure.
+ **/
+void txgbe_check_options(struct txgbe_adapter *adapter)
+{
+ u32 bd = adapter->bd_number;
+ u32 *aflags = &adapter->flags;
+ struct txgbe_ring_feature *feature = adapter->ring_feature;
+ u32 vmdq;
+
+ if (bd >= TXGBE_MAX_NIC) {
+ txgbe_notice(
+ "Warning: no configuration for board #%d\n", bd);
+ txgbe_notice("Using defaults for all values\n");
+ }
+ { /* MAIN */
+ u32 ffe_main;
+ static struct txgbe_option opt = {
+ .type = range_option,
+ .name = "FFE_MAIN",
+ .err =
+ "using default of "__MODULE_STRING(TXGBE_DEFAULT_FFE_MAIN),
+ .def = TXGBE_DEFAULT_FFE_MAIN,
+ .arg = { .r = { .min = 0,
+ .max = 60} }
+ };
+
+ if (num_FFE_MAIN > bd) {
+ ffe_main = FFE_MAIN[bd];
+ if (ffe_main == OPTION_UNSET)
+ ffe_main = FFE_MAIN[bd];
+ txgbe_validate_option(&ffe_main, &opt);
+ adapter->ffe_main = ffe_main;
+ } else {
+			adapter->ffe_main = TXGBE_DEFAULT_FFE_MAIN;
+ }
+ }
+
+ { /* PRE */
+ u32 ffe_pre;
+ static struct txgbe_option opt = {
+ .type = range_option,
+ .name = "FFE_PRE",
+ .err =
+ "using default of "__MODULE_STRING(TXGBE_DEFAULT_FFE_PRE),
+ .def = TXGBE_DEFAULT_FFE_PRE,
+ .arg = { .r = { .min = 0,
+ .max = 60} }
+ };
+
+ if (num_FFE_PRE > bd) {
+ ffe_pre = FFE_PRE[bd];
+ if (ffe_pre == OPTION_UNSET)
+ ffe_pre = FFE_PRE[bd];
+ txgbe_validate_option(&ffe_pre, &opt);
+ adapter->ffe_pre = ffe_pre;
+ } else {
+			adapter->ffe_pre = TXGBE_DEFAULT_FFE_PRE;
+ }
+ }
+
+ { /* POST */
+ u32 ffe_post;
+ static struct txgbe_option opt = {
+ .type = range_option,
+ .name = "FFE_POST",
+ .err =
+ "using default of "__MODULE_STRING(TXGBE_DEFAULT_FFE_POST),
+ .def = TXGBE_DEFAULT_FFE_POST,
+ .arg = { .r = { .min = 0,
+ .max = 60} }
+ };
+
+ if (num_FFE_POST > bd) {
+ ffe_post = FFE_POST[bd];
+ if (ffe_post == OPTION_UNSET)
+ ffe_post = FFE_POST[bd];
+ txgbe_validate_option(&ffe_post, &opt);
+ adapter->ffe_post = ffe_post;
+ } else {
+			adapter->ffe_post = TXGBE_DEFAULT_FFE_POST;
+ }
+ }
+
+ { /* ffe_set */
+ u32 ffe_set;
+ static struct txgbe_option opt = {
+ .type = range_option,
+ .name = "FFE_SET",
+ .err =
+ "using default of "__MODULE_STRING(TXGBE_DEFAULT_FFE_SET),
+ .def = TXGBE_DEFAULT_FFE_SET,
+ .arg = { .r = { .min = 0,
+ .max = 4} }
+ };
+
+ if (num_FFE_SET > bd) {
+ ffe_set = FFE_SET[bd];
+ if (ffe_set == OPTION_UNSET)
+ ffe_set = FFE_SET[bd];
+ txgbe_validate_option(&ffe_set, &opt);
+ adapter->ffe_set = ffe_set;
+ } else {
+			adapter->ffe_set = TXGBE_DEFAULT_FFE_SET;
+ }
+ }
+
+ { /* backplane_mode */
+ u32 bp_mode;
+ static struct txgbe_option opt = {
+ .type = range_option,
+ .name = "backplane_mode",
+ .err =
+ "using default of "__MODULE_STRING(TXGBE_DEFAULT_BP_MODE),
+ .def = TXGBE_DEFAULT_BP_MODE,
+ .arg = { .r = { .min = 0,
+ .max = 4} }
+ };
+
+ if (num_backplane_mode > bd) {
+ bp_mode = backplane_mode[bd];
+ if (bp_mode == OPTION_UNSET)
+ bp_mode = backplane_mode[bd];
+ txgbe_validate_option(&bp_mode, &opt);
+ adapter->backplane_mode = bp_mode;
+ } else {
+			adapter->backplane_mode = TXGBE_DEFAULT_BP_MODE;
+ }
+ }
+
+ { /* auto mode */
+ u32 bp_auto;
+ static struct txgbe_option opt = {
+ .type = range_option,
+ .name = "bp_auto",
+ .err =
+ "using default of "__MODULE_STRING(TXGBE_DEFAULT_BP_AUTO),
+ .def = TXGBE_DEFAULT_BP_AUTO,
+ .arg = { .r = { .min = 0,
+ .max = 2} }
+ };
+
+ if (num_backplane_auto > bd) {
+ bp_auto = backplane_auto[bd];
+ if (bp_auto == OPTION_UNSET)
+ bp_auto = backplane_auto[bd];
+ txgbe_validate_option(&bp_auto, &opt);
+ adapter->backplane_auto = bp_auto;
+ } else {
+			adapter->backplane_auto = TXGBE_DEFAULT_BP_AUTO;
+ }
+ }
+
+ { /* VF_alloc_mode */
+ u32 vf_mode;
+ static struct txgbe_option opt = {
+ .type = range_option,
+ .name = "vf_alloc_mode",
+ .err =
+ "using default of "__MODULE_STRING(TXGBE_DEFAULT_NUMQ),
+ .def = TXGBE_DEFAULT_NUMQ,
+ .arg = { .r = { .min = TXGBE_2Q,
+ .max = TXGBE_8Q} }
+ };
+
+ if (num_vf_alloc_mode > bd) {
+ vf_mode = vf_alloc_mode[bd];
+ if (vf_mode == OPTION_UNSET)
+ vf_mode = vf_alloc_mode[bd];
+ txgbe_validate_option(&vf_mode, &opt);
+ switch (vf_mode) {
+ case TXGBE_8Q:
+ adapter->vf_mode = 15;
+ break;
+ case TXGBE_4Q:
+ adapter->vf_mode = 31;
+ break;
+ case TXGBE_2Q:
+ default:
+ adapter->vf_mode = 63;
+ break;
+ }
+ } else {
+ adapter->vf_mode = 63;
+ }
+ }
+ { /* Interrupt Mode */
+ u32 int_mode;
+ static struct txgbe_option opt = {
+ .type = range_option,
+ .name = "Interrupt Mode",
+ .err =
+ "using default of "__MODULE_STRING(TXGBE_DEFAULT_INT),
+ .def = TXGBE_DEFAULT_INT,
+ .arg = { .r = { .min = TXGBE_INT_LEGACY,
+ .max = TXGBE_INT_MSIX} }
+ };
+
+ if (num_IntMode > bd || num_InterruptType > bd) {
+ int_mode = IntMode[bd];
+ if (int_mode == OPTION_UNSET)
+ int_mode = InterruptType[bd];
+ txgbe_validate_option(&int_mode, &opt);
+ switch (int_mode) {
+ case TXGBE_INT_MSIX:
+ if (!(*aflags & TXGBE_FLAG_MSIX_CAPABLE))
+ txgbe_info(
+ "Ignoring MSI-X setting; "
+ "support unavailable\n");
+ break;
+ case TXGBE_INT_MSI:
+ if (!(*aflags & TXGBE_FLAG_MSI_CAPABLE)) {
+ txgbe_info(
+ "Ignoring MSI setting; "
+ "support unavailable\n");
+ } else {
+ *aflags &= ~TXGBE_FLAG_MSIX_CAPABLE;
+ }
+ break;
+ case TXGBE_INT_LEGACY:
+ default:
+ *aflags &= ~TXGBE_FLAG_MSIX_CAPABLE;
+ *aflags &= ~TXGBE_FLAG_MSI_CAPABLE;
+ break;
+ }
+ } else {
+ /* default settings */
+ if (opt.def == TXGBE_INT_MSIX &&
+ *aflags & TXGBE_FLAG_MSIX_CAPABLE) {
+ *aflags |= TXGBE_FLAG_MSIX_CAPABLE;
+ *aflags |= TXGBE_FLAG_MSI_CAPABLE;
+ } else if (opt.def == TXGBE_INT_MSI &&
+ *aflags & TXGBE_FLAG_MSI_CAPABLE) {
+ *aflags &= ~TXGBE_FLAG_MSIX_CAPABLE;
+ *aflags |= TXGBE_FLAG_MSI_CAPABLE;
+ } else {
+ *aflags &= ~TXGBE_FLAG_MSIX_CAPABLE;
+ *aflags &= ~TXGBE_FLAG_MSI_CAPABLE;
+ }
+ }
+ }
+ { /* Multiple Queue Support */
+ static struct txgbe_option opt = {
+ .type = enable_option,
+ .name = "Multiple Queue Support",
+ .err = "defaulting to Enabled",
+ .def = OPTION_ENABLED
+ };
+
+ if (num_MQ > bd) {
+ u32 mq = MQ[bd];
+ txgbe_validate_option(&mq, &opt);
+ if (mq)
+ *aflags |= TXGBE_FLAG_MQ_CAPABLE;
+ else
+ *aflags &= ~TXGBE_FLAG_MQ_CAPABLE;
+ } else {
+ if (opt.def == OPTION_ENABLED)
+ *aflags |= TXGBE_FLAG_MQ_CAPABLE;
+ else
+ *aflags &= ~TXGBE_FLAG_MQ_CAPABLE;
+ }
+ /* Check Interoperability */
+ if ((*aflags & TXGBE_FLAG_MQ_CAPABLE) &&
+ !(*aflags & TXGBE_FLAG_MSIX_CAPABLE)) {
+ DPRINTK(PROBE, INFO,
+ "Multiple queues are not supported while MSI-X "
+ "is disabled. Disabling Multiple Queues.\n");
+ *aflags &= ~TXGBE_FLAG_MQ_CAPABLE;
+ }
+ }
+
+ { /* Receive-Side Scaling (RSS) */
+ static struct txgbe_option opt = {
+ .type = range_option,
+ .name = "Receive-Side Scaling (RSS)",
+ .err = "using default.",
+ .def = 0,
+ .arg = { .r = { .min = 0,
+ .max = 1} }
+ };
+ u32 rss = RSS[bd];
+ /* adjust Max allowed RSS queues based on MAC type */
+ opt.arg.r.max = txgbe_max_rss_indices(adapter);
+
+ if (num_RSS > bd) {
+ txgbe_validate_option(&rss, &opt);
+ /* base it off num_online_cpus() with hardware limit */
+ if (!rss)
+ rss = min_t(int, opt.arg.r.max,
+ num_online_cpus());
+ else
+ feature[RING_F_FDIR].limit = (u16)rss;
+
+ feature[RING_F_RSS].limit = (u16)rss;
+ } else if (opt.def == 0) {
+ rss = min_t(int, txgbe_max_rss_indices(adapter),
+ num_online_cpus());
+ feature[RING_F_RSS].limit = rss;
+ }
+ /* Check Interoperability */
+ if (rss > 1) {
+ if (!(*aflags & TXGBE_FLAG_MQ_CAPABLE)) {
+ DPRINTK(PROBE, INFO,
+ "Multiqueue is disabled. "
+ "Limiting RSS.\n");
+ feature[RING_F_RSS].limit = 1;
+ }
+ }
+ adapter->flags2 |= TXGBE_FLAG2_RSS_ENABLED;
+ }
+ { /* Virtual Machine Device Queues (VMDQ) */
+ static struct txgbe_option opt = {
+ .type = range_option,
+ .name = "Virtual Machine Device Queues (VMDQ)",
+ .err = "defaulting to Disabled",
+ .def = OPTION_DISABLED,
+ .arg = { .r = { .min = OPTION_DISABLED,
+ .max = TXGBE_MAX_VMDQ_INDICES
+ } }
+ };
+
+ if (num_VMDQ > bd) {
+ vmdq = VMDQ[bd];
+
+ txgbe_validate_option(&vmdq, &opt);
+
+ /* zero or one both mean disabled from our driver's
+ * perspective */
+ if (vmdq > 1) {
+ *aflags |= TXGBE_FLAG_VMDQ_ENABLED;
+ } else
+ *aflags &= ~TXGBE_FLAG_VMDQ_ENABLED;
+
+ feature[RING_F_VMDQ].limit = (u16)vmdq;
+ } else {
+ if (opt.def == OPTION_DISABLED)
+ *aflags &= ~TXGBE_FLAG_VMDQ_ENABLED;
+ else
+ *aflags |= TXGBE_FLAG_VMDQ_ENABLED;
+
+ feature[RING_F_VMDQ].limit = opt.def;
+ }
+ /* Check Interoperability */
+ if (*aflags & TXGBE_FLAG_VMDQ_ENABLED) {
+ if (!(*aflags & TXGBE_FLAG_MQ_CAPABLE)) {
+ DPRINTK(PROBE, INFO,
+ "VMDQ is not supported while multiple "
+ "queues are disabled. "
+ "Disabling VMDQ.\n");
+ *aflags &= ~TXGBE_FLAG_VMDQ_ENABLED;
+ feature[RING_F_VMDQ].limit = 0;
+ }
+ }
+ }
+
+ { /* Interrupt Throttling Rate */
+ static struct txgbe_option opt = {
+ .type = range_option,
+ .name = "Interrupt Throttling Rate (ints/sec)",
+ .err = "using default of "__MODULE_STRING(DEFAULT_ITR),
+ .def = DEFAULT_ITR,
+ .arg = { .r = { .min = MIN_ITR,
+ .max = MAX_ITR } }
+ };
+
+ if (num_InterruptThrottleRate > bd) {
+ u32 itr = InterruptThrottleRate[bd];
+ switch (itr) {
+ case 0:
+ DPRINTK(PROBE, INFO, "%s turned off\n",
+ opt.name);
+ adapter->rx_itr_setting = 0;
+ break;
+ case 1:
+ DPRINTK(PROBE, INFO, "dynamic interrupt "
+ "throttling enabled\n");
+ adapter->rx_itr_setting = 1;
+ break;
+ default:
+ txgbe_validate_option(&itr, &opt);
+ /* the first bit is used as control */
+ adapter->rx_itr_setting = (u16)((1000000/itr) << 2);
+ break;
+ }
+ adapter->tx_itr_setting = adapter->rx_itr_setting;
+ } else {
+ adapter->rx_itr_setting = opt.def;
+ adapter->tx_itr_setting = opt.def;
+ }
+ }
+
+ { /* Low Latency Interrupt TCP Port*/
+ static struct txgbe_option opt = {
+ .type = range_option,
+ .name = "Low Latency Interrupt TCP Port",
+ .err = "using default of "
+ __MODULE_STRING(DEFAULT_LLIPORT),
+ .def = DEFAULT_LLIPORT,
+ .arg = { .r = { .min = MIN_LLIPORT,
+ .max = MAX_LLIPORT } }
+ };
+
+ if (num_LLIPort > bd) {
+ adapter->lli_port = LLIPort[bd];
+ if (adapter->lli_port) {
+ txgbe_validate_option(&adapter->lli_port, &opt);
+ } else {
+ DPRINTK(PROBE, INFO, "%s turned off\n",
+ opt.name);
+ }
+ } else {
+ adapter->lli_port = opt.def;
+ }
+ }
+ { /* Low Latency Interrupt on Packet Size */
+ static struct txgbe_option opt = {
+ .type = range_option,
+ .name = "Low Latency Interrupt on Packet Size",
+ .err = "using default of "
+ __MODULE_STRING(DEFAULT_LLISIZE),
+ .def = DEFAULT_LLISIZE,
+ .arg = { .r = { .min = MIN_LLISIZE,
+ .max = MAX_LLISIZE } }
+ };
+
+ if (num_LLISize > bd) {
+ adapter->lli_size = LLISize[bd];
+ if (adapter->lli_size) {
+ txgbe_validate_option(&adapter->lli_size, &opt);
+ } else {
+ DPRINTK(PROBE, INFO, "%s turned off\n",
+ opt.name);
+ }
+ } else {
+ adapter->lli_size = opt.def;
+ }
+ }
+ { /* Low Latency Interrupt EtherType*/
+ static struct txgbe_option opt = {
+ .type = range_option,
+ .name = "Low Latency Interrupt on Ethernet Protocol "
+ "Type",
+ .err = "using default of "
+ __MODULE_STRING(DEFAULT_LLIETYPE),
+ .def = DEFAULT_LLIETYPE,
+ .arg = { .r = { .min = MIN_LLIETYPE,
+ .max = MAX_LLIETYPE } }
+ };
+
+ if (num_LLIEType > bd) {
+ adapter->lli_etype = LLIEType[bd];
+ if (adapter->lli_etype) {
+ txgbe_validate_option(&adapter->lli_etype,
+ &opt);
+ } else {
+ DPRINTK(PROBE, INFO, "%s turned off\n",
+ opt.name);
+ }
+ } else {
+ adapter->lli_etype = opt.def;
+ }
+ }
+ { /* LLI VLAN Priority */
+ static struct txgbe_option opt = {
+ .type = range_option,
+ .name = "Low Latency Interrupt on VLAN priority "
+ "threshold",
+ .err = "using default of "
+ __MODULE_STRING(DEFAULT_LLIVLANP),
+ .def = DEFAULT_LLIVLANP,
+ .arg = { .r = { .min = MIN_LLIVLANP,
+ .max = MAX_LLIVLANP } }
+ };
+
+ if (num_LLIVLANP > bd) {
+ adapter->lli_vlan_pri = LLIVLANP[bd];
+ if (adapter->lli_vlan_pri) {
+ txgbe_validate_option(&adapter->lli_vlan_pri,
+ &opt);
+ } else {
+ DPRINTK(PROBE, INFO, "%s turned off\n",
+ opt.name);
+ }
+ } else {
+ adapter->lli_vlan_pri = opt.def;
+ }
+ }
+
+ { /* Flow Director packet buffer allocation */
+ u32 fdir_pballoc_mode;
+ static struct txgbe_option opt = {
+ .type = range_option,
+ .name = "Flow Director packet buffer allocation",
+ .err = "using default of "
+ __MODULE_STRING(TXGBE_DEFAULT_FDIR_PBALLOC),
+ .def = TXGBE_DEFAULT_FDIR_PBALLOC,
+ .arg = {.r = {.min = TXGBE_FDIR_PBALLOC_64K,
+ .max = TXGBE_FDIR_PBALLOC_256K} }
+ };
+ const char *pstring;
+
+ if (num_FdirPballoc > bd) {
+ fdir_pballoc_mode = FdirPballoc[bd];
+ txgbe_validate_option(&fdir_pballoc_mode, &opt);
+ switch (fdir_pballoc_mode) {
+ case TXGBE_FDIR_PBALLOC_256K:
+ adapter->fdir_pballoc = TXGBE_FDIR_PBALLOC_256K;
+ pstring = "256kB";
+ break;
+ case TXGBE_FDIR_PBALLOC_128K:
+ adapter->fdir_pballoc = TXGBE_FDIR_PBALLOC_128K;
+ pstring = "128kB";
+ break;
+ case TXGBE_FDIR_PBALLOC_64K:
+ default:
+ adapter->fdir_pballoc = TXGBE_FDIR_PBALLOC_64K;
+ pstring = "64kB";
+ break;
+ }
+ DPRINTK(PROBE, INFO, "Flow Director will be allocated "
+ "%s of packet buffer\n", pstring);
+ } else {
+ adapter->fdir_pballoc = opt.def;
+ }
+
+ }
+ { /* Flow Director ATR Tx sample packet rate */
+ static struct txgbe_option opt = {
+ .type = range_option,
+ .name = "Software ATR Tx packet sample rate",
+ .err = "using default of "
+ __MODULE_STRING(TXGBE_DEFAULT_ATR_SAMPLE_RATE),
+ .def = TXGBE_DEFAULT_ATR_SAMPLE_RATE,
+ .arg = {.r = {.min = TXGBE_ATR_SAMPLE_RATE_OFF,
+ .max = TXGBE_MAX_ATR_SAMPLE_RATE} }
+ };
+ static const char atr_string[] =
+ "ATR Tx Packet sample rate set to";
+
+ if (num_AtrSampleRate > bd) {
+ adapter->atr_sample_rate = AtrSampleRate[bd];
+
+ if (adapter->atr_sample_rate) {
+ txgbe_validate_option(&adapter->atr_sample_rate,
+ &opt);
+ DPRINTK(PROBE, INFO, "%s %d\n", atr_string,
+ adapter->atr_sample_rate);
+ }
+ } else {
+ adapter->atr_sample_rate = opt.def;
+ }
+ }
+
+ { /* LRO - Set Large Receive Offload */
+ struct txgbe_option opt = {
+ .type = enable_option,
+ .name = "LRO - Large Receive Offload",
+ .err = "defaulting to Disabled",
+ .def = OPTION_ENABLED
+ };
+ struct net_device *netdev = adapter->netdev;
+
+ if (!(adapter->flags2 & TXGBE_FLAG2_RSC_CAPABLE))
+ opt.def = OPTION_DISABLED;
+ if (num_LRO > bd) {
+ u32 lro = LRO[bd];
+ txgbe_validate_option(&lro, &opt);
+ if (lro)
+ netdev->features |= NETIF_F_LRO;
+ else
+ netdev->features &= ~NETIF_F_LRO;
+ } else if (opt.def == OPTION_ENABLED) {
+ netdev->features |= NETIF_F_LRO;
+ } else {
+ netdev->features &= ~NETIF_F_LRO;
+ }
+
+ if ((netdev->features & NETIF_F_LRO) &&
+ !(adapter->flags2 & TXGBE_FLAG2_RSC_CAPABLE)) {
+ DPRINTK(PROBE, INFO,
+ "RSC is not supported on this "
+ "hardware. Disabling RSC.\n");
+ netdev->features &= ~NETIF_F_LRO;
+ }
+ }
+ { /*
+ * allow_unsupported_sfp - Enable/Disable support for unsupported
+ * and untested SFP+ modules.
+ */
+ struct txgbe_option opt = {
+ .type = enable_option,
+ .name = "allow_unsupported_sfp",
+ .err = "defaulting to Disabled",
+ .def = OPTION_DISABLED
+ };
+ if (num_allow_unsupported_sfp > bd) {
+ u32 enable_unsupported_sfp =
+ allow_unsupported_sfp[bd];
+ txgbe_validate_option(&enable_unsupported_sfp, &opt);
+ if (enable_unsupported_sfp) {
+ adapter->hw.allow_unsupported_sfp = true;
+ } else {
+ adapter->hw.allow_unsupported_sfp = false;
+ }
+ } else if (opt.def == OPTION_ENABLED) {
+ adapter->hw.allow_unsupported_sfp = true;
+ } else {
+ adapter->hw.allow_unsupported_sfp = false;
+ }
+ }
+
+ { /* DMA Coalescing */
+ struct txgbe_option opt = {
+ .type = range_option,
+ .name = "dmac_watchdog",
+ .err = "defaulting to 0 (disabled)",
+ .def = 0,
+ .arg = { .r = { .min = 41, .max = 10000 } },
+ };
+ const char *cmsg = "DMA coalescing not supported on this "
+ "hardware";
+
+ opt.err = cmsg;
+ opt.msg = cmsg;
+ opt.arg.r.min = 0;
+ opt.arg.r.max = 0;
+
+ if (num_dmac_watchdog > bd) {
+ u32 dmac_wd = dmac_watchdog[bd];
+
+ txgbe_validate_option(&dmac_wd, &opt);
+ adapter->hw.mac.dmac_config.watchdog_timer = (u16)dmac_wd;
+ } else {
+ adapter->hw.mac.dmac_config.watchdog_timer = opt.def;
+ }
+ }
+ { /* VXLAN rx offload */
+ struct txgbe_option opt = {
+ .type = range_option,
+ .name = "vxlan_rx",
+ .err = "defaulting to 1 (enabled)",
+ .def = 1,
+ .arg = { .r = { .min = 0, .max = 1 } },
+ };
+ const char *cmsg = "VXLAN rx offload not supported on this "
+ "hardware";
+ const u32 flag = TXGBE_FLAG_VXLAN_OFFLOAD_ENABLE;
+
+ if (!(adapter->flags & TXGBE_FLAG_VXLAN_OFFLOAD_CAPABLE)) {
+ opt.err = cmsg;
+ opt.msg = cmsg;
+ opt.def = 0;
+ opt.arg.r.max = 0;
+ }
+ if (num_vxlan_rx > bd) {
+ u32 enable_vxlan_rx = vxlan_rx[bd];
+
+ txgbe_validate_option(&enable_vxlan_rx, &opt);
+ if (enable_vxlan_rx)
+ adapter->flags |= flag;
+ else
+ adapter->flags &= ~flag;
+ } else if (opt.def) {
+ adapter->flags |= flag;
+ } else {
+ adapter->flags &= ~flag;
+ }
+ }
+
+ { /* Rx buffer mode */
+ u32 rx_buf_mode;
+ static struct txgbe_option opt = {
+ .type = range_option,
+ .name = "Rx buffer mode",
+ .err = "using default of "
+ __MODULE_STRING(TXGBE_DEFAULT_RXBUFMODE),
+ .def = TXGBE_DEFAULT_RXBUFMODE,
+ .arg = {.r = {.min = TXGBE_RXBUFMODE_NO_HEADER_SPLIT,
+ .max = TXGBE_RXBUFMODE_HEADER_SPLIT} }
+
+ };
+
+ if (num_RxBufferMode > bd) {
+ rx_buf_mode = RxBufferMode[bd];
+ txgbe_validate_option(&rx_buf_mode, &opt);
+ switch (rx_buf_mode) {
+ case TXGBE_RXBUFMODE_NO_HEADER_SPLIT:
+ *aflags &= ~TXGBE_FLAG_RX_HS_ENABLED;
+ break;
+ case TXGBE_RXBUFMODE_HEADER_SPLIT:
+ *aflags |= TXGBE_FLAG_RX_HS_ENABLED;
+ break;
+ default:
+ break;
+ }
+ } else {
+ *aflags &= ~TXGBE_FLAG_RX_HS_ENABLED;
+ }
+
+ }
+ { /* Cloud Switch */
+ struct txgbe_option opt = {
+ .type = range_option,
+ .name = "CloudSwitch",
+ .err = "defaulting to 0 (disabled)",
+ .def = 0,
+ .arg = { .r = { .min = 0, .max = 1 } },
+ };
+
+ if (num_CloudSwitch > bd) {
+ u32 enable_cloudswitch = CloudSwitch[bd];
+
+ txgbe_validate_option(&enable_cloudswitch, &opt);
+ if (enable_cloudswitch)
+ adapter->flags |=
+ TXGBE_FLAG2_CLOUD_SWITCH_ENABLED;
+ else
+ adapter->flags &=
+ ~TXGBE_FLAG2_CLOUD_SWITCH_ENABLED;
+ } else if (opt.def) {
+ adapter->flags |= TXGBE_FLAG2_CLOUD_SWITCH_ENABLED;
+ } else {
+ adapter->flags &= ~TXGBE_FLAG2_CLOUD_SWITCH_ENABLED;
+ }
+ }
+}
diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe_phy.c b/drivers/net/ethernet/netswift/txgbe/txgbe_phy.c
new file mode 100644
index 0000000000000..2db6541f95a18
--- /dev/null
+++ b/drivers/net/ethernet/netswift/txgbe/txgbe_phy.c
@@ -0,0 +1,1014 @@
+/*
+ * WangXun 10 Gigabit PCI Express Linux driver
+ * Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * based on ixgbe_phy.c, Copyright(c) 1999 - 2017 Intel Corporation.
+ * Contact Information:
+ * Linux NICS <linux.nics(a)intel.com>
+ * e1000-devel Mailing List <e1000-devel(a)lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+#include "txgbe_phy.h"
+#include "txgbe_mtd.h"
+
+/**
+ * txgbe_check_reset_blocked - check status of MNG FW veto bit
+ * @hw: pointer to the hardware structure
+ *
+ * This function checks the MMNGC.MNG_VETO bit to see if there are
+ * any constraints on link from manageability. For MACs that don't
+ * have this bit, just return false since the link cannot be blocked
+ * via this method.
+ **/
+s32 txgbe_check_reset_blocked(struct txgbe_hw *hw)
+{
+ u32 mmngc;
+
+ DEBUGFUNC("\n");
+
+ mmngc = rd32(hw, TXGBE_MIS_ST);
+ if (mmngc & TXGBE_MIS_ST_MNG_VETO) {
+ ERROR_REPORT1(TXGBE_ERROR_SOFTWARE,
+ "MNG_VETO bit detected.\n");
+ return true;
+ }
+
+ return false;
+}
+
+
+/**
+ * txgbe_get_phy_id - Get the PHY identifier
+ * @hw: pointer to hardware structure
+ *
+ **/
+s32 txgbe_get_phy_id(struct txgbe_hw *hw)
+{
+ u32 status;
+ u16 phy_id_high = 0;
+ u16 phy_id_low = 0;
+ u8 numport, thisport;
+ DEBUGFUNC("\n");
+
+ status = mtdHwXmdioRead(&hw->phy_dev, hw->phy.addr,
+ TXGBE_MDIO_PMA_PMD_DEV_TYPE,
+ TXGBE_MDIO_PHY_ID_HIGH, &phy_id_high);
+
+ if (status == 0) {
+ hw->phy.id = (u32)(phy_id_high << 16);
+ status = mtdHwXmdioRead(&hw->phy_dev, hw->phy.addr,
+ TXGBE_MDIO_PMA_PMD_DEV_TYPE,
+ TXGBE_MDIO_PHY_ID_LOW, &phy_id_low);
+ hw->phy.id |= (u32)(phy_id_low & TXGBE_PHY_REVISION_MASK);
+ }
+
+ if (status == 0) {
+ status = mtdGetPhyRevision(&hw->phy_dev, hw->phy.addr,
+ (MTD_DEVICE_ID *)&hw->phy.revision, &numport, &thisport);
+ if (status == MTD_FAIL) {
+ ERROR_REPORT1(TXGBE_ERROR_INVALID_STATE,
+ "Error in mtdGetPhyRevision()\n");
+ }
+ }
+ return status;
+}
+
+/**
+ * txgbe_get_phy_type_from_id - Get the PHY type from the PHY ID
+ * @hw: pointer to hardware structure
+ *
+ **/
+enum txgbe_phy_type txgbe_get_phy_type_from_id(struct txgbe_hw *hw)
+{
+ enum txgbe_phy_type phy_type;
+ u16 ext_ability = 0;
+
+ DEBUGFUNC("\n");
+
+ switch (hw->phy.id) {
+ case TN1010_PHY_ID:
+ phy_type = txgbe_phy_tn;
+ break;
+ case QT2022_PHY_ID:
+ phy_type = txgbe_phy_qt;
+ break;
+ case ATH_PHY_ID:
+ phy_type = txgbe_phy_nl;
+ break;
+ default:
+ phy_type = txgbe_phy_unknown;
+ break;
+ }
+ if (phy_type == txgbe_phy_unknown) {
+ mtdHwXmdioRead(&hw->phy_dev, hw->phy.addr,
+ TXGBE_MDIO_PMA_PMD_DEV_TYPE,
+ TXGBE_MDIO_PHY_EXT_ABILITY, &ext_ability);
+
+ if (ext_ability & (TXGBE_MDIO_PHY_10GBASET_ABILITY |
+ TXGBE_MDIO_PHY_1000BASET_ABILITY))
+ phy_type = txgbe_phy_cu_unknown;
+ else
+ phy_type = txgbe_phy_generic;
+ }
+ return phy_type;
+}
+
+/**
+ * txgbe_reset_phy - Performs a PHY reset
+ * @hw: pointer to hardware structure
+ **/
+s32 txgbe_reset_phy(struct txgbe_hw *hw)
+{
+ s32 status = 0;
+
+ DEBUGFUNC("\n");
+
+
+ if (status != 0 || hw->phy.type == txgbe_phy_none)
+ goto out;
+
+ /* Don't reset PHY if it's shut down due to overtemp. */
+ if (!hw->phy.reset_if_overtemp &&
+ (TXGBE_ERR_OVERTEMP == TCALL(hw, phy.ops.check_overtemp)))
+ goto out;
+
+ /* Blocked by MNG FW so bail */
+ txgbe_check_reset_blocked(hw);
+ if (((hw->subsystem_device_id & TXGBE_NCSI_MASK) == TXGBE_NCSI_SUP) ||
+ ((hw->subsystem_device_id & TXGBE_WOL_MASK) == TXGBE_WOL_SUP))
+ goto out;
+
+ status = mtdHardwareReset(&hw->phy_dev, hw->phy.addr, 1000);
+
+out:
+ return status;
+}
+
+/**
+ * txgbe_read_phy_reg_mdi - Reads a value from a specified PHY register without
+ * the SWFW lock
+ * @hw: pointer to hardware structure
+ * @reg_addr: 32 bit address of PHY register to read
+ * @device_type: 5 bit device type
+ * @phy_data: Pointer to read data from PHY register
+ **/
+s32 txgbe_read_phy_reg_mdi(struct txgbe_hw *hw, u32 reg_addr, u32 device_type,
+ u16 *phy_data)
+{
+ u32 command;
+ s32 status = 0;
+
+ /* setup and write the address cycle command */
+ command = TXGBE_MSCA_RA(reg_addr) |
+ TXGBE_MSCA_PA(hw->phy.addr) |
+ TXGBE_MSCA_DA(device_type);
+ wr32(hw, TXGBE_MSCA, command);
+
+ command = TXGBE_MSCC_CMD(TXGBE_MSCA_CMD_READ) | TXGBE_MSCC_BUSY;
+ wr32(hw, TXGBE_MSCC, command);
+
+ /* wait to complete */
+ status = po32m(hw, TXGBE_MSCC,
+ TXGBE_MSCC_BUSY, ~TXGBE_MSCC_BUSY,
+ TXGBE_MDIO_TIMEOUT, 10);
+ if (status != 0) {
+ ERROR_REPORT1(TXGBE_ERROR_POLLING,
+ "PHY address command did not complete.\n");
+ return TXGBE_ERR_PHY;
+ }
+
+ /* read data from MSCC */
+ *phy_data = 0xFFFF & rd32(hw, TXGBE_MSCC);
+
+ return 0;
+}
+
+/**
+ * txgbe_read_phy_reg - Reads a value from a specified PHY register
+ * using the SWFW lock - this function is needed in most cases
+ * @hw: pointer to hardware structure
+ * @reg_addr: 32 bit address of PHY register to read
+ * @device_type: 5 bit device type
+ * @phy_data: Pointer to read data from PHY register
+ **/
+s32 txgbe_read_phy_reg(struct txgbe_hw *hw, u32 reg_addr,
+ u32 device_type, u16 *phy_data)
+{
+ s32 status;
+ u32 gssr = hw->phy.phy_semaphore_mask;
+
+ DEBUGFUNC("\n");
+
+ if (0 == TCALL(hw, mac.ops.acquire_swfw_sync, gssr)) {
+ status = txgbe_read_phy_reg_mdi(hw, reg_addr, device_type,
+ phy_data);
+ TCALL(hw, mac.ops.release_swfw_sync, gssr);
+ } else {
+ status = TXGBE_ERR_SWFW_SYNC;
+ }
+
+ return status;
+}
+
+/**
+ * txgbe_write_phy_reg_mdi - Writes a value to specified PHY register
+ * without SWFW lock
+ * @hw: pointer to hardware structure
+ * @reg_addr: 32 bit PHY register to write
+ * @device_type: 5 bit device type
+ * @phy_data: Data to write to the PHY register
+ **/
+s32 txgbe_write_phy_reg_mdi(struct txgbe_hw *hw, u32 reg_addr,
+ u32 device_type, u16 phy_data)
+{
+ u32 command;
+ s32 status = 0;
+
+ /* setup and write the address cycle command */
+ command = TXGBE_MSCA_RA(reg_addr) |
+ TXGBE_MSCA_PA(hw->phy.addr) |
+ TXGBE_MSCA_DA(device_type);
+ wr32(hw, TXGBE_MSCA, command);
+
+ command = phy_data | TXGBE_MSCC_CMD(TXGBE_MSCA_CMD_WRITE) |
+ TXGBE_MSCC_BUSY;
+ wr32(hw, TXGBE_MSCC, command);
+
+ /* wait to complete */
+ status = po32m(hw, TXGBE_MSCC,
+ TXGBE_MSCC_BUSY, ~TXGBE_MSCC_BUSY,
+ TXGBE_MDIO_TIMEOUT, 10);
+ if (status != 0) {
+ ERROR_REPORT1(TXGBE_ERROR_POLLING,
+ "PHY address command did not complete.\n");
+ return TXGBE_ERR_PHY;
+ }
+
+ return 0;
+}
+
+/**
+ * txgbe_write_phy_reg - Writes a value to specified PHY register
+ * using the SWFW lock - this function is needed in most cases
+ * @hw: pointer to hardware structure
+ * @reg_addr: 32 bit PHY register to write
+ * @device_type: 5 bit device type
+ * @phy_data: Data to write to the PHY register
+ **/
+s32 txgbe_write_phy_reg(struct txgbe_hw *hw, u32 reg_addr,
+ u32 device_type, u16 phy_data)
+{
+ s32 status;
+ u32 gssr = hw->phy.phy_semaphore_mask;
+
+ DEBUGFUNC("\n");
+
+ if (TCALL(hw, mac.ops.acquire_swfw_sync, gssr) == 0) {
+ status = txgbe_write_phy_reg_mdi(hw, reg_addr, device_type,
+ phy_data);
+ TCALL(hw, mac.ops.release_swfw_sync, gssr);
+ } else {
+ status = TXGBE_ERR_SWFW_SYNC;
+ }
+
+ return status;
+}
+
+MTD_STATUS txgbe_read_mdio(
+	MTD_DEV *dev,
+ MTD_U16 port,
+ MTD_U16 mmd,
+ MTD_U16 reg,
+ MTD_U16 *value)
+{
+ struct txgbe_hw *hw = (struct txgbe_hw *)(dev->appData);
+
+ if (hw->phy.addr != port)
+ return MTD_FAIL;
+ return txgbe_read_phy_reg(hw, reg, mmd, value);
+}
+
+MTD_STATUS txgbe_write_mdio(
+	MTD_DEV *dev,
+ MTD_U16 port,
+ MTD_U16 mmd,
+ MTD_U16 reg,
+ MTD_U16 value)
+{
+ struct txgbe_hw *hw = (struct txgbe_hw *)(dev->appData);
+
+ if (hw->phy.addr != port)
+ return MTD_FAIL;
+
+ return txgbe_write_phy_reg(hw, reg, mmd, value);
+}
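+
+/* txgbe_read_mdio()/txgbe_write_mdio() are the MDIO accessors handed to the
+ * Marvell MTD library when hw->phy_dev is set up; each callback recovers the
+ * owning hw from dev->appData and rejects mismatched port addresses. A
+ * wiring sketch (mtdLoadDriver() is referenced throughout the MTD API, but
+ * its exact parameter list is an assumption here):
+ *
+ *	mtdLoadDriver(txgbe_read_mdio, txgbe_write_mdio, ..., hw,
+ *		      &hw->phy_dev);
+ */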
+
+/**
+ * txgbe_setup_phy_link - Set and restart auto-neg
+ * @hw: pointer to hardware structure
+ * @speed_set: unused, kept for interface compatibility
+ * @autoneg_wait_to_complete: true to program the advertised speeds and
+ *                            restart auto-negotiation; false to only poll
+ *                            for an existing speed/duplex resolution
+ *
+ * Returns the resolved link speed, or TXGBE_LINK_SPEED_UNKNOWN while
+ * auto-negotiation has not completed.
+ **/
+u32 txgbe_setup_phy_link(struct txgbe_hw *hw, u32 speed_set, bool autoneg_wait_to_complete)
+{
+ u16 speed = MTD_ADV_NONE;
+ MTD_DEV_PTR devptr = &hw->phy_dev;
+ MTD_BOOL anDone = MTD_FALSE;
+ u16 port = hw->phy.addr;
+
+ UNREFERENCED_PARAMETER(speed_set);
+ DEBUGFUNC("\n");
+
+ if (!autoneg_wait_to_complete) {
+ mtdAutonegIsSpeedDuplexResolutionDone(devptr, port, &anDone);
+ if (anDone) {
+ mtdGetAutonegSpeedDuplexResolution(devptr, port, &speed);
+ }
+ } else {
+ if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_10GB_FULL)
+ speed |= MTD_SPEED_10GIG_FD;
+ if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_1GB_FULL)
+ speed |= MTD_SPEED_1GIG_FD;
+ if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_100_FULL)
+ speed |= MTD_SPEED_100M_FD;
+ if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_10_FULL)
+ speed |= MTD_SPEED_10M_FD;
+ mtdEnableSpeeds(devptr, port, speed, MTD_TRUE);
+
+ /* wait autoneg to be done */
+		/* AN restarted; the result must be polled for later */
+ }
+
+ switch (speed) {
+ case MTD_SPEED_10GIG_FD:
+ return TXGBE_LINK_SPEED_10GB_FULL;
+ case MTD_SPEED_1GIG_FD:
+ return TXGBE_LINK_SPEED_1GB_FULL;
+ case MTD_SPEED_100M_FD:
+ return TXGBE_LINK_SPEED_100_FULL;
+ case MTD_SPEED_10M_FD:
+ return TXGBE_LINK_SPEED_10_FULL;
+ default:
+ return TXGBE_LINK_SPEED_UNKNOWN;
+ }
+
+}
+
+/**
+ * txgbe_setup_phy_link_speed - Sets the auto advertised capabilities
+ * @hw: pointer to hardware structure
+ * @speed: new link speed
+ **/
+u32 txgbe_setup_phy_link_speed(struct txgbe_hw *hw,
+ u32 speed,
+ bool autoneg_wait_to_complete)
+{
+
+ DEBUGFUNC("\n");
+
+ /*
+ * Clear autoneg_advertised and set new values based on input link
+ * speed.
+ */
+ hw->phy.autoneg_advertised = 0;
+
+ if (speed & TXGBE_LINK_SPEED_10GB_FULL)
+ hw->phy.autoneg_advertised |= TXGBE_LINK_SPEED_10GB_FULL;
+
+ if (speed & TXGBE_LINK_SPEED_1GB_FULL)
+ hw->phy.autoneg_advertised |= TXGBE_LINK_SPEED_1GB_FULL;
+
+ if (speed & TXGBE_LINK_SPEED_100_FULL)
+ hw->phy.autoneg_advertised |= TXGBE_LINK_SPEED_100_FULL;
+
+ if (speed & TXGBE_LINK_SPEED_10_FULL)
+ hw->phy.autoneg_advertised |= TXGBE_LINK_SPEED_10_FULL;
+
+ /* Setup link based on the new speed settings */
+ return txgbe_setup_phy_link(hw, speed, autoneg_wait_to_complete);
+}
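+
+/* Usage sketch: advertise 10G and 1G full duplex and restart AN. With
+ * autoneg_wait_to_complete true the advertised speeds are programmed and AN
+ * is restarted, so this call returns TXGBE_LINK_SPEED_UNKNOWN; a later call
+ * with false polls for the resolved speed:
+ *
+ *	u32 link = txgbe_setup_phy_link_speed(hw,
+ *			TXGBE_LINK_SPEED_10GB_FULL | TXGBE_LINK_SPEED_1GB_FULL,
+ *			true);
+ */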
+
+/**
+ * txgbe_get_copper_link_capabilities - Determines link capabilities
+ * @hw: pointer to hardware structure
+ * @speed: pointer to link speed
+ * @autoneg: boolean auto-negotiation value
+ *
+ * Determines the supported link capabilities by reading the PHY auto
+ * negotiation register.
+ **/
+s32 txgbe_get_copper_link_capabilities(struct txgbe_hw *hw,
+ u32 *speed,
+ bool *autoneg)
+{
+ s32 status;
+ u16 speed_ability;
+
+ DEBUGFUNC("\n");
+
+ *speed = 0;
+ *autoneg = true;
+
+ status = mtdHwXmdioRead(&hw->phy_dev, hw->phy.addr,
+ TXGBE_MDIO_PMA_PMD_DEV_TYPE,
+ TXGBE_MDIO_PHY_SPEED_ABILITY, &speed_ability);
+
+ if (status == 0) {
+ if (speed_ability & TXGBE_MDIO_PHY_SPEED_10G)
+ *speed |= TXGBE_LINK_SPEED_10GB_FULL;
+ if (speed_ability & TXGBE_MDIO_PHY_SPEED_1G)
+ *speed |= TXGBE_LINK_SPEED_1GB_FULL;
+ if (speed_ability & TXGBE_MDIO_PHY_SPEED_100M)
+ *speed |= TXGBE_LINK_SPEED_100_FULL;
+ if (speed_ability & TXGBE_MDIO_PHY_SPEED_10M)
+ *speed |= TXGBE_LINK_SPEED_10_FULL;
+ }
+
+ return status;
+}
+
+/**
+ * txgbe_identify_module - Identifies module type
+ * @hw: pointer to hardware structure
+ *
+ * Determines HW type and calls appropriate function.
+ **/
+s32 txgbe_identify_module(struct txgbe_hw *hw)
+{
+ s32 status = TXGBE_ERR_SFP_NOT_PRESENT;
+
+ DEBUGFUNC("\n");
+
+ switch (TCALL(hw, mac.ops.get_media_type)) {
+ case txgbe_media_type_fiber:
+ status = txgbe_identify_sfp_module(hw);
+ break;
+
+ default:
+ hw->phy.sfp_type = txgbe_sfp_type_not_present;
+ status = TXGBE_ERR_SFP_NOT_PRESENT;
+ break;
+ }
+
+ return status;
+}
+
+/**
+ * txgbe_identify_sfp_module - Identifies SFP modules
+ * @hw: pointer to hardware structure
+ *
+ * Searches for and identifies the SFP module and assigns appropriate PHY type.
+ **/
+s32 txgbe_identify_sfp_module(struct txgbe_hw *hw)
+{
+ s32 status = TXGBE_ERR_PHY_ADDR_INVALID;
+ u32 vendor_oui = 0;
+ enum txgbe_sfp_type stored_sfp_type = hw->phy.sfp_type;
+ u8 identifier = 0;
+ u8 comp_codes_1g = 0;
+ u8 comp_codes_10g = 0;
+ u8 oui_bytes[3] = {0, 0, 0};
+ u8 cable_tech = 0;
+ u8 cable_spec = 0;
+
+ DEBUGFUNC("\n");
+
+ if (TCALL(hw, mac.ops.get_media_type) != txgbe_media_type_fiber) {
+ hw->phy.sfp_type = txgbe_sfp_type_not_present;
+ status = TXGBE_ERR_SFP_NOT_PRESENT;
+ goto out;
+ }
+
+ /* LAN ID is needed for I2C access */
+ txgbe_init_i2c(hw);
+ status = TCALL(hw, phy.ops.read_i2c_eeprom,
+ TXGBE_SFF_IDENTIFIER,
+ &identifier);
+
+ if (status != 0)
+ goto err_read_i2c_eeprom;
+
+ if (identifier != TXGBE_SFF_IDENTIFIER_SFP) {
+ hw->phy.type = txgbe_phy_sfp_unsupported;
+ status = TXGBE_ERR_SFP_NOT_SUPPORTED;
+ } else {
+ status = TCALL(hw, phy.ops.read_i2c_eeprom,
+ TXGBE_SFF_1GBE_COMP_CODES,
+ &comp_codes_1g);
+
+ if (status != 0)
+ goto err_read_i2c_eeprom;
+
+ status = TCALL(hw, phy.ops.read_i2c_eeprom,
+ TXGBE_SFF_10GBE_COMP_CODES,
+ &comp_codes_10g);
+
+ if (status != 0)
+ goto err_read_i2c_eeprom;
+ status = TCALL(hw, phy.ops.read_i2c_eeprom,
+ TXGBE_SFF_CABLE_TECHNOLOGY,
+ &cable_tech);
+
+ if (status != 0)
+ goto err_read_i2c_eeprom;
+
+ /* ID Module
+ * =========
+ * 0 SFP_DA_CU
+ * 1 SFP_SR
+ * 2 SFP_LR
+ * 3 SFP_DA_CORE0
+ * 4 SFP_DA_CORE1
+ * 5 SFP_SR/LR_CORE0
+ * 6 SFP_SR/LR_CORE1
+ * 7 SFP_act_lmt_DA_CORE0
+ * 8 SFP_act_lmt_DA_CORE1
+ * 9 SFP_1g_cu_CORE0
+ * 10 SFP_1g_cu_CORE1
+ * 11 SFP_1g_sx_CORE0
+ * 12 SFP_1g_sx_CORE1
+ */
+ {
+ if (cable_tech & TXGBE_SFF_DA_PASSIVE_CABLE) {
+ if (hw->bus.lan_id == 0)
+ hw->phy.sfp_type =
+ txgbe_sfp_type_da_cu_core0;
+ else
+ hw->phy.sfp_type =
+ txgbe_sfp_type_da_cu_core1;
+ } else if (cable_tech & TXGBE_SFF_DA_ACTIVE_CABLE) {
+ TCALL(hw, phy.ops.read_i2c_eeprom,
+ TXGBE_SFF_CABLE_SPEC_COMP,
+ &cable_spec);
+ if (cable_spec &
+ TXGBE_SFF_DA_SPEC_ACTIVE_LIMITING) {
+ if (hw->bus.lan_id == 0)
+ hw->phy.sfp_type =
+ txgbe_sfp_type_da_act_lmt_core0;
+ else
+ hw->phy.sfp_type =
+ txgbe_sfp_type_da_act_lmt_core1;
+ } else {
+ hw->phy.sfp_type =
+ txgbe_sfp_type_unknown;
+ }
+ } else if (comp_codes_10g &
+ (TXGBE_SFF_10GBASESR_CAPABLE |
+ TXGBE_SFF_10GBASELR_CAPABLE)) {
+ if (hw->bus.lan_id == 0)
+ hw->phy.sfp_type =
+ txgbe_sfp_type_srlr_core0;
+ else
+ hw->phy.sfp_type =
+ txgbe_sfp_type_srlr_core1;
+ } else if (comp_codes_1g & TXGBE_SFF_1GBASET_CAPABLE) {
+ if (hw->bus.lan_id == 0)
+ hw->phy.sfp_type =
+ txgbe_sfp_type_1g_cu_core0;
+ else
+ hw->phy.sfp_type =
+ txgbe_sfp_type_1g_cu_core1;
+ } else if (comp_codes_1g & TXGBE_SFF_1GBASESX_CAPABLE) {
+ if (hw->bus.lan_id == 0)
+ hw->phy.sfp_type =
+ txgbe_sfp_type_1g_sx_core0;
+ else
+ hw->phy.sfp_type =
+ txgbe_sfp_type_1g_sx_core1;
+ } else if (comp_codes_1g & TXGBE_SFF_1GBASELX_CAPABLE) {
+ if (hw->bus.lan_id == 0)
+ hw->phy.sfp_type =
+ txgbe_sfp_type_1g_lx_core0;
+ else
+ hw->phy.sfp_type =
+ txgbe_sfp_type_1g_lx_core1;
+ } else {
+ hw->phy.sfp_type = txgbe_sfp_type_unknown;
+ }
+ }
+
+ if (hw->phy.sfp_type != stored_sfp_type)
+ hw->phy.sfp_setup_needed = true;
+
+ /* Determine if the SFP+ PHY is dual speed or not. */
+ hw->phy.multispeed_fiber = false;
+ if (((comp_codes_1g & TXGBE_SFF_1GBASESX_CAPABLE) &&
+ (comp_codes_10g & TXGBE_SFF_10GBASESR_CAPABLE)) ||
+ ((comp_codes_1g & TXGBE_SFF_1GBASELX_CAPABLE) &&
+ (comp_codes_10g & TXGBE_SFF_10GBASELR_CAPABLE)))
+ hw->phy.multispeed_fiber = true;
+
+ /* Determine PHY vendor */
+ if (hw->phy.type != txgbe_phy_nl) {
+ hw->phy.id = identifier;
+ status = TCALL(hw, phy.ops.read_i2c_eeprom,
+ TXGBE_SFF_VENDOR_OUI_BYTE0,
+ &oui_bytes[0]);
+
+ if (status != 0)
+ goto err_read_i2c_eeprom;
+
+ status = TCALL(hw, phy.ops.read_i2c_eeprom,
+ TXGBE_SFF_VENDOR_OUI_BYTE1,
+ &oui_bytes[1]);
+
+ if (status != 0)
+ goto err_read_i2c_eeprom;
+
+ status = TCALL(hw, phy.ops.read_i2c_eeprom,
+ TXGBE_SFF_VENDOR_OUI_BYTE2,
+ &oui_bytes[2]);
+
+ if (status != 0)
+ goto err_read_i2c_eeprom;
+
+ vendor_oui =
+ ((oui_bytes[0] << TXGBE_SFF_VENDOR_OUI_BYTE0_SHIFT) |
+ (oui_bytes[1] << TXGBE_SFF_VENDOR_OUI_BYTE1_SHIFT) |
+ (oui_bytes[2] << TXGBE_SFF_VENDOR_OUI_BYTE2_SHIFT));
+
+ switch (vendor_oui) {
+ case TXGBE_SFF_VENDOR_OUI_TYCO:
+ if (cable_tech & TXGBE_SFF_DA_PASSIVE_CABLE)
+ hw->phy.type =
+ txgbe_phy_sfp_passive_tyco;
+ break;
+ case TXGBE_SFF_VENDOR_OUI_FTL:
+ if (cable_tech & TXGBE_SFF_DA_ACTIVE_CABLE)
+ hw->phy.type = txgbe_phy_sfp_ftl_active;
+ else
+ hw->phy.type = txgbe_phy_sfp_ftl;
+ break;
+ case TXGBE_SFF_VENDOR_OUI_AVAGO:
+ hw->phy.type = txgbe_phy_sfp_avago;
+ break;
+ case TXGBE_SFF_VENDOR_OUI_INTEL:
+ hw->phy.type = txgbe_phy_sfp_intel;
+ break;
+ default:
+ if (cable_tech & TXGBE_SFF_DA_PASSIVE_CABLE)
+ hw->phy.type =
+ txgbe_phy_sfp_passive_unknown;
+ else if (cable_tech & TXGBE_SFF_DA_ACTIVE_CABLE)
+ hw->phy.type =
+ txgbe_phy_sfp_active_unknown;
+ else
+ hw->phy.type = txgbe_phy_sfp_unknown;
+ break;
+ }
+ }
+
+ /* Allow any DA cable vendor */
+ if (cable_tech & (TXGBE_SFF_DA_PASSIVE_CABLE |
+ TXGBE_SFF_DA_ACTIVE_CABLE)) {
+ status = 0;
+ goto out;
+ }
+
+ /* Verify supported 1G SFP modules */
+ if (comp_codes_10g == 0 &&
+ !(hw->phy.sfp_type == txgbe_sfp_type_1g_cu_core1 ||
+ hw->phy.sfp_type == txgbe_sfp_type_1g_cu_core0 ||
+ hw->phy.sfp_type == txgbe_sfp_type_1g_lx_core0 ||
+ hw->phy.sfp_type == txgbe_sfp_type_1g_lx_core1 ||
+ hw->phy.sfp_type == txgbe_sfp_type_1g_sx_core0 ||
+ hw->phy.sfp_type == txgbe_sfp_type_1g_sx_core1)) {
+ hw->phy.type = txgbe_phy_sfp_unsupported;
+ status = TXGBE_ERR_SFP_NOT_SUPPORTED;
+ goto out;
+ }
+ }
+
+out:
+ return status;
+
+err_read_i2c_eeprom:
+ hw->phy.sfp_type = txgbe_sfp_type_not_present;
+ if (hw->phy.type != txgbe_phy_nl) {
+ hw->phy.id = 0;
+ hw->phy.type = txgbe_phy_unknown;
+ }
+ return TXGBE_ERR_SFP_NOT_PRESENT;
+}
+
+s32 txgbe_init_i2c(struct txgbe_hw *hw)
+{
+
+ wr32(hw, TXGBE_I2C_ENABLE, 0);
+
+ wr32(hw, TXGBE_I2C_CON,
+ (TXGBE_I2C_CON_MASTER_MODE |
+ TXGBE_I2C_CON_SPEED(1) |
+ TXGBE_I2C_CON_RESTART_EN |
+ TXGBE_I2C_CON_SLAVE_DISABLE));
+	/* Default address is 0xA0; bit 0 selects read/write */
+ wr32(hw, TXGBE_I2C_TAR, TXGBE_I2C_SLAVE_ADDR);
+ wr32(hw, TXGBE_I2C_SS_SCL_HCNT, 600);
+ wr32(hw, TXGBE_I2C_SS_SCL_LCNT, 600);
+	wr32(hw, TXGBE_I2C_RX_TL, 0); /* 1 byte triggers the rx-full signal */
+ wr32(hw, TXGBE_I2C_TX_TL, 4);
+ wr32(hw, TXGBE_I2C_SCL_STUCK_TIMEOUT, 0xFFFFFF);
+ wr32(hw, TXGBE_I2C_SDA_STUCK_TIMEOUT, 0xFFFFFF);
+
+ wr32(hw, TXGBE_I2C_INTR_MASK, 0);
+ wr32(hw, TXGBE_I2C_ENABLE, 1);
+ return 0;
+}
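+
+/* Typical sequence, as in txgbe_identify_sfp_module() above: program the
+ * controller with txgbe_init_i2c(), issue the EEPROM reads, then disable it
+ * with txgbe_clear_i2c() below once master activity has drained. Sketch:
+ *
+ *	u8 id;
+ *
+ *	txgbe_init_i2c(hw);
+ *	if (txgbe_read_i2c_eeprom(hw, TXGBE_SFF_IDENTIFIER, &id) == 0 &&
+ *	    id == TXGBE_SFF_IDENTIFIER_SFP)
+ *		... an SFP module is present ...
+ */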
+
+s32 txgbe_clear_i2c(struct txgbe_hw *hw)
+{
+ s32 status = 0;
+
+ /* wait for completion */
+ status = po32m(hw, TXGBE_I2C_STATUS,
+ TXGBE_I2C_STATUS_MST_ACTIVITY, ~TXGBE_I2C_STATUS_MST_ACTIVITY,
+ TXGBE_I2C_TIMEOUT, 10);
+ if (status != 0)
+ goto out;
+
+ wr32(hw, TXGBE_I2C_ENABLE, 0);
+
+out:
+ return status;
+}
+
+/**
+ * txgbe_read_i2c_eeprom - Reads 8 bit EEPROM word over I2C interface
+ * @hw: pointer to hardware structure
+ * @byte_offset: EEPROM byte offset to read
+ * @eeprom_data: value read
+ *
+ * Performs byte read operation to SFP module's EEPROM over I2C interface.
+ **/
+s32 txgbe_read_i2c_eeprom(struct txgbe_hw *hw, u8 byte_offset,
+ u8 *eeprom_data)
+{
+ DEBUGFUNC("\n");
+
+ return TCALL(hw, phy.ops.read_i2c_byte, byte_offset,
+ TXGBE_I2C_EEPROM_DEV_ADDR,
+ eeprom_data);
+}
+
+/**
+ * txgbe_read_i2c_sff8472 - Reads 8 bit word over I2C interface
+ * @hw: pointer to hardware structure
+ * @byte_offset: byte offset at address 0xA2
+ * @sff8472_data: value read
+ *
+ * Performs byte read operation to SFP module's SFF-8472 data over I2C.
+ **/
+s32 txgbe_read_i2c_sff8472(struct txgbe_hw *hw, u8 byte_offset,
+ u8 *sff8472_data)
+{
+ return TCALL(hw, phy.ops.read_i2c_byte, byte_offset,
+ TXGBE_I2C_EEPROM_DEV_ADDR2,
+ sff8472_data);
+}
+
+/**
+ * txgbe_write_i2c_eeprom - Writes 8 bit EEPROM word over I2C interface
+ * @hw: pointer to hardware structure
+ * @byte_offset: EEPROM byte offset to write
+ * @eeprom_data: value to write
+ *
+ * Performs byte write operation to SFP module's EEPROM over I2C interface.
+ **/
+s32 txgbe_write_i2c_eeprom(struct txgbe_hw *hw, u8 byte_offset,
+ u8 eeprom_data)
+{
+ DEBUGFUNC("\n");
+
+ return TCALL(hw, phy.ops.write_i2c_byte, byte_offset,
+ TXGBE_I2C_EEPROM_DEV_ADDR,
+ eeprom_data);
+}
+
+/**
+ * txgbe_read_i2c_byte_int - Reads 8 bit word over I2C
+ * @hw: pointer to hardware structure
+ * @byte_offset: byte offset to read
+ * @dev_addr: I2C device address (unused here; the TAR register selects it)
+ * @data: value read
+ * @lock: true to take and release the semaphore
+ *
+ * Performs byte read operation to SFP module's EEPROM over I2C interface at
+ * a specified device address.
+ **/
+STATIC s32 txgbe_read_i2c_byte_int(struct txgbe_hw *hw, u8 byte_offset,
+ u8 dev_addr, u8 *data, bool lock)
+{
+ s32 status = 0;
+ u32 swfw_mask = hw->phy.phy_semaphore_mask;
+
+ UNREFERENCED_PARAMETER(dev_addr);
+
+ if (lock && 0 != TCALL(hw, mac.ops.acquire_swfw_sync, swfw_mask))
+ return TXGBE_ERR_SWFW_SYNC;
+
+ /* wait tx empty */
+ status = po32m(hw, TXGBE_I2C_RAW_INTR_STAT,
+ TXGBE_I2C_INTR_STAT_TX_EMPTY, TXGBE_I2C_INTR_STAT_TX_EMPTY,
+ TXGBE_I2C_TIMEOUT, 10);
+ if (status != 0)
+ goto out;
+
+ /* read data */
+ wr32(hw, TXGBE_I2C_DATA_CMD,
+ byte_offset | TXGBE_I2C_DATA_CMD_STOP);
+ wr32(hw, TXGBE_I2C_DATA_CMD, TXGBE_I2C_DATA_CMD_READ);
+
+ /* wait for read complete */
+ status = po32m(hw, TXGBE_I2C_RAW_INTR_STAT,
+ TXGBE_I2C_INTR_STAT_RX_FULL, TXGBE_I2C_INTR_STAT_RX_FULL,
+ TXGBE_I2C_TIMEOUT, 10);
+ if (status != 0)
+ goto out;
+
+ *data = 0xFF & rd32(hw, TXGBE_I2C_DATA_CMD);
+
+out:
+ if (lock)
+ TCALL(hw, mac.ops.release_swfw_sync, swfw_mask);
+ return status;
+}
+
+/**
+ * txgbe_switch_i2c_slave_addr - Switch I2C slave address
+ * @hw: pointer to hardware structure
+ * @dev_addr: 8-bit slave address to switch to
+ *
+ **/
+s32 txgbe_switch_i2c_slave_addr(struct txgbe_hw *hw, u8 dev_addr)
+{
+ wr32(hw, TXGBE_I2C_ENABLE, 0);
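+	/* TAR takes the 7-bit address; dev_addr is the 8-bit form (e.g. 0xA0 -> 0x50) */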
+ wr32(hw, TXGBE_I2C_TAR, dev_addr >> 1);
+ wr32(hw, TXGBE_I2C_ENABLE, 1);
+ return 0;
+}
+
+
+/**
+ * txgbe_read_i2c_byte - Reads 8 bit word over I2C
+ * @hw: pointer to hardware structure
+ * @byte_offset: byte offset to read
+ * @dev_addr: I2C device address
+ * @data: value read
+ *
+ * Performs byte read operation to SFP module's EEPROM over I2C interface at
+ * a specified device address.
+ **/
+s32 txgbe_read_i2c_byte(struct txgbe_hw *hw, u8 byte_offset,
+ u8 dev_addr, u8 *data)
+{
+ txgbe_switch_i2c_slave_addr(hw, dev_addr);
+
+ return txgbe_read_i2c_byte_int(hw, byte_offset, dev_addr,
+ data, true);
+}
+
+/**
+ * txgbe_write_i2c_byte_int - Writes 8 bit word over I2C
+ * @hw: pointer to hardware structure
+ * @byte_offset: byte offset to write
+ * @dev_addr: I2C device address (unused here; the TAR register selects it)
+ * @data: value to write
+ * @lock: true to take and release the semaphore
+ *
+ * Performs byte write operation to SFP module's EEPROM over I2C interface at
+ * a specified device address.
+ **/
+STATIC s32 txgbe_write_i2c_byte_int(struct txgbe_hw *hw, u8 byte_offset,
+ u8 dev_addr, u8 data, bool lock)
+{
+ s32 status = 0;
+ u32 swfw_mask = hw->phy.phy_semaphore_mask;
+
+ UNREFERENCED_PARAMETER(dev_addr);
+
+ if (lock && 0 != TCALL(hw, mac.ops.acquire_swfw_sync, swfw_mask))
+ return TXGBE_ERR_SWFW_SYNC;
+
+ /* wait tx empty */
+ status = po32m(hw, TXGBE_I2C_RAW_INTR_STAT,
+ TXGBE_I2C_INTR_STAT_TX_EMPTY, TXGBE_I2C_INTR_STAT_TX_EMPTY,
+ TXGBE_I2C_TIMEOUT, 10);
+ if (status != 0)
+ goto out;
+
+ wr32(hw, TXGBE_I2C_DATA_CMD,
+ byte_offset | TXGBE_I2C_DATA_CMD_STOP);
+ wr32(hw, TXGBE_I2C_DATA_CMD,
+ data | TXGBE_I2C_DATA_CMD_WRITE);
+
+ /* wait for write complete */
+ status = po32m(hw, TXGBE_I2C_RAW_INTR_STAT,
+ TXGBE_I2C_INTR_STAT_RX_FULL, TXGBE_I2C_INTR_STAT_RX_FULL,
+ TXGBE_I2C_TIMEOUT, 10);
+
+out:
+ if (lock)
+ TCALL(hw, mac.ops.release_swfw_sync, swfw_mask);
+
+ return status;
+}
+
+/**
+ * txgbe_write_i2c_byte - Writes 8 bit word over I2C
+ * @hw: pointer to hardware structure
+ * @byte_offset: byte offset to write
+ * @dev_addr: I2C device address
+ * @data: value to write
+ *
+ * Performs byte write operation to SFP module's EEPROM over I2C interface at
+ * a specified device address.
+ **/
+s32 txgbe_write_i2c_byte(struct txgbe_hw *hw, u8 byte_offset,
+ u8 dev_addr, u8 data)
+{
+ return txgbe_write_i2c_byte_int(hw, byte_offset, dev_addr,
+ data, true);
+}
+
+/**
+ * txgbe_tn_check_overtemp - Checks if an overtemp occurred.
+ * @hw: pointer to hardware structure
+ *
+ * Checks the thermal sensor alarm status for over- or under-temperature events
+ **/
+s32 txgbe_tn_check_overtemp(struct txgbe_hw *hw)
+{
+ s32 status = 0;
+ u32 ts_state;
+
+ DEBUGFUNC("\n");
+
+	/* Check whether a thermal sensor alarm was triggered */
+ ts_state = rd32(hw, TXGBE_TS_ALARM_ST);
+
+ if (ts_state & TXGBE_TS_ALARM_ST_DALARM)
+ status = TXGBE_ERR_UNDERTEMP;
+ else if (ts_state & TXGBE_TS_ALARM_ST_ALARM)
+ status = TXGBE_ERR_OVERTEMP;
+
+ return status;
+}
+
+
+s32 txgbe_init_external_phy(struct txgbe_hw *hw)
+{
+ s32 status = 0;
+
+ MTD_DEV_PTR devptr = &(hw->phy_dev);
+
+ hw->phy.addr = 0;
+
+ devptr->appData = hw;
+ status = mtdLoadDriver(txgbe_read_mdio,
+ txgbe_write_mdio,
+ MTD_FALSE,
+ NULL,
+ NULL,
+ NULL,
+ NULL,
+ hw->phy.addr,
+ devptr);
+ if (status != 0) {
+ ERROR_REPORT1(TXGBE_ERROR_INVALID_STATE,
+			"External PHY initialization failed.\n");
+ return TXGBE_ERR_PHY;
+ }
+
+ return status;
+}
+
+s32 txgbe_set_phy_pause_advertisement(struct txgbe_hw *hw, u32 pause_bit)
+{
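+	/* pause_bit is expected in TXGBE_SR_AN_MMD_ADV_REG1_PAUSE_* form,
+	 * where bits 10:11 carry the SYM/ASM pause bits, hence the shift.
+	 */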
+ return mtdSetPauseAdvertisement(&hw->phy_dev, hw->phy.addr,
+		(pause_bit >> 10) & 0x3, MTD_FALSE);
+}
+
+s32 txgbe_get_phy_advertised_pause(struct txgbe_hw *hw, u8 *pause_bit)
+{
+ u16 value;
+ s32 status = 0;
+
+ status = mtdHwXmdioRead(&hw->phy_dev, hw->phy.addr,
+ TXGBE_MDIO_AUTO_NEG_DEV_TYPE,
+ TXGBE_MDIO_AUTO_NEG_ADVT, &value);
+	*pause_bit = (u8)((value >> 10) & 0x3);
+	return status;
+}
+
+s32 txgbe_get_lp_advertised_pause(struct txgbe_hw *hw, u8 *pause_bit)
+{
+ return mtdGetLPAdvertisedPause(&hw->phy_dev, hw->phy.addr, pause_bit);
+}
diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe_phy.h b/drivers/net/ethernet/netswift/txgbe/txgbe_phy.h
new file mode 100644
index 0000000000000..f033b43cf4fe0
--- /dev/null
+++ b/drivers/net/ethernet/netswift/txgbe/txgbe_phy.h
@@ -0,0 +1,190 @@
+/*
+ * WangXun 10 Gigabit PCI Express Linux driver
+ * Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * based on ixgbe_phy.h, Copyright(c) 1999 - 2017 Intel Corporation.
+ * Contact Information:
+ * Linux NICS <linux.nics(a)intel.com>
+ * e1000-devel Mailing List <e1000-devel(a)lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+
+#ifndef _TXGBE_PHY_H_
+#define _TXGBE_PHY_H_
+
+#include "txgbe.h"
+
+#define TXGBE_I2C_EEPROM_DEV_ADDR 0xA0
+#define TXGBE_I2C_EEPROM_DEV_ADDR2 0xA2
+#define TXGBE_I2C_EEPROM_BANK_LEN 0xFF
+
+/* EEPROM byte offsets */
+#define TXGBE_SFF_IDENTIFIER 0x0
+#define TXGBE_SFF_IDENTIFIER_SFP 0x3
+#define TXGBE_SFF_VENDOR_OUI_BYTE0 0x25
+#define TXGBE_SFF_VENDOR_OUI_BYTE1 0x26
+#define TXGBE_SFF_VENDOR_OUI_BYTE2 0x27
+#define TXGBE_SFF_1GBE_COMP_CODES 0x6
+#define TXGBE_SFF_10GBE_COMP_CODES 0x3
+#define TXGBE_SFF_CABLE_TECHNOLOGY 0x8
+#define TXGBE_SFF_CABLE_SPEC_COMP 0x3C
+#define TXGBE_SFF_SFF_8472_SWAP 0x5C
+#define TXGBE_SFF_SFF_8472_COMP 0x5E
+#define TXGBE_SFF_SFF_8472_OSCB 0x6E
+#define TXGBE_SFF_SFF_8472_ESCB 0x76
+#define TXGBE_SFF_IDENTIFIER_QSFP_PLUS 0xD
+#define TXGBE_SFF_QSFP_VENDOR_OUI_BYTE0 0xA5
+#define TXGBE_SFF_QSFP_VENDOR_OUI_BYTE1 0xA6
+#define TXGBE_SFF_QSFP_VENDOR_OUI_BYTE2 0xA7
+#define TXGBE_SFF_QSFP_CONNECTOR 0x82
+#define TXGBE_SFF_QSFP_10GBE_COMP 0x83
+#define TXGBE_SFF_QSFP_1GBE_COMP 0x86
+#define TXGBE_SFF_QSFP_CABLE_LENGTH 0x92
+#define TXGBE_SFF_QSFP_DEVICE_TECH 0x93
+
+/* Bitmasks */
+#define TXGBE_SFF_DA_PASSIVE_CABLE 0x4
+#define TXGBE_SFF_DA_ACTIVE_CABLE 0x8
+#define TXGBE_SFF_DA_SPEC_ACTIVE_LIMITING 0x4
+#define TXGBE_SFF_1GBASESX_CAPABLE 0x1
+#define TXGBE_SFF_1GBASELX_CAPABLE 0x2
+#define TXGBE_SFF_1GBASET_CAPABLE 0x8
+#define TXGBE_SFF_10GBASESR_CAPABLE 0x10
+#define TXGBE_SFF_10GBASELR_CAPABLE 0x20
+#define TXGBE_SFF_SOFT_RS_SELECT_MASK 0x8
+#define TXGBE_SFF_SOFT_RS_SELECT_10G 0x8
+#define TXGBE_SFF_SOFT_RS_SELECT_1G 0x0
+#define TXGBE_SFF_ADDRESSING_MODE 0x4
+#define TXGBE_SFF_QSFP_DA_ACTIVE_CABLE 0x1
+#define TXGBE_SFF_QSFP_DA_PASSIVE_CABLE 0x8
+#define TXGBE_SFF_QSFP_CONNECTOR_NOT_SEPARABLE 0x23
+#define TXGBE_SFF_QSFP_TRANSMITER_850NM_VCSEL 0x0
+#define TXGBE_I2C_EEPROM_READ_MASK 0x100
+#define TXGBE_I2C_EEPROM_STATUS_MASK 0x3
+#define TXGBE_I2C_EEPROM_STATUS_NO_OPERATION 0x0
+#define TXGBE_I2C_EEPROM_STATUS_PASS 0x1
+#define TXGBE_I2C_EEPROM_STATUS_FAIL 0x2
+#define TXGBE_I2C_EEPROM_STATUS_IN_PROGRESS 0x3
+
+#define TXGBE_CS4227 0xBE /* CS4227 address */
+#define TXGBE_CS4227_GLOBAL_ID_LSB 0
+#define TXGBE_CS4227_SCRATCH 2
+#define TXGBE_CS4227_GLOBAL_ID_VALUE 0x03E5
+#define TXGBE_CS4227_SCRATCH_VALUE 0x5aa5
+#define TXGBE_CS4227_RETRIES 5
+#define TXGBE_CS4227_LINE_SPARE22_MSB 0x12AD /* Reg to program speed */
+#define TXGBE_CS4227_LINE_SPARE24_LSB 0x12B0 /* Reg to program EDC */
+#define TXGBE_CS4227_HOST_SPARE22_MSB 0x1AAD /* Reg to program speed */
+#define TXGBE_CS4227_HOST_SPARE24_LSB 0x1AB0 /* Reg to program EDC */
+#define TXGBE_CS4227_EDC_MODE_CX1 0x0002
+#define TXGBE_CS4227_EDC_MODE_SR 0x0004
+#define TXGBE_CS4227_RESET_HOLD 500 /* microseconds */
+#define TXGBE_CS4227_RESET_DELAY 500 /* milliseconds */
+#define TXGBE_CS4227_CHECK_DELAY 30 /* milliseconds */
+#define TXGBE_PE 0xE0 /* Port expander address */
+#define TXGBE_PE_OUTPUT 1 /* Output register offset */
+#define TXGBE_PE_CONFIG 3 /* Config register offset */
+#define TXGBE_PE_BIT1 (1 << 1)
+
+/* Flow control defines */
+#define TXGBE_TAF_SYM_PAUSE (0x1)
+#define TXGBE_TAF_ASM_PAUSE (0x2)
+
+/* Bit-shift macros */
+#define TXGBE_SFF_VENDOR_OUI_BYTE0_SHIFT 24
+#define TXGBE_SFF_VENDOR_OUI_BYTE1_SHIFT 16
+#define TXGBE_SFF_VENDOR_OUI_BYTE2_SHIFT 8
+
+/* Vendor OUIs: format of OUI is 0x[byte0][byte1][byte2][00] */
+#define TXGBE_SFF_VENDOR_OUI_TYCO 0x00407600
+#define TXGBE_SFF_VENDOR_OUI_FTL 0x00906500
+#define TXGBE_SFF_VENDOR_OUI_AVAGO 0x00176A00
+#define TXGBE_SFF_VENDOR_OUI_INTEL 0x001B2100
+
+/* I2C SDA and SCL timing parameters for standard mode */
+#define TXGBE_I2C_T_HD_STA 4
+#define TXGBE_I2C_T_LOW 5
+#define TXGBE_I2C_T_HIGH 4
+#define TXGBE_I2C_T_SU_STA 5
+#define TXGBE_I2C_T_HD_DATA 5
+#define TXGBE_I2C_T_SU_DATA 1
+#define TXGBE_I2C_T_RISE 1
+#define TXGBE_I2C_T_FALL 1
+#define TXGBE_I2C_T_SU_STO 4
+#define TXGBE_I2C_T_BUF 5
+
+/* SFP+ SFF-8472 Compliance */
+#define TXGBE_SFF_SFF_8472_UNSUP 0x00
+
+
+enum txgbe_phy_type txgbe_get_phy_type_from_id(struct txgbe_hw *hw);
+s32 txgbe_get_phy_id(struct txgbe_hw *hw);
+s32 txgbe_reset_phy(struct txgbe_hw *hw);
+s32 txgbe_read_phy_reg_mdi(struct txgbe_hw *hw, u32 reg_addr, u32 device_type,
+ u16 *phy_data);
+s32 txgbe_write_phy_reg_mdi(struct txgbe_hw *hw, u32 reg_addr, u32 device_type,
+ u16 phy_data);
+s32 txgbe_read_phy_reg(struct txgbe_hw *hw, u32 reg_addr,
+ u32 device_type, u16 *phy_data);
+s32 txgbe_write_phy_reg(struct txgbe_hw *hw, u32 reg_addr,
+ u32 device_type, u16 phy_data);
+u32 txgbe_setup_phy_link(struct txgbe_hw *hw, u32 speed_set, bool autoneg_wait_to_complete);
+u32 txgbe_setup_phy_link_speed(struct txgbe_hw *hw,
+ u32 speed,
+ bool autoneg_wait_to_complete);
+s32 txgbe_get_copper_link_capabilities(struct txgbe_hw *hw,
+ u32 *speed,
+ bool *autoneg);
+s32 txgbe_check_reset_blocked(struct txgbe_hw *hw);
+
+s32 txgbe_identify_module(struct txgbe_hw *hw);
+s32 txgbe_identify_sfp_module(struct txgbe_hw *hw);
+s32 txgbe_tn_check_overtemp(struct txgbe_hw *hw);
+s32 txgbe_init_i2c(struct txgbe_hw *hw);
+s32 txgbe_clear_i2c(struct txgbe_hw *hw);
+s32 txgbe_switch_i2c_slave_addr(struct txgbe_hw *hw, u8 dev_addr);
+s32 txgbe_read_i2c_byte(struct txgbe_hw *hw, u8 byte_offset,
+ u8 dev_addr, u8 *data);
+
+s32 txgbe_write_i2c_byte(struct txgbe_hw *hw, u8 byte_offset,
+ u8 dev_addr, u8 data);
+s32 txgbe_read_i2c_eeprom(struct txgbe_hw *hw, u8 byte_offset,
+ u8 *eeprom_data);
+s32 txgbe_write_i2c_eeprom(struct txgbe_hw *hw, u8 byte_offset,
+ u8 eeprom_data);
+s32 txgbe_read_i2c_sff8472(struct txgbe_hw *hw, u8 byte_offset,
+ u8 *sff8472_data);
+s32 txgbe_init_external_phy(struct txgbe_hw *hw);
+s32 txgbe_set_phy_pause_advertisement(struct txgbe_hw *hw, u32 pause_bit);
+s32 txgbe_get_phy_advertised_pause(struct txgbe_hw *hw, u8 *pause_bit);
+s32 txgbe_get_lp_advertised_pause(struct txgbe_hw *hw, u8 *pause_bit);
+
+MTD_STATUS txgbe_read_mdio(
+	MTD_DEV *dev,
+ MTD_U16 port,
+ MTD_U16 mmd,
+ MTD_U16 reg,
+ MTD_U16 *value);
+
+MTD_STATUS txgbe_write_mdio(
+	MTD_DEV *dev,
+ MTD_U16 port,
+ MTD_U16 mmd,
+ MTD_U16 reg,
+ MTD_U16 value);
+
+
+#endif /* _TXGBE_PHY_H_ */
diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe_ptp.c b/drivers/net/ethernet/netswift/txgbe/txgbe_ptp.c
new file mode 100644
index 0000000000000..4a614a550e47a
--- /dev/null
+++ b/drivers/net/ethernet/netswift/txgbe/txgbe_ptp.c
@@ -0,0 +1,884 @@
+/*
+ * WangXun 10 Gigabit PCI Express Linux driver
+ * Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * based on ixgbe_ptp.c, Copyright(c) 1999 - 2017 Intel Corporation.
+ * Contact Information:
+ * Linux NICS <linux.nics(a)intel.com>
+ * e1000-devel Mailing List <e1000-devel(a)lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+
+#include "txgbe.h"
+#include <linux/ptp_classify.h>
+
+/*
+ * SYSTIME is defined by a fixed point system which allows the user to
+ * define the scale counter increment value at every level change of
+ * the oscillator driving the SYSTIME value. The time unit is determined by
+ * the clock frequency of the oscillator and the TIMINCA register.
+ * The cyclecounter and timecounter structures are used to convert
+ * the scale counter into nanoseconds. SYSTIME registers need to be converted
+ * to ns values by use of only a right shift.
+ * The following math determines the largest incvalue that will fit into
+ * the available bits in the TIMINCA register:
+ * Period * [ 2 ^ ( MaxWidth - PeriodWidth ) ]
+ * PeriodWidth: Number of bits to store the clock period
+ * MaxWidth: The maximum width value of the TIMINCA register
+ * Period: The clock period for the oscillator, which changes based on the link
+ * speed:
+ * At 10Gb link or no link, the period is 6.4 ns.
+ * At 1Gb link, the period is multiplied by 10. (64ns)
+ * At 100Mb link, the period is multiplied by 100. (640ns)
+ * round(): discard the fractional portion of the calculation
+ *
+ * The calculated value allows us to right shift the SYSTIME register
+ * value in order to quickly convert it into a nanosecond clock,
+ * while allowing for the maximum possible adjustment value.
+ *
+ * LinkSpeed ClockFreq ClockPeriod TIMINCA:IV
+ * 10000Mbps 156.25MHz 6.4*10^-9 0xCCCCCC(0xFFFFF/ns)
+ * 1000 Mbps 62.5 MHz 16 *10^-9 0x800000(0x7FFFF/ns)
+ * 100 Mbps 6.25 MHz 160*10^-9 0xA00000(0xFFFF/ns)
+ * 10 Mbps 0.625 MHz 1600*10^-9 0xC7F380(0xFFF/ns)
+ * FPGA 31.25 MHz 32 *10^-9 0x800000(0x3FFFF/ns)
+ *
+ * These diagrams are only for the 10Gb link period
+ *
+ * +--------------+ +--------------+
+ * | 32 | | 8 | 3 | 20 |
+ * +--------------+ +--------------+
+ * \________ 43 bits ______/ fract
+ *
+ * The 43 bit SYSTIME overflows every
+ * 2^43 * 10^-9 / 3600 = 2.4 hours
+ */
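+/*
+ * Worked example for the 10Gb case, assuming the hardware divides the
+ * programmed increment by the IVP period argument (2) used below in
+ * txgbe_ptp_start_cyclecounter: 0xCCCCCC / 2 = 6710886, and
+ * 6710886 / 2^20 = 6.4, i.e. SYSTIME advances by 6.4 ns per 156.25 MHz
+ * clock cycle, matching the table above.
+ */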
+#define TXGBE_INCVAL_10GB 0xCCCCCC
+#define TXGBE_INCVAL_1GB 0x800000
+#define TXGBE_INCVAL_100 0xA00000
+#define TXGBE_INCVAL_10 0xC7F380
+#define TXGBE_INCVAL_FPGA 0x800000
+
+#define TXGBE_INCVAL_SHIFT_10GB 20
+#define TXGBE_INCVAL_SHIFT_1GB 18
+#define TXGBE_INCVAL_SHIFT_100 15
+#define TXGBE_INCVAL_SHIFT_10 12
+#define TXGBE_INCVAL_SHIFT_FPGA 17
+
+#define TXGBE_OVERFLOW_PERIOD (HZ * 30)
+#define TXGBE_PTP_TX_TIMEOUT (HZ)
+
+/**
+ * txgbe_ptp_read - read raw cycle counter (to be used by time counter)
+ * @hw_cc: the cyclecounter structure
+ *
+ * this function reads the cyclecounter registers and is called by the
+ * cyclecounter structure used to construct a ns counter from the
+ * arbitrary fixed point registers
+ */
+static u64 txgbe_ptp_read(const struct cyclecounter *hw_cc)
+{
+ struct txgbe_adapter *adapter =
+ container_of(hw_cc, struct txgbe_adapter, hw_cc);
+ struct txgbe_hw *hw = &adapter->hw;
+ u64 stamp = 0;
+
+ stamp |= (u64)rd32(hw, TXGBE_TSC_1588_SYSTIML);
+ stamp |= (u64)rd32(hw, TXGBE_TSC_1588_SYSTIMH) << 32;
+
+ return stamp;
+}
+
+/**
+ * txgbe_ptp_convert_to_hwtstamp - convert register value to hw timestamp
+ * @adapter: private adapter structure
+ * @hwtstamp: stack timestamp structure
+ * @systim: unsigned 64bit system time value
+ *
+ * We need to convert the adapter's RX/TXSTMP registers into a hwtstamp value
+ * which can be used by the stack's ptp functions.
+ *
+ * The lock is used to protect consistency of the cyclecounter and the SYSTIME
+ * registers. However, it does not need to protect against the Rx or Tx
+ * timestamp registers, as there can't be a new timestamp until the old one is
+ * unlatched by reading.
+ *
+ * In addition to the timestamp in hardware, some controllers need a software
+ * overflow cyclecounter, and this function takes this into account as well.
+ **/
+static void txgbe_ptp_convert_to_hwtstamp(struct txgbe_adapter *adapter,
+ struct skb_shared_hwtstamps *hwtstamp,
+ u64 timestamp)
+{
+ unsigned long flags;
+ u64 ns;
+
+ memset(hwtstamp, 0, sizeof(*hwtstamp));
+
+ spin_lock_irqsave(&adapter->tmreg_lock, flags);
+ ns = timecounter_cyc2time(&adapter->hw_tc, timestamp);
+ spin_unlock_irqrestore(&adapter->tmreg_lock, flags);
+
+ hwtstamp->hwtstamp = ns_to_ktime(ns);
+}
+
+/**
+ * txgbe_ptp_adjfreq
+ * @ptp: the ptp clock structure
+ * @ppb: parts per billion adjustment from base
+ *
+ * adjust the frequency of the ptp cycle counter by the
+ * indicated ppb from the base frequency.
+ */
+static int txgbe_ptp_adjfreq(struct ptp_clock_info *ptp, s32 ppb)
+{
+ struct txgbe_adapter *adapter =
+ container_of(ptp, struct txgbe_adapter, ptp_caps);
+ struct txgbe_hw *hw = &adapter->hw;
+ u64 freq, incval;
+ u32 diff;
+ int neg_adj = 0;
+
+ if (ppb < 0) {
+ neg_adj = 1;
+ ppb = -ppb;
+ }
+
+ smp_mb();
+ incval = READ_ONCE(adapter->base_incval);
+
+ freq = incval;
+ freq *= ppb;
+ diff = div_u64(freq, 1000000000ULL);
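+	/* e.g. ppb = 1000 (1 ppm) with the 10Gb incval 0xCCCCCC gives diff = 13 */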
+
+ incval = neg_adj ? (incval - diff) : (incval + diff);
+
+ if (incval > TXGBE_TSC_1588_INC_IV(~0))
+ e_dev_warn("PTP ppb adjusted SYSTIME rate overflowed!\n");
+ wr32(hw, TXGBE_TSC_1588_INC,
+ TXGBE_TSC_1588_INC_IVP(incval, 2));
+
+ return 0;
+}
+
+
+/**
+ * txgbe_ptp_adjtime
+ * @ptp: the ptp clock structure
+ * @delta: offset to adjust the cycle counter by ns
+ *
+ * adjust the timer by resetting the timecounter structure.
+ */
+static int txgbe_ptp_adjtime(struct ptp_clock_info *ptp,
+ s64 delta)
+{
+ struct txgbe_adapter *adapter =
+ container_of(ptp, struct txgbe_adapter, ptp_caps);
+ unsigned long flags;
+
+ spin_lock_irqsave(&adapter->tmreg_lock, flags);
+ timecounter_adjtime(&adapter->hw_tc, delta);
+ spin_unlock_irqrestore(&adapter->tmreg_lock, flags);
+
+ return 0;
+}
+
+/**
+ * txgbe_ptp_gettime64
+ * @ptp: the ptp clock structure
+ * @ts: timespec64 structure to hold the current time value
+ *
+ * read the timecounter and return the correct value on ns,
+ * after converting it into a struct timespec64.
+ */
+static int txgbe_ptp_gettime64(struct ptp_clock_info *ptp,
+ struct timespec64 *ts)
+{
+ struct txgbe_adapter *adapter =
+ container_of(ptp, struct txgbe_adapter, ptp_caps);
+ unsigned long flags;
+ u64 ns;
+
+ spin_lock_irqsave(&adapter->tmreg_lock, flags);
+ ns = timecounter_read(&adapter->hw_tc);
+ spin_unlock_irqrestore(&adapter->tmreg_lock, flags);
+
+ *ts = ns_to_timespec64(ns);
+
+ return 0;
+}
+
+/**
+ * txgbe_ptp_settime64
+ * @ptp: the ptp clock structure
+ * @ts: the timespec64 containing the new time for the cycle counter
+ *
+ * reset the timecounter to use a new base value instead of the kernel
+ * wall timer value.
+ */
+static int txgbe_ptp_settime64(struct ptp_clock_info *ptp,
+ const struct timespec64 *ts)
+{
+ struct txgbe_adapter *adapter =
+ container_of(ptp, struct txgbe_adapter, ptp_caps);
+ u64 ns;
+ unsigned long flags;
+
+ ns = timespec64_to_ns(ts);
+
+ /* reset the timecounter */
+ spin_lock_irqsave(&adapter->tmreg_lock, flags);
+ timecounter_init(&adapter->hw_tc, &adapter->hw_cc, ns);
+ spin_unlock_irqrestore(&adapter->tmreg_lock, flags);
+
+ return 0;
+}
+
+/**
+ * txgbe_ptp_feature_enable
+ * @ptp: the ptp clock structure
+ * @rq: the requested feature to change
+ * @on: whether to enable or disable the feature
+ *
+ * enable (or disable) ancillary features of the phc subsystem.
+ * This driver does not currently support any ancillary features, so it
+ * always returns -ENOTSUPP.
+ */
+static int txgbe_ptp_feature_enable(struct ptp_clock_info *ptp,
+ struct ptp_clock_request *rq, int on)
+{
+ return -ENOTSUPP;
+}
+
+/**
+ * txgbe_ptp_check_pps_event
+ * @adapter: the private adapter structure
+ *
+ * This function is called by the interrupt routine when checking for
+ * interrupts. It will check and handle a pps event.
+ */
+void txgbe_ptp_check_pps_event(struct txgbe_adapter *adapter)
+{
+ struct ptp_clock_event event;
+
+ event.type = PTP_CLOCK_PPS;
+
+ /* this check is necessary in case the interrupt was enabled via some
+ * alternative means (ex. debug_fs). Better to check here than
+ * everywhere that calls this function.
+ */
+ if (!adapter->ptp_clock)
+ return;
+
+ /* we don't config PPS on SDP yet, so just return.
+ * ptp_clock_event(adapter->ptp_clock, &event);
+ */
+}
+
+/**
+ * txgbe_ptp_overflow_check - watchdog task to detect SYSTIME overflow
+ * @adapter: private adapter struct
+ *
+ * this watchdog task periodically reads the timecounter
+ * so that a wrap of the system time registers is not missed. This needs
+ * to be run approximately twice a minute for the fastest
+ * overflowing hardware. We run it for all hardware since it shouldn't have a
+ * large impact.
+ */
+void txgbe_ptp_overflow_check(struct txgbe_adapter *adapter)
+{
+ bool timeout = time_is_before_jiffies(adapter->last_overflow_check +
+ TXGBE_OVERFLOW_PERIOD);
+ struct timespec64 ts;
+
+ if (timeout) {
+ txgbe_ptp_gettime64(&adapter->ptp_caps, &ts);
+ adapter->last_overflow_check = jiffies;
+ }
+}
+
+/**
+ * txgbe_ptp_rx_hang - detect error case when Rx timestamp registers latched
+ * @adapter: private network adapter structure
+ *
+ * this watchdog task is scheduled to detect error case where hardware has
+ * dropped an Rx packet that was timestamped when the ring is full. The
+ * particular error is rare but leaves the device in a state unable to timestamp
+ * any future packets.
+ */
+void txgbe_ptp_rx_hang(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ struct txgbe_ring *rx_ring;
+ u32 tsyncrxctl = rd32(hw, TXGBE_PSR_1588_CTL);
+ unsigned long rx_event;
+ int n;
+
+ /* if we don't have a valid timestamp in the registers, just update the
+ * timeout counter and exit
+ */
+ if (!(tsyncrxctl & TXGBE_PSR_1588_CTL_VALID)) {
+ adapter->last_rx_ptp_check = jiffies;
+ return;
+ }
+
+ /* determine the most recent watchdog or rx_timestamp event */
+ rx_event = adapter->last_rx_ptp_check;
+ for (n = 0; n < adapter->num_rx_queues; n++) {
+ rx_ring = adapter->rx_ring[n];
+ if (time_after(rx_ring->last_rx_timestamp, rx_event))
+ rx_event = rx_ring->last_rx_timestamp;
+ }
+
+ /* only need to read the high RXSTMP register to clear the lock */
+ if (time_is_before_jiffies(rx_event + 5*HZ)) {
+ rd32(hw, TXGBE_PSR_1588_STMPH);
+ adapter->last_rx_ptp_check = jiffies;
+
+ adapter->rx_hwtstamp_cleared++;
+		e_warn(drv, "clearing RX Timestamp hang\n");
+ }
+}
+
+/**
+ * txgbe_ptp_clear_tx_timestamp - utility function to clear Tx timestamp state
+ * @adapter: the private adapter structure
+ *
+ * This function should be called whenever the state related to a Tx timestamp
+ * needs to be cleared. This helps ensure that all related bits are reset for
+ * the next Tx timestamp event.
+ */
+static void txgbe_ptp_clear_tx_timestamp(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+
+ rd32(hw, TXGBE_TSC_1588_STMPH);
+ if (adapter->ptp_tx_skb) {
+ dev_kfree_skb_any(adapter->ptp_tx_skb);
+ adapter->ptp_tx_skb = NULL;
+ }
+ clear_bit_unlock(__TXGBE_PTP_TX_IN_PROGRESS, &adapter->state);
+}
+
+/**
+ * txgbe_ptp_tx_hwtstamp - utility function which checks for TX time stamp
+ * @adapter: the private adapter struct
+ *
+ * if the timestamp is valid, we convert it into the timecounter ns
+ * value, then store that result into the shhwtstamps structure which
+ * is passed up the network stack
+ */
+static void txgbe_ptp_tx_hwtstamp(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ struct skb_shared_hwtstamps shhwtstamps;
+ u64 regval = 0;
+
+ regval |= (u64)rd32(hw, TXGBE_TSC_1588_STMPL);
+ regval |= (u64)rd32(hw, TXGBE_TSC_1588_STMPH) << 32;
+
+ txgbe_ptp_convert_to_hwtstamp(adapter, &shhwtstamps, regval);
+ skb_tstamp_tx(adapter->ptp_tx_skb, &shhwtstamps);
+
+ txgbe_ptp_clear_tx_timestamp(adapter);
+}
+
+/**
+ * txgbe_ptp_tx_hwtstamp_work
+ * @work: pointer to the work struct
+ *
+ * This work item polls TSYNCTXCTL valid bit to determine when a Tx hardware
+ * timestamp has been taken for the current skb. It is necessary, because the
+ * descriptor's "done" bit does not correlate with the timestamp event.
+ */
+static void txgbe_ptp_tx_hwtstamp_work(struct work_struct *work)
+{
+ struct txgbe_adapter *adapter = container_of(work, struct txgbe_adapter,
+ ptp_tx_work);
+ struct txgbe_hw *hw = &adapter->hw;
+ bool timeout = time_is_before_jiffies(adapter->ptp_tx_start +
+ TXGBE_PTP_TX_TIMEOUT);
+ u32 tsynctxctl;
+
+ /* we have to have a valid skb to poll for a timestamp */
+ if (!adapter->ptp_tx_skb) {
+ txgbe_ptp_clear_tx_timestamp(adapter);
+ return;
+ }
+
+ /* stop polling once we have a valid timestamp */
+ tsynctxctl = rd32(hw, TXGBE_TSC_1588_CTL);
+ if (tsynctxctl & TXGBE_TSC_1588_CTL_VALID) {
+ txgbe_ptp_tx_hwtstamp(adapter);
+ return;
+ }
+
+ /* check timeout last in case timestamp event just occurred */
+ if (timeout) {
+ txgbe_ptp_clear_tx_timestamp(adapter);
+ adapter->tx_hwtstamp_timeouts++;
+		e_warn(drv, "clearing Tx Timestamp hang\n");
+ } else {
+ /* reschedule to keep checking until we timeout */
+ schedule_work(&adapter->ptp_tx_work);
+ }
+}
+
+/**
+ * txgbe_ptp_rx_hwtstamp - utility function which checks for RX time stamp
+ * @adapter: the private adapter structure
+ * @skb: particular skb to send timestamp with
+ *
+ * if the timestamp is valid, we convert it into the timecounter ns
+ * value, then store that result into the shhwtstamps structure which
+ * is passed up the network stack
+ */
+void txgbe_ptp_rx_hwtstamp(struct txgbe_adapter *adapter, struct sk_buff *skb)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ u64 regval = 0;
+ u32 tsyncrxctl;
+
+	/*
+	 * Check the valid bit in tsyncrxctl first, so that the timestamp
+	 * registers are only read when a timestamp is actually latched.
+	 */
+ tsyncrxctl = rd32(hw, TXGBE_PSR_1588_CTL);
+ if (!(tsyncrxctl & TXGBE_PSR_1588_CTL_VALID))
+ return;
+
+ regval |= (u64)rd32(hw, TXGBE_PSR_1588_STMPL);
+ regval |= (u64)rd32(hw, TXGBE_PSR_1588_STMPH) << 32;
+
+ txgbe_ptp_convert_to_hwtstamp(adapter, skb_hwtstamps(skb), regval);
+}
+
+/**
+ * txgbe_ptp_get_ts_config - get current hardware timestamping configuration
+ * @adapter: pointer to adapter structure
+ * @ifr: ioctl data
+ *
+ * This function returns the current timestamping settings. Rather than
+ * attempt to deconstruct registers to fill in the values, simply keep a copy
+ * of the old settings around, and return a copy when requested.
+ */
+int txgbe_ptp_get_ts_config(struct txgbe_adapter *adapter, struct ifreq *ifr)
+{
+ struct hwtstamp_config *config = &adapter->tstamp_config;
+
+ return copy_to_user(ifr->ifr_data, config,
+ sizeof(*config)) ? -EFAULT : 0;
+}
+
+/**
+ * txgbe_ptp_set_timestamp_mode - setup the hardware for the requested mode
+ * @adapter: the private txgbe adapter structure
+ * @config: the hwtstamp configuration requested
+ *
+ * Outgoing time stamping can be enabled and disabled. Play nice and
+ * disable it when requested, although it shouldn't cause any overhead
+ * when no packet needs it. At most one packet in the queue may be
+ * marked for time stamping, otherwise it would be impossible to tell
+ * for sure to which packet the hardware time stamp belongs.
+ *
+ * Incoming time stamping has to be configured via the hardware
+ * filters. Not all combinations are supported, in particular event
+ * type has to be specified. Matching the kind of event packet is
+ * not supported, with the exception of "all V2 events regardless of
+ * level 2 or 4".
+ *
+ * Since hardware always timestamps Path delay packets when timestamping V2
+ * packets, regardless of the type specified in the register, only use V2
+ * Event mode. This more accurately tells the user what the hardware is going
+ * to do anyway.
+ *
+ * Note: this may modify the hwtstamp configuration towards a more general
+ * mode, if required to support the specifically requested mode.
+ */
+static int txgbe_ptp_set_timestamp_mode(struct txgbe_adapter *adapter,
+ struct hwtstamp_config *config)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ u32 tsync_tx_ctl = TXGBE_TSC_1588_CTL_ENABLED;
+ u32 tsync_rx_ctl = TXGBE_PSR_1588_CTL_ENABLED;
+ u32 tsync_rx_mtrl = PTP_EV_PORT << 16;
+ bool is_l2 = false;
+ u32 regval;
+
+ /* reserved for future extensions */
+ if (config->flags)
+ return -EINVAL;
+
+ switch (config->tx_type) {
+ case HWTSTAMP_TX_OFF:
+ tsync_tx_ctl = 0;
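+		/* fall through */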
+ case HWTSTAMP_TX_ON:
+ break;
+ default:
+ return -ERANGE;
+ }
+
+ switch (config->rx_filter) {
+ case HWTSTAMP_FILTER_NONE:
+ tsync_rx_ctl = 0;
+ tsync_rx_mtrl = 0;
+ adapter->flags &= ~(TXGBE_FLAG_RX_HWTSTAMP_ENABLED |
+ TXGBE_FLAG_RX_HWTSTAMP_IN_REGISTER);
+ break;
+ case HWTSTAMP_FILTER_PTP_V1_L4_SYNC:
+ tsync_rx_ctl |= TXGBE_PSR_1588_CTL_TYPE_L4_V1;
+ tsync_rx_mtrl |= TXGBE_PSR_1588_MSGTYPE_V1_SYNC_MSG;
+ adapter->flags |= (TXGBE_FLAG_RX_HWTSTAMP_ENABLED |
+ TXGBE_FLAG_RX_HWTSTAMP_IN_REGISTER);
+ break;
+ case HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ:
+ tsync_rx_ctl |= TXGBE_PSR_1588_CTL_TYPE_L4_V1;
+ tsync_rx_mtrl |= TXGBE_PSR_1588_MSGTYPE_V1_DELAY_REQ_MSG;
+ adapter->flags |= (TXGBE_FLAG_RX_HWTSTAMP_ENABLED |
+ TXGBE_FLAG_RX_HWTSTAMP_IN_REGISTER);
+ break;
+ case HWTSTAMP_FILTER_PTP_V2_EVENT:
+ case HWTSTAMP_FILTER_PTP_V2_L2_EVENT:
+ case HWTSTAMP_FILTER_PTP_V2_L4_EVENT:
+ case HWTSTAMP_FILTER_PTP_V2_SYNC:
+ case HWTSTAMP_FILTER_PTP_V2_L2_SYNC:
+ case HWTSTAMP_FILTER_PTP_V2_L4_SYNC:
+ case HWTSTAMP_FILTER_PTP_V2_DELAY_REQ:
+ case HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ:
+ case HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ:
+ tsync_rx_ctl |= TXGBE_PSR_1588_CTL_TYPE_EVENT_V2;
+ is_l2 = true;
+ config->rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT;
+ adapter->flags |= (TXGBE_FLAG_RX_HWTSTAMP_ENABLED |
+ TXGBE_FLAG_RX_HWTSTAMP_IN_REGISTER);
+ break;
+ case HWTSTAMP_FILTER_PTP_V1_L4_EVENT:
+ case HWTSTAMP_FILTER_ALL:
+ default:
+ /* register RXMTRL must be set in order to do V1 packets,
+ * therefore it is not possible to time stamp both V1 Sync and
+ * Delay_Req messages unless hardware supports timestamping all
+ * packets => return error
+ */
+ adapter->flags &= ~(TXGBE_FLAG_RX_HWTSTAMP_ENABLED |
+ TXGBE_FLAG_RX_HWTSTAMP_IN_REGISTER);
+ config->rx_filter = HWTSTAMP_FILTER_NONE;
+ return -ERANGE;
+ }
+
+ /* define ethertype filter for timestamping L2 packets */
+ if (is_l2)
+ wr32(hw,
+ TXGBE_PSR_ETYPE_SWC(TXGBE_PSR_ETYPE_SWC_FILTER_1588),
+ (TXGBE_PSR_ETYPE_SWC_FILTER_EN | /* enable filter */
+ TXGBE_PSR_ETYPE_SWC_1588 | /* enable timestamping */
+ ETH_P_1588)); /* 1588 eth protocol type */
+ else
+ wr32(hw,
+ TXGBE_PSR_ETYPE_SWC(TXGBE_PSR_ETYPE_SWC_FILTER_1588),
+ 0);
+
+ /* enable/disable TX */
+ regval = rd32(hw, TXGBE_TSC_1588_CTL);
+ regval &= ~TXGBE_TSC_1588_CTL_ENABLED;
+ regval |= tsync_tx_ctl;
+ wr32(hw, TXGBE_TSC_1588_CTL, regval);
+
+ /* enable/disable RX */
+ regval = rd32(hw, TXGBE_PSR_1588_CTL);
+ regval &= ~(TXGBE_PSR_1588_CTL_ENABLED | TXGBE_PSR_1588_CTL_TYPE_MASK);
+ regval |= tsync_rx_ctl;
+ wr32(hw, TXGBE_PSR_1588_CTL, regval);
+
+ /* define which PTP packets are time stamped */
+ wr32(hw, TXGBE_PSR_1588_MSGTYPE, tsync_rx_mtrl);
+
+ TXGBE_WRITE_FLUSH(hw);
+
+ /* clear TX/RX timestamp state, just to be sure */
+ txgbe_ptp_clear_tx_timestamp(adapter);
+ rd32(hw, TXGBE_PSR_1588_STMPH);
+
+ return 0;
+}
+
+/**
+ * txgbe_ptp_set_ts_config - user entry point for timestamp mode
+ * @adapter: pointer to adapter struct
+ * @ifr: ioctl data
+ *
+ * Set hardware to requested mode. If unsupported, return an error with no
+ * changes. Otherwise, store the mode for future reference.
+ */
+int txgbe_ptp_set_ts_config(struct txgbe_adapter *adapter, struct ifreq *ifr)
+{
+ struct hwtstamp_config config;
+ int err;
+
+ if (copy_from_user(&config, ifr->ifr_data, sizeof(config)))
+ return -EFAULT;
+
+ err = txgbe_ptp_set_timestamp_mode(adapter, &config);
+ if (err)
+ return err;
+
+ /* save these settings for future reference */
+ memcpy(&adapter->tstamp_config, &config,
+ sizeof(adapter->tstamp_config));
+
+ return copy_to_user(ifr->ifr_data, &config, sizeof(config)) ?
+ -EFAULT : 0;
+}
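+
+/*
+ * Illustrative userspace sketch (not part of this driver; "eth0" and sockfd
+ * are placeholders): the mode handled above is typically requested through
+ * the standard SIOCSHWTSTAMP ioctl, e.g.:
+ *
+ *	struct hwtstamp_config cfg = {
+ *		.tx_type   = HWTSTAMP_TX_ON,
+ *		.rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT,
+ *	};
+ *	struct ifreq ifr;
+ *
+ *	strncpy(ifr.ifr_name, "eth0", IFNAMSIZ);
+ *	ifr.ifr_data = (void *)&cfg;
+ *	ioctl(sockfd, SIOCSHWTSTAMP, &ifr);
+ */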
+
+static void txgbe_ptp_link_speed_adjust(struct txgbe_adapter *adapter,
+ u32 *shift, u32 *incval)
+{
+	/*
+ * Scale the NIC cycle counter by a large factor so that
+ * relatively small corrections to the frequency can be added
+ * or subtracted. The drawbacks of a large factor include
+ * (a) the clock register overflows more quickly, (b) the cycle
+ * counter structure must be able to convert the systime value
+ * to nanoseconds using only a multiplier and a right-shift,
+ * and (c) the value must fit within the timinca register space
+ * => math based on internal DMA clock rate and available bits
+ *
+ * Note that when there is no link, internal DMA clock is same as when
+ * link speed is 10Gb. Set the registers correctly even when link is
+ * down to preserve the clock setting
+ */
+ switch (adapter->link_speed) {
+ case TXGBE_LINK_SPEED_10_FULL:
+ *shift = TXGBE_INCVAL_SHIFT_10;
+ *incval = TXGBE_INCVAL_10;
+ break;
+ case TXGBE_LINK_SPEED_100_FULL:
+ *shift = TXGBE_INCVAL_SHIFT_100;
+ *incval = TXGBE_INCVAL_100;
+ break;
+ case TXGBE_LINK_SPEED_1GB_FULL:
+ *shift = TXGBE_INCVAL_SHIFT_FPGA;
+ *incval = TXGBE_INCVAL_FPGA;
+ break;
+ case TXGBE_LINK_SPEED_10GB_FULL:
+ default: /* TXGBE_LINK_SPEED_10GB_FULL */
+ *shift = TXGBE_INCVAL_SHIFT_10GB;
+ *incval = TXGBE_INCVAL_10GB;
+ break;
+ }
+}
+
+/**
+ * txgbe_ptp_start_cyclecounter - create the cycle counter from hw
+ * @adapter: pointer to the adapter structure
+ *
+ * This function should be called to set the proper values for the TIMINCA
+ * register and tell the cyclecounter structure what the tick rate of SYSTIME
+ * is. It does not directly modify SYSTIME registers or the timecounter
+ * structure. It should be called whenever a new TIMINCA value is necessary,
+ * such as during initialization or when the link speed changes.
+ */
+void txgbe_ptp_start_cyclecounter(struct txgbe_adapter *adapter)
+{
+ struct txgbe_hw *hw = &adapter->hw;
+ unsigned long flags;
+ struct cyclecounter cc;
+ u32 incval = 0;
+
+	/* For this hardware the mask is technically incorrect.
+	 * The timestamp mask overflows at approximately 61 bits. However, the
+	 * particular hardware does not overflow on an even bitmask value.
+	 * Instead, it overflows due to the conversion of the upper 32 bits
+	 * into billions of cycles. Timecounters are not really intended for
+	 * this purpose, so they do not function properly if the overflow
+	 * point isn't 2^N-1.
+ * However, the actual SYSTIME values in question take ~138 years to
+ * overflow. In practice this means they won't actually overflow. A
+ * proper fix to this problem would require modification of the
+ * timecounter delta calculations.
+ */
+ cc.mask = CLOCKSOURCE_MASK(64);
+ cc.mult = 1;
+ cc.shift = 0;
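+	/* With mult = 1 and shift = 0 the SYSTIME value read from hardware is
+	 * already in nanoseconds, so the timecounter applies no extra scaling.
+	 */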
+
+ cc.read = txgbe_ptp_read;
+ txgbe_ptp_link_speed_adjust(adapter, &cc.shift, &incval);
+ wr32(hw, TXGBE_TSC_1588_INC,
+ TXGBE_TSC_1588_INC_IVP(incval, 2));
+
+ /* update the base incval used to calculate frequency adjustment */
+ WRITE_ONCE(adapter->base_incval, incval);
+ smp_mb();
+
+ /* need lock to prevent incorrect read while modifying cyclecounter */
+ spin_lock_irqsave(&adapter->tmreg_lock, flags);
+ memcpy(&adapter->hw_cc, &cc, sizeof(adapter->hw_cc));
+ spin_unlock_irqrestore(&adapter->tmreg_lock, flags);
+}
+
+/**
+ * txgbe_ptp_reset
+ * @adapter: the txgbe private board structure
+ *
+ * When the MAC resets, all of the hardware configuration for timesync is
+ * reset. This function should be called to re-enable the device for PTP,
+ * using the last known settings. However, we do lose the current clock time,
+ * so we fallback to resetting it based on the kernel's realtime clock.
+ *
+ * This function will maintain the hwtstamp_config settings, and it retriggers
+ * the SDP output if it's enabled.
+ */
+void txgbe_ptp_reset(struct txgbe_adapter *adapter)
+{
+ unsigned long flags;
+
+ /* reset the hardware timestamping mode */
+ txgbe_ptp_set_timestamp_mode(adapter, &adapter->tstamp_config);
+ txgbe_ptp_start_cyclecounter(adapter);
+
+ spin_lock_irqsave(&adapter->tmreg_lock, flags);
+ timecounter_init(&adapter->hw_tc, &adapter->hw_cc,
+ ktime_to_ns(ktime_get_real()));
+ spin_unlock_irqrestore(&adapter->tmreg_lock, flags);
+
+ adapter->last_overflow_check = jiffies;
+}
+
+/**
+ * txgbe_ptp_create_clock
+ * @adapter: the txgbe private adapter structure
+ *
+ * This function performs setup of the user entry point function table and
+ * initializes the PTP clock device used by userspace to access the clock-like
+ * features of the PTP core. It will be called by txgbe_ptp_init, and may
+ * re-use a previously initialized clock (such as during a suspend/resume
+ * cycle).
+ */
+static long txgbe_ptp_create_clock(struct txgbe_adapter *adapter)
+{
+ struct net_device *netdev = adapter->netdev;
+ long err;
+
+ /* do nothing if we already have a clock device */
+ if (!IS_ERR_OR_NULL(adapter->ptp_clock))
+ return 0;
+
+ snprintf(adapter->ptp_caps.name, sizeof(adapter->ptp_caps.name),
+ "%s", netdev->name);
+ adapter->ptp_caps.owner = THIS_MODULE;
+	adapter->ptp_caps.max_adj = 250000000; /* max adjustment, in ppb */
+ adapter->ptp_caps.n_alarm = 0;
+ adapter->ptp_caps.n_ext_ts = 0;
+ adapter->ptp_caps.n_per_out = 0;
+ adapter->ptp_caps.pps = 0;
+ adapter->ptp_caps.adjfreq = txgbe_ptp_adjfreq;
+ adapter->ptp_caps.adjtime = txgbe_ptp_adjtime;
+ adapter->ptp_caps.gettime64 = txgbe_ptp_gettime64;
+ adapter->ptp_caps.settime64 = txgbe_ptp_settime64;
+ adapter->ptp_caps.enable = txgbe_ptp_feature_enable;
+
+ adapter->ptp_clock = ptp_clock_register(&adapter->ptp_caps,
+ pci_dev_to_dev(adapter->pdev));
+ if (IS_ERR(adapter->ptp_clock)) {
+ err = PTR_ERR(adapter->ptp_clock);
+ adapter->ptp_clock = NULL;
+ e_dev_err("ptp_clock_register failed\n");
+ return err;
+	}
+	e_dev_info("registered PHC device on %s\n", netdev->name);
+
+ /* Set the default timestamp mode to disabled here. We do this in
+ * create_clock instead of initialization, because we don't want to
+ * override the previous settings during a suspend/resume cycle.
+ */
+ adapter->tstamp_config.rx_filter = HWTSTAMP_FILTER_NONE;
+ adapter->tstamp_config.tx_type = HWTSTAMP_TX_OFF;
+
+ return 0;
+}
+
+/**
+ * txgbe_ptp_init
+ * @adapter: the txgbe private adapter structure
+ *
+ * This function performs the required steps for enabling ptp
+ * support. If ptp support has already been loaded it simply calls the
+ * cyclecounter init routine and exits.
+ */
+void txgbe_ptp_init(struct txgbe_adapter *adapter)
+{
+ /* initialize the spin lock first, since the user might call the clock
+ * functions any time after we've initialized the ptp clock device.
+ */
+ spin_lock_init(&adapter->tmreg_lock);
+
+ /* obtain a ptp clock device, or re-use an existing device */
+ if (txgbe_ptp_create_clock(adapter))
+ return;
+
+	/* we have a clock, so we can initialize work for timestamps now */
+ INIT_WORK(&adapter->ptp_tx_work, txgbe_ptp_tx_hwtstamp_work);
+
+ /* reset the ptp related hardware bits */
+ txgbe_ptp_reset(adapter);
+
+ /* enter the TXGBE_PTP_RUNNING state */
+ set_bit(__TXGBE_PTP_RUNNING, &adapter->state);
+}
+
+/**
+ * txgbe_ptp_suspend - stop ptp work items
+ * @adapter: pointer to adapter struct
+ *
+ * This function suspends ptp activity, and prevents more work from being
+ * generated, but does not destroy the clock device.
+ */
+void txgbe_ptp_suspend(struct txgbe_adapter *adapter)
+{
+	/* leave the TXGBE_PTP_RUNNING state */
+ if (!test_and_clear_bit(__TXGBE_PTP_RUNNING, &adapter->state))
+ return;
+
+ adapter->flags2 &= ~TXGBE_FLAG2_PTP_PPS_ENABLED;
+
+ cancel_work_sync(&adapter->ptp_tx_work);
+ txgbe_ptp_clear_tx_timestamp(adapter);
+}
+
+/**
+ * txgbe_ptp_stop - destroy the ptp_clock device
+ * @adapter: pointer to adapter struct
+ *
+ * Completely destroy the ptp_clock device, and disable all PTP related
+ * features. Intended to be run when the device is being closed.
+ */
+void txgbe_ptp_stop(struct txgbe_adapter *adapter)
+{
+ /* first, suspend ptp activity */
+ txgbe_ptp_suspend(adapter);
+
+ /* now destroy the ptp clock device */
+ if (adapter->ptp_clock) {
+ ptp_clock_unregister(adapter->ptp_clock);
+ adapter->ptp_clock = NULL;
+ e_dev_info("removed PHC on %s\n",
+ adapter->netdev->name);
+ }
+}
diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe_type.h b/drivers/net/ethernet/netswift/txgbe/txgbe_type.h
new file mode 100644
index 0000000000000..2f62819a848ad
--- /dev/null
+++ b/drivers/net/ethernet/netswift/txgbe/txgbe_type.h
@@ -0,0 +1,3213 @@
+/*
+ * WangXun 10 Gigabit PCI Express Linux driver
+ * Copyright (c) 2015 - 2017 Beijing WangXun Technology Co., Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * based on ixgbe_type.h, Copyright(c) 1999 - 2017 Intel Corporation.
+ * Contact Information:
+ * Linux NICS <linux.nics(a)intel.com>
+ * e1000-devel Mailing List <e1000-devel(a)lists.sourceforge.net>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ */
+
+
+#ifndef _TXGBE_TYPE_H_
+#define _TXGBE_TYPE_H_
+
+#include <linux/types.h>
+#include <linux/mdio.h>
+#include <linux/netdevice.h>
+
+/*
+ * The following is a brief description of the error categories used by the
+ * ERROR_REPORT* macros.
+ *
+ * - TXGBE_ERROR_INVALID_STATE
+ * This category is for errors which represent a serious failure state that is
+ * unexpected, and could be potentially harmful to device operation. It should
+ * not be used for errors relating to issues that can be worked around or
+ * ignored.
+ *
+ * - TXGBE_ERROR_POLLING
+ * This category is for errors related to polling/timeout issues and should be
+ * used in any case where a timeout occurred, a lock could not be obtained, or
+ * data was not received within the time limit.
+ *
+ * - TXGBE_ERROR_CAUTION
+ * This category should be used for reporting issues that may be the cause of
+ * other errors, such as temperature warnings. It should indicate an event which
+ * could be serious, but hasn't necessarily caused problems yet.
+ *
+ * - TXGBE_ERROR_SOFTWARE
+ * This category is intended for errors due to software state preventing
+ * something. The category is not intended for errors due to bad arguments, or
+ * due to unsupported features. It should be used when a state occurs which
+ * prevents action but is not a serious issue.
+ *
+ * - TXGBE_ERROR_ARGUMENT
+ * This category is for when a bad or invalid argument is passed. It should be
+ * used whenever a function is called and error checking has detected the
+ * argument is wrong or incorrect.
+ *
+ * - TXGBE_ERROR_UNSUPPORTED
+ * This category is for errors which are due to unsupported circumstances or
+ * configuration issues. It should not be used when the issue is due to an
+ * invalid argument, but for when something has occurred that is unsupported
+ * (Ex: Flow control autonegotiation or an unsupported SFP+ module.)
+ */
+
+#include "txgbe_mtd.h"
+
+/* Little Endian defines */
+#ifndef __le16
+#define __le16 u16
+#endif
+#ifndef __le32
+#define __le32 u32
+#endif
+#ifndef __le64
+#define __le64 u64
+
+#endif
+#ifndef __be16
+/* Big Endian defines */
+#define __be16 u16
+#define __be32 u32
+#define __be64 u64
+
+#endif
+
+/************ txgbe_register.h ************/
+/* Vendor ID */
+#ifndef PCI_VENDOR_ID_TRUSTNETIC
+#define PCI_VENDOR_ID_TRUSTNETIC 0x8088
+#endif
+
+/* Device IDs */
+#define TXGBE_DEV_ID_SP1000 0x1001
+#define TXGBE_DEV_ID_WX1820 0x2001
+
+/* Subsystem IDs */
+/* SFP */
+#define TXGBE_ID_SP1000_SFP 0x0000
+#define TXGBE_ID_WX1820_SFP 0x2000
+#define TXGBE_ID_SFP 0x00
+
+/* copper */
+#define TXGBE_ID_SP1000_XAUI 0x1010
+#define TXGBE_ID_WX1820_XAUI 0x2010
+#define TXGBE_ID_XAUI 0x10
+#define TXGBE_ID_SP1000_SGMII 0x1020
+#define TXGBE_ID_WX1820_SGMII 0x2020
+#define TXGBE_ID_SGMII 0x20
+/* backplane */
+#define TXGBE_ID_SP1000_KR_KX_KX4 0x1030
+#define TXGBE_ID_WX1820_KR_KX_KX4 0x2030
+#define TXGBE_ID_KR_KX_KX4 0x30
+/* MAC Interface */
+#define TXGBE_ID_SP1000_MAC_XAUI 0x1040
+#define TXGBE_ID_WX1820_MAC_XAUI 0x2040
+#define TXGBE_ID_MAC_XAUI 0x40
+#define TXGBE_ID_SP1000_MAC_SGMII 0x1060
+#define TXGBE_ID_WX1820_MAC_SGMII 0x2060
+#define TXGBE_ID_MAC_SGMII 0x60
+
+#define TXGBE_NCSI_SUP 0x8000
+#define TXGBE_NCSI_MASK 0x8000
+#define TXGBE_WOL_SUP 0x4000
+#define TXGBE_WOL_MASK 0x4000
+
+
+/* Combined interface */
+#define TXGBE_ID_SFI_XAUI 0x50
+
+/* Revision ID */
+#define TXGBE_SP_MPW 1
+
+/* MDIO Manageable Devices (MMDs). */
+#define TXGBE_MDIO_PMA_PMD_DEV_TYPE 0x1 /* PMA and PMD */
+#define TXGBE_MDIO_PCS_DEV_TYPE 0x3 /* Physical Coding Sublayer*/
+#define TXGBE_MDIO_PHY_XS_DEV_TYPE 0x4 /* PHY Extender Sublayer */
+#define TXGBE_MDIO_AUTO_NEG_DEV_TYPE 0x7 /* Auto-Negotiation */
+#define TXGBE_MDIO_VENDOR_SPECIFIC_1_DEV_TYPE 0x1E /* Vendor specific 1 */
+
+/* phy register definitions */
+/* VENDOR_SPECIFIC_1_DEV regs */
+#define TXGBE_MDIO_VENDOR_SPECIFIC_1_STATUS 0x1 /* VS1 Status Reg */
+#define TXGBE_MDIO_VENDOR_SPECIFIC_1_LINK_STATUS 0x0008 /* 1 = Link Up */
+#define TXGBE_MDIO_VENDOR_SPECIFIC_1_SPEED_STATUS 0x0010 /* 0-10G, 1-1G */
+
+/* AUTO_NEG_DEV regs */
+#define TXGBE_MDIO_AUTO_NEG_CONTROL 0x0 /* AUTO_NEG Control Reg */
+#define TXGBE_MDIO_AUTO_NEG_ADVT 0x10 /* AUTO_NEG Advt Reg */
+#define TXGBE_MDIO_AUTO_NEG_LP 0x13 /* AUTO_NEG LP Reg */
+#define TXGBE_MDIO_AUTO_NEG_LP_STATUS 0xE820 /* AUTO_NEG RX LP Status Reg */
+#define TXGBE_MII_10GBASE_T_AUTONEG_CTRL_REG 0x20 /* 10G Control Reg */
+#define TXGBE_MII_AUTONEG_VENDOR_PROVISION_1_REG 0xC400 /* 1G Provisioning 1 */
+#define TXGBE_MII_AUTONEG_XNP_TX_REG 0x17 /* 1G XNP Transmit */
+#define TXGBE_MII_AUTONEG_ADVERTISE_REG 0x10 /* 100M Advertisement */
+
+
+#define TXGBE_MDIO_AUTO_NEG_10GBASE_EEE_ADVT 0x8
+#define TXGBE_MDIO_AUTO_NEG_1000BASE_EEE_ADVT 0x4
+#define TXGBE_MDIO_AUTO_NEG_100BASE_EEE_ADVT 0x2
+#define TXGBE_MDIO_AUTO_NEG_LP_1000BASE_CAP 0x8000
+#define TXGBE_MDIO_AUTO_NEG_LP_10GBASE_CAP 0x0800
+#define TXGBE_MDIO_AUTO_NEG_10GBASET_STAT 0x0021
+
+#define TXGBE_MII_10GBASE_T_ADVERTISE 0x1000 /* full duplex, bit:12*/
+#define TXGBE_MII_1GBASE_T_ADVERTISE_XNP_TX 0x4000 /* full duplex, bit:14*/
+#define TXGBE_MII_1GBASE_T_ADVERTISE 0x8000 /* full duplex, bit:15*/
+#define TXGBE_MII_100BASE_T_ADVERTISE 0x0100 /* full duplex, bit:8 */
+#define TXGBE_MII_100BASE_T_ADVERTISE_HALF 0x0080 /* half duplex, bit:7 */
+#define TXGBE_MII_RESTART 0x200
+#define TXGBE_MII_AUTONEG_COMPLETE 0x20
+#define TXGBE_MII_AUTONEG_LINK_UP 0x04
+#define TXGBE_MII_AUTONEG_REG 0x0
+
+/* PHY_XS_DEV regs */
+#define TXGBE_MDIO_PHY_XS_CONTROL 0x0 /* PHY_XS Control Reg */
+#define TXGBE_MDIO_PHY_XS_RESET 0x8000 /* PHY_XS Reset */
+
+/* Media-dependent registers. */
+#define TXGBE_MDIO_PHY_ID_HIGH 0x2 /* PHY ID High Reg*/
+#define TXGBE_MDIO_PHY_ID_LOW 0x3 /* PHY ID Low Reg*/
+#define TXGBE_MDIO_PHY_SPEED_ABILITY 0x4 /* Speed Ability Reg */
+#define TXGBE_MDIO_PHY_EXT_ABILITY 0xB /* Ext Ability Reg */
+
+#define TXGBE_MDIO_PHY_SPEED_10G 0x0001 /* 10G capable */
+#define TXGBE_MDIO_PHY_SPEED_1G 0x0010 /* 1G capable */
+#define TXGBE_MDIO_PHY_SPEED_100M 0x0020 /* 100M capable */
+#define TXGBE_MDIO_PHY_SPEED_10M 0x0040 /* 10M capable */
+
+#define TXGBE_MDIO_PHY_10GBASET_ABILITY 0x0004 /* 10GBaseT capable */
+#define TXGBE_MDIO_PHY_1000BASET_ABILITY 0x0020 /* 1000BaseT capable */
+#define TXGBE_MDIO_PHY_100BASETX_ABILITY 0x0080 /* 100BaseTX capable */
+
+#define TXGBE_PHY_REVISION_MASK 0xFFFFFFF0U
+#define TXGBE_MAX_PHY_ADDR 32
+
+/* PHY IDs*/
+#define TN1010_PHY_ID 0x00A19410U
+#define QT2022_PHY_ID 0x0043A400U
+#define ATH_PHY_ID 0x03429050U
+/* PHY FW revision */
+#define TNX_FW_REV 0xB
+#define AQ_FW_REV 0x20
+
+/* ETH PHY Registers */
+#define TXGBE_SR_XS_PCS_MMD_STATUS1 0x30001
+#define TXGBE_SR_PCS_CTL2 0x30007
+#define TXGBE_SR_PMA_MMD_CTL1 0x10000
+#define TXGBE_SR_MII_MMD_CTL 0x1F0000
+#define TXGBE_SR_MII_MMD_DIGI_CTL 0x1F8000
+#define TXGBE_SR_MII_MMD_AN_CTL 0x1F8001
+#define TXGBE_SR_MII_MMD_AN_ADV 0x1F0004
+#define TXGBE_SR_MII_MMD_AN_ADV_PAUSE(_v) ((0x3 & (_v)) << 7)
+#define TXGBE_SR_MII_MMD_AN_ADV_PAUSE_ASM 0x80
+#define TXGBE_SR_MII_MMD_AN_ADV_PAUSE_SYM 0x100
+#define TXGBE_SR_MII_MMD_LP_BABL 0x1F0005
+#define TXGBE_SR_AN_MMD_CTL 0x70000
+#define TXGBE_SR_AN_MMD_ADV_REG1 0x70010
+#define TXGBE_SR_AN_MMD_ADV_REG1_PAUSE(_v) ((0x3 & (_v)) << 10)
+#define TXGBE_SR_AN_MMD_ADV_REG1_PAUSE_SYM 0x400
+#define TXGBE_SR_AN_MMD_ADV_REG1_PAUSE_ASM 0x800
+#define TXGBE_SR_AN_MMD_ADV_REG2 0x70011
+#define TXGBE_SR_AN_MMD_LP_ABL1 0x70013
+#define TXGBE_VR_AN_KR_MODE_CL 0x78003
+#define TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1 0x38000
+#define TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS 0x38010
+#define TXGBE_PHY_MPLLA_CTL0 0x18071
+#define TXGBE_PHY_MPLLA_CTL3 0x18077
+#define TXGBE_PHY_MISC_CTL0 0x18090
+#define TXGBE_PHY_VCO_CAL_LD0 0x18092
+#define TXGBE_PHY_VCO_CAL_LD1 0x18093
+#define TXGBE_PHY_VCO_CAL_LD2 0x18094
+#define TXGBE_PHY_VCO_CAL_LD3 0x18095
+#define TXGBE_PHY_VCO_CAL_REF0 0x18096
+#define TXGBE_PHY_VCO_CAL_REF1 0x18097
+#define TXGBE_PHY_RX_AD_ACK 0x18098
+#define TXGBE_PHY_AFE_DFE_ENABLE 0x1805D
+#define TXGBE_PHY_DFE_TAP_CTL0 0x1805E
+#define TXGBE_PHY_RX_EQ_ATT_LVL0 0x18057
+#define TXGBE_PHY_RX_EQ_CTL0 0x18058
+#define TXGBE_PHY_RX_EQ_CTL 0x1805C
+#define TXGBE_PHY_TX_EQ_CTL0 0x18036
+#define TXGBE_PHY_TX_EQ_CTL1 0x18037
+#define TXGBE_PHY_TX_RATE_CTL 0x18034
+#define TXGBE_PHY_RX_RATE_CTL 0x18054
+#define TXGBE_PHY_TX_GEN_CTL2 0x18032
+#define TXGBE_PHY_RX_GEN_CTL2 0x18052
+#define TXGBE_PHY_RX_GEN_CTL3 0x18053
+#define TXGBE_PHY_MPLLA_CTL2 0x18073
+#define TXGBE_PHY_RX_POWER_ST_CTL 0x18055
+#define TXGBE_PHY_TX_POWER_ST_CTL 0x18035
+#define TXGBE_PHY_TX_GENCTRL1 0x18031
+
+#define TXGBE_SR_PCS_CTL2_PCS_TYPE_SEL_R 0x0
+#define TXGBE_SR_PCS_CTL2_PCS_TYPE_SEL_X 0x1
+#define TXGBE_SR_PCS_CTL2_PCS_TYPE_SEL_MASK 0x3
+#define TXGBE_SR_PMA_MMD_CTL1_SPEED_SEL_1G 0x0
+#define TXGBE_SR_PMA_MMD_CTL1_SPEED_SEL_10G 0x2000
+#define TXGBE_SR_PMA_MMD_CTL1_SPEED_SEL_MASK 0x2000
+#define TXGBE_SR_PMA_MMD_CTL1_LB_EN 0x1
+#define TXGBE_SR_MII_MMD_CTL_AN_EN 0x1000
+#define TXGBE_SR_MII_MMD_CTL_RESTART_AN 0x0200
+#define TXGBE_SR_AN_MMD_CTL_RESTART_AN 0x0200
+#define TXGBE_SR_AN_MMD_CTL_ENABLE 0x1000
+#define TXGBE_SR_AN_MMD_ADV_REG2_BP_TYPE_KX4 0x40
+#define TXGBE_SR_AN_MMD_ADV_REG2_BP_TYPE_KX 0x20
+#define TXGBE_SR_AN_MMD_ADV_REG2_BP_TYPE_KR 0x80
+#define TXGBE_SR_AN_MMD_ADV_REG2_BP_TYPE_MASK 0xFFFF
+#define TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1_ENABLE 0x1000
+#define TXGBE_VR_XS_OR_PCS_MMD_DIGI_CTL1_VR_RST 0x8000
+#define TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_MASK 0x1C
+#define TXGBE_VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_POWER_GOOD 0x10
+
+#define TXGBE_PHY_MPLLA_CTL0_MULTIPLIER_1GBASEX_KX 32
+#define TXGBE_PHY_MPLLA_CTL0_MULTIPLIER_10GBASER_KR 33
+#define TXGBE_PHY_MPLLA_CTL0_MULTIPLIER_OTHER 40
+#define TXGBE_PHY_MPLLA_CTL0_MULTIPLIER_MASK 0xFF
+#define TXGBE_PHY_MPLLA_CTL3_MULTIPLIER_BW_1GBASEX_KX 0x56
+#define TXGBE_PHY_MPLLA_CTL3_MULTIPLIER_BW_10GBASER_KR 0x7B
+#define TXGBE_PHY_MPLLA_CTL3_MULTIPLIER_BW_OTHER 0x56
+#define TXGBE_PHY_MPLLA_CTL3_MULTIPLIER_BW_MASK 0x7FF
+#define TXGBE_PHY_MISC_CTL0_TX2RX_LB_EN_0 0x1
+#define TXGBE_PHY_MISC_CTL0_TX2RX_LB_EN_3_1 0xE
+#define TXGBE_PHY_MISC_CTL0_RX_VREF_CTRL 0x1F00
+#define TXGBE_PHY_VCO_CAL_LD0_1GBASEX_KX 1344
+#define TXGBE_PHY_VCO_CAL_LD0_10GBASER_KR 1353
+#define TXGBE_PHY_VCO_CAL_LD0_OTHER 1360
+#define TXGBE_PHY_VCO_CAL_LD0_MASK 0x1000
+#define TXGBE_PHY_VCO_CAL_REF0_LD0_1GBASEX_KX 42
+#define TXGBE_PHY_VCO_CAL_REF0_LD0_10GBASER_KR 41
+#define TXGBE_PHY_VCO_CAL_REF0_LD0_OTHER 34
+#define TXGBE_PHY_VCO_CAL_REF0_LD0_MASK 0x3F
+#define TXGBE_PHY_AFE_DFE_ENABLE_DFE_EN0 0x10
+#define TXGBE_PHY_AFE_DFE_ENABLE_AFE_EN0 0x1
+#define TXGBE_PHY_AFE_DFE_ENABLE_MASK 0xFF
+#define TXGBE_PHY_RX_EQ_CTL_CONT_ADAPT0 0x1
+#define TXGBE_PHY_RX_EQ_CTL_CONT_ADAPT_MASK 0xF
+#define TXGBE_PHY_TX_RATE_CTL_TX0_RATE_10GBASER_KR 0x0
+#define TXGBE_PHY_TX_RATE_CTL_TX0_RATE_RXAUI 0x1
+#define TXGBE_PHY_TX_RATE_CTL_TX0_RATE_1GBASEX_KX 0x3
+#define TXGBE_PHY_TX_RATE_CTL_TX0_RATE_OTHER 0x2
+#define TXGBE_PHY_TX_RATE_CTL_TX1_RATE_OTHER 0x20
+#define TXGBE_PHY_TX_RATE_CTL_TX2_RATE_OTHER 0x200
+#define TXGBE_PHY_TX_RATE_CTL_TX3_RATE_OTHER 0x2000
+#define TXGBE_PHY_TX_RATE_CTL_TX0_RATE_MASK 0x7
+#define TXGBE_PHY_TX_RATE_CTL_TX1_RATE_MASK 0x70
+#define TXGBE_PHY_TX_RATE_CTL_TX2_RATE_MASK 0x700
+#define TXGBE_PHY_TX_RATE_CTL_TX3_RATE_MASK 0x7000
+#define TXGBE_PHY_RX_RATE_CTL_RX0_RATE_10GBASER_KR 0x0
+#define TXGBE_PHY_RX_RATE_CTL_RX0_RATE_RXAUI 0x1
+#define TXGBE_PHY_RX_RATE_CTL_RX0_RATE_1GBASEX_KX 0x3
+#define TXGBE_PHY_RX_RATE_CTL_RX0_RATE_OTHER 0x2
+#define TXGBE_PHY_RX_RATE_CTL_RX1_RATE_OTHER 0x20
+#define TXGBE_PHY_RX_RATE_CTL_RX2_RATE_OTHER 0x200
+#define TXGBE_PHY_RX_RATE_CTL_RX3_RATE_OTHER 0x2000
+#define TXGBE_PHY_RX_RATE_CTL_RX0_RATE_MASK 0x7
+#define TXGBE_PHY_RX_RATE_CTL_RX1_RATE_MASK 0x70
+#define TXGBE_PHY_RX_RATE_CTL_RX2_RATE_MASK 0x700
+#define TXGBE_PHY_RX_RATE_CTL_RX3_RATE_MASK 0x7000
+#define TXGBE_PHY_TX_GEN_CTL2_TX0_WIDTH_10GBASER_KR 0x200
+#define TXGBE_PHY_TX_GEN_CTL2_TX0_WIDTH_10GBASER_KR_RXAUI 0x300
+#define TXGBE_PHY_TX_GEN_CTL2_TX0_WIDTH_OTHER 0x100
+#define TXGBE_PHY_TX_GEN_CTL2_TX0_WIDTH_MASK 0x300
+#define TXGBE_PHY_TX_GEN_CTL2_TX1_WIDTH_OTHER 0x400
+#define TXGBE_PHY_TX_GEN_CTL2_TX1_WIDTH_MASK 0xC00
+#define TXGBE_PHY_TX_GEN_CTL2_TX2_WIDTH_OTHER 0x1000
+#define TXGBE_PHY_TX_GEN_CTL2_TX2_WIDTH_MASK 0x3000
+#define TXGBE_PHY_TX_GEN_CTL2_TX3_WIDTH_OTHER 0x4000
+#define TXGBE_PHY_TX_GEN_CTL2_TX3_WIDTH_MASK 0xC000
+#define TXGBE_PHY_RX_GEN_CTL2_RX0_WIDTH_10GBASER_KR 0x200
+#define TXGBE_PHY_RX_GEN_CTL2_RX0_WIDTH_10GBASER_KR_RXAUI 0x300
+#define TXGBE_PHY_RX_GEN_CTL2_RX0_WIDTH_OTHER 0x100
+#define TXGBE_PHY_RX_GEN_CTL2_RX0_WIDTH_MASK 0x300
+#define TXGBE_PHY_RX_GEN_CTL2_RX1_WIDTH_OTHER 0x400
+#define TXGBE_PHY_RX_GEN_CTL2_RX1_WIDTH_MASK 0xC00
+#define TXGBE_PHY_RX_GEN_CTL2_RX2_WIDTH_OTHER 0x1000
+#define TXGBE_PHY_RX_GEN_CTL2_RX2_WIDTH_MASK 0x3000
+#define TXGBE_PHY_RX_GEN_CTL2_RX3_WIDTH_OTHER 0x4000
+#define TXGBE_PHY_RX_GEN_CTL2_RX3_WIDTH_MASK 0xC000
+
+#define TXGBE_PHY_MPLLA_CTL2_DIV_CLK_EN_8 0x100
+#define TXGBE_PHY_MPLLA_CTL2_DIV_CLK_EN_10 0x200
+#define TXGBE_PHY_MPLLA_CTL2_DIV_CLK_EN_16P5 0x400
+#define TXGBE_PHY_MPLLA_CTL2_DIV_CLK_EN_MASK 0x700
+
+#define TXGBE_XPCS_POWER_GOOD_MAX_POLLING_TIME 100
+#define TXGBE_PHY_INIT_DONE_POLLING_TIME 100
+
+/**************** Global Registers ****************************/
+/* chip control Registers */
+#define TXGBE_MIS_RST 0x1000C
+#define TXGBE_MIS_PWR 0x10000
+#define TXGBE_MIS_CTL 0x10004
+#define TXGBE_MIS_PF_SM 0x10008
+#define TXGBE_MIS_ST 0x10028
+#define TXGBE_MIS_SWSM 0x1002C
+#define TXGBE_MIS_RST_ST 0x10030
+
+#define TXGBE_MIS_RST_SW_RST 0x00000001U
+#define TXGBE_MIS_RST_LAN0_RST 0x00000002U
+#define TXGBE_MIS_RST_LAN1_RST 0x00000004U
+#define TXGBE_MIS_RST_LAN0_CHG_ETH_MODE 0x20000000U
+#define TXGBE_MIS_RST_LAN1_CHG_ETH_MODE 0x40000000U
+#define TXGBE_MIS_RST_GLOBAL_RST 0x80000000U
+#define TXGBE_MIS_RST_MASK (TXGBE_MIS_RST_SW_RST | \
+ TXGBE_MIS_RST_LAN0_RST | \
+ TXGBE_MIS_RST_LAN1_RST)
+#define TXGBE_MIS_PWR_LAN_ID(_r) ((0xC0000000U & (_r)) >> 30)
+#define TXGBE_MIS_PWR_LAN_ID_0 (1)
+#define TXGBE_MIS_PWR_LAN_ID_1 (2)
+#define TXGBE_MIS_PWR_LAN_ID_A (3)
+#define TXGBE_MIS_ST_MNG_INIT_DN 0x00000001U
+#define TXGBE_MIS_ST_MNG_VETO 0x00000100U
+#define TXGBE_MIS_ST_LAN0_ECC 0x00010000U
+#define TXGBE_MIS_ST_LAN1_ECC 0x00020000U
+#define TXGBE_MIS_ST_MNG_ECC 0x00040000U
+#define TXGBE_MIS_ST_PCORE_ECC 0x00080000U
+#define TXGBE_MIS_ST_PCIWRP_ECC 0x00100000U
+#define TXGBE_MIS_SWSM_SMBI 1
+#define TXGBE_MIS_RST_ST_DEV_RST_ST_DONE 0x00000000U
+#define TXGBE_MIS_RST_ST_DEV_RST_ST_REQ 0x00080000U
+#define TXGBE_MIS_RST_ST_DEV_RST_ST_INPROGRESS 0x00100000U
+#define TXGBE_MIS_RST_ST_DEV_RST_ST_MASK 0x00180000U
+#define TXGBE_MIS_RST_ST_DEV_RST_TYPE_MASK 0x00070000U
+#define TXGBE_MIS_RST_ST_DEV_RST_TYPE_SHIFT 16
+#define TXGBE_MIS_RST_ST_DEV_RST_TYPE_SW_RST 0x3
+#define TXGBE_MIS_RST_ST_DEV_RST_TYPE_GLOBAL_RST 0x5
+#define TXGBE_MIS_RST_ST_RST_INIT 0x0000FF00U
+#define TXGBE_MIS_RST_ST_RST_INI_SHIFT 8
+#define TXGBE_MIS_RST_ST_RST_TIM 0x000000FFU
+#define TXGBE_MIS_PF_SM_SM 1
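
For context, a minimal sketch of how the chip-control definitions above might be
used; rd32()/wr32() and struct txgbe_hw are assumed helper names for
illustration, not taken from this patch:

static void txgbe_example_lan_reset(struct txgbe_hw *hw)
{
	u32 pwr = rd32(hw, TXGBE_MIS_PWR);
	u32 rst = rd32(hw, TXGBE_MIS_RST);

	/* bits 31:30 of MIS_PWR identify the LAN port */
	if (TXGBE_MIS_PWR_LAN_ID(pwr) == TXGBE_MIS_PWR_LAN_ID_0)
		rst |= TXGBE_MIS_RST_LAN0_RST;
	else
		rst |= TXGBE_MIS_RST_LAN1_RST;

	wr32(hw, TXGBE_MIS_RST, rst);
}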
+
+/* Sensors for PVT(Process Voltage Temperature) */
+#define TXGBE_TS_CTL 0x10300
+#define TXGBE_TS_EN 0x10304
+#define TXGBE_TS_ST 0x10308
+#define TXGBE_TS_ALARM_THRE 0x1030C
+#define TXGBE_TS_DALARM_THRE 0x10310
+#define TXGBE_TS_INT_EN 0x10314
+#define TXGBE_TS_ALARM_ST 0x10318
+#define TXGBE_TS_ALARM_ST_DALARM 0x00000002U
+#define TXGBE_TS_ALARM_ST_ALARM 0x00000001U
+
+#define TXGBE_TS_CTL_EVAL_MD 0x80000000U
+#define TXGBE_TS_EN_ENA 0x00000001U
+#define TXGBE_TS_ST_DATA_OUT_MASK 0x000003FFU
+#define TXGBE_TS_ALARM_THRE_MASK 0x000003FFU
+#define TXGBE_TS_DALARM_THRE_MASK 0x000003FFU
+#define TXGBE_TS_INT_EN_DALARM_INT_EN 0x00000002U
+#define TXGBE_TS_INT_EN_ALARM_INT_EN 0x00000001U
+
+struct txgbe_thermal_diode_data {
+ s16 temp;
+ s16 alarm_thresh;
+ s16 dalarm_thresh;
+};
+
+struct txgbe_thermal_sensor_data {
+ struct txgbe_thermal_diode_data sensor;
+};
+
+
+/* FMGR Registers */
+#define TXGBE_SPI_ILDR_STATUS 0x10120
+#define TXGBE_SPI_ILDR_STATUS_PERST 0x00000001U /* PCIE_PERST is done */
+#define TXGBE_SPI_ILDR_STATUS_PWRRST 0x00000002U /* Power on reset is done */
+#define TXGBE_SPI_ILDR_STATUS_SW_RESET 0x00000080U /* software reset is done */
+#define TXGBE_SPI_ILDR_STATUS_LAN0_SW_RST 0x00000200U /* lan0 soft reset done */
+#define TXGBE_SPI_ILDR_STATUS_LAN1_SW_RST 0x00000400U /* lan1 soft reset done */
+
+#define TXGBE_MAX_FLASH_LOAD_POLL_TIME 10
+
+#define TXGBE_SPI_CMD 0x10104
+#define TXGBE_SPI_CMD_CMD(_v) (((_v) & 0x7) << 28)
+#define TXGBE_SPI_CMD_CLK(_v) (((_v) & 0x7) << 25)
+#define TXGBE_SPI_CMD_ADDR(_v) (((_v) & 0xFFFFFF))
+#define TXGBE_SPI_DATA 0x10108
+#define TXGBE_SPI_DATA_BYPASS ((0x1) << 31)
+#define TXGBE_SPI_DATA_STATUS(_v) (((_v) & 0xFF) << 16)
+#define TXGBE_SPI_DATA_OP_DONE ((0x1))
+
+#define TXGBE_SPI_STATUS 0x1010C
+#define TXGBE_SPI_STATUS_OPDONE ((0x1))
+#define TXGBE_SPI_STATUS_FLASH_BYPASS ((0x1) << 31)
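
A hedged sketch of driving the SPI command interface defined above; the opcode
and clock-divider values are placeholders, and rd32()/wr32() are assumed MMIO
accessors:

static int txgbe_example_flash_read(struct txgbe_hw *hw, u32 addr, u32 *data)
{
	int i;

	/* opcode 0x1 and clock divider 0x3 are illustrative only */
	wr32(hw, TXGBE_SPI_CMD, TXGBE_SPI_CMD_CMD(0x1) |
	     TXGBE_SPI_CMD_CLK(0x3) | TXGBE_SPI_CMD_ADDR(addr));

	for (i = 0; i < TXGBE_MAX_FLASH_LOAD_POLL_TIME; i++) {
		if (rd32(hw, TXGBE_SPI_STATUS) & TXGBE_SPI_STATUS_OPDONE)
			break;
		msleep(1);
	}
	if (i == TXGBE_MAX_FLASH_LOAD_POLL_TIME)
		return -ETIMEDOUT;

	*data = rd32(hw, TXGBE_SPI_DATA);
	return 0;
}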
+
+#define TXGBE_SPI_USR_CMD 0x10110
+#define TXGBE_SPI_CMDCFG0 0x10114
+#define TXGBE_SPI_CMDCFG1 0x10118
+#define TXGBE_SPI_ECC_CTL 0x10130
+#define TXGBE_SPI_ECC_INJ 0x10134
+#define TXGBE_SPI_ECC_ST 0x10138
+#define TXGBE_SPI_ILDR_SWPTR 0x10124
+
+/************************* Port Registers ************************************/
+/* I2C registers */
+#define TXGBE_I2C_CON 0x14900 /* I2C Control */
+#define TXGBE_I2C_CON_SLAVE_DISABLE ((1 << 6))
+#define TXGBE_I2C_CON_RESTART_EN ((1 << 5))
+#define TXGBE_I2C_CON_10BITADDR_MASTER ((1 << 4))
+#define TXGBE_I2C_CON_10BITADDR_SLAVE ((1 << 3))
+#define TXGBE_I2C_CON_SPEED(_v) (((_v) & 0x3) << 1)
+#define TXGBE_I2C_CON_MASTER_MODE ((1 << 0))
+#define TXGBE_I2C_TAR 0x14904 /* I2C Target Address */
+#define TXGBE_I2C_DATA_CMD 0x14910 /* I2C Rx/Tx Data Buf and Cmd */
+#define TXGBE_I2C_DATA_CMD_STOP ((1 << 9))
+#define TXGBE_I2C_DATA_CMD_READ ((1 << 8) | TXGBE_I2C_DATA_CMD_STOP)
+#define TXGBE_I2C_DATA_CMD_WRITE ((0 << 8) | TXGBE_I2C_DATA_CMD_STOP)
+#define TXGBE_I2C_SS_SCL_HCNT 0x14914 /* Standard speed I2C Clock SCL
+ * High Count */
+#define TXGBE_I2C_SS_SCL_LCNT 0x14918 /* Standard speed I2C Clock SCL
+ * Low Count */
+#define TXGBE_I2C_FS_SCL_HCNT 0x1491C /* Fast Mode and Fast Mode Plus
+ * I2C Clock SCL High Count */
+#define TXGBE_I2C_FS_SCL_LCNT 0x14920 /* Fast Mode and Fast Mode Plus
+ * I2C Clock SCL Low Count */
+#define TXGBE_I2C_HS_SCL_HCNT 0x14924 /* High speed I2C Clock SCL
+ * High Count */
+#define TXGBE_I2C_HS_SCL_LCNT 0x14928 /* High speed I2C Clock SCL Low
+ * Count */
+#define TXGBE_I2C_INTR_STAT 0x1492C /* I2C Interrupt Status */
+#define TXGBE_I2C_RAW_INTR_STAT 0x14934 /* I2C Raw Interrupt Status */
+#define TXGBE_I2C_INTR_STAT_RX_FULL ((0x1) << 2)
+#define TXGBE_I2C_INTR_STAT_TX_EMPTY ((0x1) << 4)
+#define TXGBE_I2C_INTR_MASK 0x14930 /* I2C Interrupt Mask */
+#define TXGBE_I2C_RX_TL 0x14938 /* I2C Receive FIFO Threshold */
+#define TXGBE_I2C_TX_TL 0x1493C /* I2C TX FIFO Threshold */
+#define TXGBE_I2C_CLR_INTR 0x14940 /* Clear Combined and Individual
+ * Int */
+#define TXGBE_I2C_CLR_RX_UNDER 0x14944 /* Clear RX_UNDER Interrupt */
+#define TXGBE_I2C_CLR_RX_OVER 0x14948 /* Clear RX_OVER Interrupt */
+#define TXGBE_I2C_CLR_TX_OVER 0x1494C /* Clear TX_OVER Interrupt */
+#define TXGBE_I2C_CLR_RD_REQ 0x14950 /* Clear RD_REQ Interrupt */
+#define TXGBE_I2C_CLR_TX_ABRT 0x14954 /* Clear TX_ABRT Interrupt */
+#define TXGBE_I2C_CLR_RX_DONE 0x14958 /* Clear RX_DONE Interrupt */
+#define TXGBE_I2C_CLR_ACTIVITY 0x1495C /* Clear ACTIVITY Interrupt */
+#define TXGBE_I2C_CLR_STOP_DET 0x14960 /* Clear STOP_DET Interrupt */
+#define TXGBE_I2C_CLR_START_DET 0x14964 /* Clear START_DET Interrupt */
+#define TXGBE_I2C_CLR_GEN_CALL 0x14968 /* Clear GEN_CALL Interrupt */
+#define TXGBE_I2C_ENABLE 0x1496C /* I2C Enable */
+#define TXGBE_I2C_STATUS 0x14970 /* I2C Status register */
+#define TXGBE_I2C_STATUS_MST_ACTIVITY ((1U << 5))
+#define TXGBE_I2C_TXFLR 0x14974 /* Transmit FIFO Level Reg */
+#define TXGBE_I2C_RXFLR 0x14978 /* Receive FIFO Level Reg */
+#define TXGBE_I2C_SDA_HOLD 0x1497C /* SDA hold time length reg */
+#define TXGBE_I2C_TX_ABRT_SOURCE 0x14980 /* I2C TX Abort Status Reg */
+#define TXGBE_I2C_SDA_SETUP 0x14994 /* I2C SDA Setup Register */
+#define TXGBE_I2C_ENABLE_STATUS 0x1499C /* I2C Enable Status Register */
+#define TXGBE_I2C_FS_SPKLEN 0x149A0 /* ISS and FS spike suppression
+ * limit */
+#define TXGBE_I2C_HS_SPKLEN 0x149A4 /* HS spike suppression limit */
+#define TXGBE_I2C_SCL_STUCK_TIMEOUT 0x149AC /* I2C SCL stuck at low timeout
+ * register */
+#define TXGBE_I2C_SDA_STUCK_TIMEOUT 0x149B0 /* I2C SDA Stuck at Low Timeout */
+#define TXGBE_I2C_CLR_SCL_STUCK_DET 0x149B4 /* Clear SCL Stuck at Low Detect
+ * Interrupt */
+#define TXGBE_I2C_DEVICE_ID 0x149b8 /* I2C Device ID */
+#define TXGBE_I2C_COMP_PARAM_1 0x149f4 /* Component Parameter Reg */
+#define TXGBE_I2C_COMP_VERSION 0x149f8 /* Component Version ID */
+#define TXGBE_I2C_COMP_TYPE 0x149fc /* DesignWare Component Type
+ * Reg */
+
+#define TXGBE_I2C_SLAVE_ADDR (0xA0 >> 1)
+#define TXGBE_I2C_THERMAL_SENSOR_ADDR 0xF8
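
To make the DATA_CMD encoding concrete, a sketch of a single-byte read over
this DesignWare-style I2C block; the handshake (enable sequencing, RX polling)
is a simplified assumption, as are the rd32()/wr32() helpers:

static u8 txgbe_example_i2c_read_byte(struct txgbe_hw *hw, u8 byte_offset)
{
	wr32(hw, TXGBE_I2C_ENABLE, 0);
	wr32(hw, TXGBE_I2C_TAR, TXGBE_I2C_SLAVE_ADDR);
	wr32(hw, TXGBE_I2C_ENABLE, 1);

	/* address phase, then a read with an implied STOP */
	wr32(hw, TXGBE_I2C_DATA_CMD, byte_offset);
	wr32(hw, TXGBE_I2C_DATA_CMD, TXGBE_I2C_DATA_CMD_READ);

	while (!(rd32(hw, TXGBE_I2C_RAW_INTR_STAT) &
		 TXGBE_I2C_INTR_STAT_RX_FULL))
		cpu_relax();

	return rd32(hw, TXGBE_I2C_DATA_CMD) & 0xFF;
}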
+
+
+/* port cfg Registers */
+#define TXGBE_CFG_PORT_CTL 0x14400
+#define TXGBE_CFG_PORT_ST 0x14404
+#define TXGBE_CFG_EX_VTYPE 0x14408
+#define TXGBE_CFG_LED_CTL 0x14424
+#define TXGBE_CFG_VXLAN 0x14410
+#define TXGBE_CFG_VXLAN_GPE 0x14414
+#define TXGBE_CFG_GENEVE 0x14418
+#define TXGBE_CFG_TEREDO 0x1441C
+#define TXGBE_CFG_TCP_TIME 0x14420
+#define TXGBE_CFG_TAG_TPID(_i) (0x14430 + ((_i) * 4))
+/* port cfg bit */
+#define TXGBE_CFG_PORT_CTL_PFRSTD 0x00004000U /* Phy Function Reset Done */
+#define TXGBE_CFG_PORT_CTL_D_VLAN 0x00000001U /* double vlan*/
+#define TXGBE_CFG_PORT_CTL_ETAG_ETYPE_VLD 0x00000002U
+#define TXGBE_CFG_PORT_CTL_QINQ 0x00000004U
+#define TXGBE_CFG_PORT_CTL_DRV_LOAD 0x00000008U
+#define TXGBE_CFG_PORT_CTL_FORCE_LKUP 0x00000010U /* force link up */
+#define TXGBE_CFG_PORT_CTL_DCB_EN 0x00000400U /* dcb enabled */
+#define TXGBE_CFG_PORT_CTL_NUM_TC_MASK 0x00000800U /* number of TCs */
+#define TXGBE_CFG_PORT_CTL_NUM_TC_4 0x00000000U
+#define TXGBE_CFG_PORT_CTL_NUM_TC_8 0x00000800U
+#define TXGBE_CFG_PORT_CTL_NUM_VT_MASK 0x00003000U /* number of VTs */
+#define TXGBE_CFG_PORT_CTL_NUM_VT_NONE 0x00000000U
+#define TXGBE_CFG_PORT_CTL_NUM_VT_16 0x00001000U
+#define TXGBE_CFG_PORT_CTL_NUM_VT_32 0x00002000U
+#define TXGBE_CFG_PORT_CTL_NUM_VT_64 0x00003000U
+/* Status Bit */
+#define TXGBE_CFG_PORT_ST_LINK_UP 0x00000001U
+#define TXGBE_CFG_PORT_ST_LINK_10G 0x00000002U
+#define TXGBE_CFG_PORT_ST_LINK_1G 0x00000004U
+#define TXGBE_CFG_PORT_ST_LINK_100M 0x00000008U
+#define TXGBE_CFG_PORT_ST_LAN_ID(_r) ((0x00000100U & (_r)) >> 8)
+#define TXGBE_LINK_UP_TIME 90
+/* LED CTL Bit */
+#define TXGBE_CFG_LED_CTL_LINK_BSY_SEL 0x00000010U
+#define TXGBE_CFG_LED_CTL_LINK_100M_SEL 0x00000008U
+#define TXGBE_CFG_LED_CTL_LINK_1G_SEL 0x00000004U
+#define TXGBE_CFG_LED_CTL_LINK_10G_SEL 0x00000002U
+#define TXGBE_CFG_LED_CTL_LINK_UP_SEL 0x00000001U
+#define TXGBE_CFG_LED_CTL_LINK_OD_SHIFT 16
+/* LED modes */
+#define TXGBE_LED_LINK_UP TXGBE_CFG_LED_CTL_LINK_UP_SEL
+#define TXGBE_LED_LINK_10G TXGBE_CFG_LED_CTL_LINK_10G_SEL
+#define TXGBE_LED_LINK_ACTIVE TXGBE_CFG_LED_CTL_LINK_BSY_SEL
+#define TXGBE_LED_LINK_1G TXGBE_CFG_LED_CTL_LINK_1G_SEL
+#define TXGBE_LED_LINK_100M TXGBE_CFG_LED_CTL_LINK_100M_SEL
+
+/* GPIO Registers */
+#define TXGBE_GPIO_DR 0x14800
+#define TXGBE_GPIO_DDR 0x14804
+#define TXGBE_GPIO_CTL 0x14808
+#define TXGBE_GPIO_INTEN 0x14830
+#define TXGBE_GPIO_INTMASK 0x14834
+#define TXGBE_GPIO_INTTYPE_LEVEL 0x14838
+#define TXGBE_GPIO_INTSTATUS 0x14844
+#define TXGBE_GPIO_EOI 0x1484C
+/* GPIO bit */
+#define TXGBE_GPIO_DR_0 0x00000001U /* SDP0 Data Value */
+#define TXGBE_GPIO_DR_1 0x00000002U /* SDP1 Data Value */
+#define TXGBE_GPIO_DR_2 0x00000004U /* SDP2 Data Value */
+#define TXGBE_GPIO_DR_3 0x00000008U /* SDP3 Data Value */
+#define TXGBE_GPIO_DR_4 0x00000010U /* SDP4 Data Value */
+#define TXGBE_GPIO_DR_5 0x00000020U /* SDP5 Data Value */
+#define TXGBE_GPIO_DR_6 0x00000040U /* SDP6 Data Value */
+#define TXGBE_GPIO_DR_7 0x00000080U /* SDP7 Data Value */
+#define TXGBE_GPIO_DDR_0 0x00000001U /* SDP0 IO direction */
+#define TXGBE_GPIO_DDR_1 0x00000002U /* SDP1 IO direction */
+#define TXGBE_GPIO_DDR_2 0x00000004U /* SDP2 IO direction */
+#define TXGBE_GPIO_DDR_3 0x00000008U /* SDP3 IO direction */
+#define TXGBE_GPIO_DDR_4 0x00000010U /* SDP4 IO direction */
+#define TXGBE_GPIO_DDR_5 0x00000020U /* SDP5 IO direction */
+#define TXGBE_GPIO_DDR_6 0x00000040U /* SDP6 IO direction */
+#define TXGBE_GPIO_DDR_7 0x00000080U /* SDP7 IO direction */
+#define TXGBE_GPIO_CTL_SW_MODE 0x00000000U /* SDP software mode */
+#define TXGBE_GPIO_INTEN_1 0x00000002U /* SDP1 interrupt enable */
+#define TXGBE_GPIO_INTEN_2 0x00000004U /* SDP2 interrupt enable */
+#define TXGBE_GPIO_INTEN_3 0x00000008U /* SDP3 interrupt enable */
+#define TXGBE_GPIO_INTEN_5 0x00000020U /* SDP5 interrupt enable */
+#define TXGBE_GPIO_INTEN_6 0x00000040U /* SDP6 interrupt enable */
+#define TXGBE_GPIO_INTTYPE_LEVEL_2 0x00000004U /* SDP2 interrupt type level */
+#define TXGBE_GPIO_INTTYPE_LEVEL_3 0x00000008U /* SDP3 interrupt type level */
+#define TXGBE_GPIO_INTTYPE_LEVEL_5 0x00000020U /* SDP5 interrupt type level */
+#define TXGBE_GPIO_INTTYPE_LEVEL_6 0x00000040U /* SDP6 interrupt type level */
+#define TXGBE_GPIO_INTSTATUS_1 0x00000002U /* SDP1 interrupt status */
+#define TXGBE_GPIO_INTSTATUS_2 0x00000004U /* SDP2 interrupt status */
+#define TXGBE_GPIO_INTSTATUS_3 0x00000008U /* SDP3 interrupt status */
+#define TXGBE_GPIO_INTSTATUS_5 0x00000020U /* SDP5 interrupt status */
+#define TXGBE_GPIO_INTSTATUS_6 0x00000040U /* SDP6 interrupt status */
+#define TXGBE_GPIO_EOI_2 0x00000004U /* SDP2 interrupt clear */
+#define TXGBE_GPIO_EOI_3 0x00000008U /* SDP3 interrupt clear */
+#define TXGBE_GPIO_EOI_5 0x00000020U /* SDP5 interrupt clear */
+#define TXGBE_GPIO_EOI_6 0x00000040U /* SDP6 interrupt clear */
+
+/* TPH registers */
+#define TXGBE_CFG_TPH_TDESC 0x14F00 /* TPH conf for Tx desc write back */
+#define TXGBE_CFG_TPH_RDESC 0x14F04 /* TPH conf for Rx desc write back */
+#define TXGBE_CFG_TPH_RHDR 0x14F08 /* TPH conf for writing Rx pkt header */
+#define TXGBE_CFG_TPH_RPL 0x14F0C /* TPH conf for payload write access */
+/* TPH bit */
+#define TXGBE_CFG_TPH_TDESC_EN 0x80000000U
+#define TXGBE_CFG_TPH_TDESC_PH_SHIFT 29
+#define TXGBE_CFG_TPH_TDESC_ST_SHIFT 16
+#define TXGBE_CFG_TPH_RDESC_EN 0x80000000U
+#define TXGBE_CFG_TPH_RDESC_PH_SHIFT 29
+#define TXGBE_CFG_TPH_RDESC_ST_SHIFT 16
+#define TXGBE_CFG_TPH_RHDR_EN 0x00008000U
+#define TXGBE_CFG_TPH_RHDR_PH_SHIFT 13
+#define TXGBE_CFG_TPH_RHDR_ST_SHIFT 0
+#define TXGBE_CFG_TPH_RPL_EN 0x80000000U
+#define TXGBE_CFG_TPH_RPL_PH_SHIFT 29
+#define TXGBE_CFG_TPH_RPL_ST_SHIFT 16
+
+/*********************** Transmit DMA registers **************************/
+/* transmit global control */
+#define TXGBE_TDM_CTL 0x18000
+#define TXGBE_TDM_VF_TE(_i) (0x18004 + ((_i) * 4))
+#define TXGBE_TDM_PB_THRE(_i) (0x18020 + ((_i) * 4)) /* 8 of these 0 - 7 */
+#define TXGBE_TDM_LLQ(_i) (0x18040 + ((_i) * 4)) /* 4 of these (0-3) */
+#define TXGBE_TDM_ETYPE_LB_L 0x18050
+#define TXGBE_TDM_ETYPE_LB_H 0x18054
+#define TXGBE_TDM_ETYPE_AS_L 0x18058
+#define TXGBE_TDM_ETYPE_AS_H 0x1805C
+#define TXGBE_TDM_MAC_AS_L 0x18060
+#define TXGBE_TDM_MAC_AS_H 0x18064
+#define TXGBE_TDM_VLAN_AS_L 0x18070
+#define TXGBE_TDM_VLAN_AS_H 0x18074
+#define TXGBE_TDM_TCP_FLG_L 0x18078
+#define TXGBE_TDM_TCP_FLG_H 0x1807C
+#define TXGBE_TDM_VLAN_INS(_i) (0x18100 + ((_i) * 4)) /* 64 of these 0 - 63 */
+/* TDM CTL BIT */
+#define TXGBE_TDM_CTL_TE 0x1 /* Transmit Enable */
+#define TXGBE_TDM_CTL_PADDING 0x2 /* Padding byte number for ipsec ESP */
+#define TXGBE_TDM_CTL_VT_SHIFT 16 /* VLAN EtherType */
+/* Per VF Port VLAN insertion rules */
+#define TXGBE_TDM_VLAN_INS_VLANA_DEFAULT 0x40000000U /*Always use default VLAN*/
+#define TXGBE_TDM_VLAN_INS_VLANA_NEVER 0x80000000U /* Never insert VLAN tag */
+
+#define TXGBE_TDM_RP_CTL 0x18400
+#define TXGBE_TDM_RP_CTL_RST ((0x1) << 0)
+#define TXGBE_TDM_RP_CTL_RPEN ((0x1) << 2)
+#define TXGBE_TDM_RP_CTL_RLEN ((0x1) << 3)
+#define TXGBE_TDM_RP_IDX 0x1820C
+#define TXGBE_TDM_RP_RATE 0x18404
+#define TXGBE_TDM_RP_RATE_MIN(v) ((0x3FFF & (v)))
+#define TXGBE_TDM_RP_RATE_MAX(v) ((0x3FFF & (v)) << 16)
+
+/* qos */
+#define TXGBE_TDM_PBWARB_CTL 0x18200
+#define TXGBE_TDM_PBWARB_CFG(_i) (0x18220 + ((_i) * 4)) /* 8 of these (0-7) */
+#define TXGBE_TDM_MMW 0x18208
+#define TXGBE_TDM_VM_CREDIT(_i) (0x18500 + ((_i) * 4))
+#define TXGBE_TDM_VM_CREDIT_VAL(v) (0x3FF & (v))
+/* fcoe */
+#define TXGBE_TDM_FC_EOF 0x18384
+#define TXGBE_TDM_FC_SOF 0x18380
+/* etag */
+#define TXGBE_TDM_ETAG_INS(_i) (0x18700 + ((_i) * 4)) /* 64 of these 0 - 63 */
+/* statistic */
+#define TXGBE_TDM_SEC_DRP 0x18304
+#define TXGBE_TDM_PKT_CNT 0x18308
+#define TXGBE_TDM_OS2BMC_CNT 0x18314
+
+/**************************** Receive DMA registers **************************/
+/* receive control */
+#define TXGBE_RDM_ARB_CTL 0x12000
+#define TXGBE_RDM_VF_RE(_i) (0x12004 + ((_i) * 4))
+#define TXGBE_RDM_RSC_CTL 0x1200C
+#define TXGBE_RDM_ARB_CFG(_i) (0x12040 + ((_i) * 4)) /* 8 of these (0-7) */
+#define TXGBE_RDM_PF_QDE(_i) (0x12080 + ((_i) * 4))
+#define TXGBE_RDM_PF_HIDE(_i) (0x12090 + ((_i) * 4))
+/* VFRE bitmask */
+#define TXGBE_RDM_VF_RE_ENABLE_ALL 0xFFFFFFFFU
+
+/* FCoE DMA Context Registers */
+#define TXGBE_RDM_FCPTRL 0x12410
+#define TXGBE_RDM_FCPTRH 0x12414
+#define TXGBE_RDM_FCBUF 0x12418
+#define TXGBE_RDM_FCBUF_VALID ((0x1)) /* DMA Context Valid */
+#define TXGBE_RDM_FCBUF_SIZE(_v) (((_v) & 0x3) << 3) /* User Buffer Size */
+#define TXGBE_RDM_FCBUF_COUNT(_v) (((_v) & 0xFF) << 8) /* Num of User Buf */
+#define TXGBE_RDM_FCBUF_OFFSET(_v) (((_v) & 0xFFFF) << 16) /* User Buf Offset*/
+#define TXGBE_RDM_FCRW 0x12420
+#define TXGBE_RDM_FCRW_FCSEL(_v) (((_v) & 0x1FF)) /* FC X_ID: 11 bits */
+#define TXGBE_RDM_FCRW_WE ((0x1) << 14) /* Write enable */
+#define TXGBE_RDM_FCRW_RE ((0x1) << 15) /* Read enable */
+#define TXGBE_RDM_FCRW_LASTSIZE(_v) (((_v) & 0xFFFF) << 16)
+
+/* statistic */
+#define TXGBE_RDM_DRP_PKT 0x12500
+#define TXGBE_RDM_BMC2OS_CNT 0x12510
+
+/***************************** RDB registers *********************************/
+/* Flow Control Registers */
+#define TXGBE_RDB_RFCV(_i) (0x19200 + ((_i) * 4)) /* 4 of these (0-3)*/
+#define TXGBE_RDB_RFCL(_i) (0x19220 + ((_i) * 4)) /* 8 of these (0-7)*/
+#define TXGBE_RDB_RFCH(_i) (0x19260 + ((_i) * 4)) /* 8 of these (0-7)*/
+#define TXGBE_RDB_RFCRT 0x192A0
+#define TXGBE_RDB_RFCC 0x192A4
+/* receive packet buffer */
+#define TXGBE_RDB_PB_WRAP 0x19004
+#define TXGBE_RDB_PB_SZ(_i) (0x19020 + ((_i) * 4))
+#define TXGBE_RDB_PB_CTL 0x19000
+#define TXGBE_RDB_UP2TC 0x19008
+#define TXGBE_RDB_PB_SZ_SHIFT 10
+#define TXGBE_RDB_PB_SZ_MASK 0x000FFC00U
+/* lli interrupt */
+#define TXGBE_RDB_LLI_THRE 0x19080
+#define TXGBE_RDB_LLI_THRE_SZ(_v) ((0xFFF & (_v)))
+#define TXGBE_RDB_LLI_THRE_UP(_v) ((0x7 & (_v)) << 16)
+#define TXGBE_RDB_LLI_THRE_UP_SHIFT 16
+
+/* ring assignment */
+#define TXGBE_RDB_PL_CFG(_i) (0x19300 + ((_i) * 4))
+#define TXGBE_RDB_RSSTBL(_i) (0x19400 + ((_i) * 4))
+#define TXGBE_RDB_RSSRK(_i) (0x19480 + ((_i) * 4))
+#define TXGBE_RDB_RSS_TC 0x194F0
+#define TXGBE_RDB_RA_CTL 0x194F4
+#define TXGBE_RDB_5T_SA(_i) (0x19600 + ((_i) * 4)) /* Src Addr Q Filter */
+#define TXGBE_RDB_5T_DA(_i) (0x19800 + ((_i) * 4)) /* Dst Addr Q Filter */
+#define TXGBE_RDB_5T_SDP(_i) (0x19A00 + ((_i) * 4)) /* Src Dst Port Q Filter */
+#define TXGBE_RDB_5T_CTL0(_i) (0x19C00 + ((_i) * 4)) /* Five Tuple Q Filter */
+#define TXGBE_RDB_ETYPE_CLS(_i) (0x19100 + ((_i) * 4)) /* EType Q Select */
+#define TXGBE_RDB_SYN_CLS 0x19130
+#define TXGBE_RDB_5T_CTL1(_i) (0x19E00 + ((_i) * 4)) /*128 of these (0-127)*/
+/* Flow Director registers */
+#define TXGBE_RDB_FDIR_CTL 0x19500
+#define TXGBE_RDB_FDIR_HKEY 0x19568
+#define TXGBE_RDB_FDIR_SKEY 0x1956C
+#define TXGBE_RDB_FDIR_DA4_MSK 0x1953C
+#define TXGBE_RDB_FDIR_SA4_MSK 0x19540
+#define TXGBE_RDB_FDIR_TCP_MSK 0x19544
+#define TXGBE_RDB_FDIR_UDP_MSK 0x19548
+#define TXGBE_RDB_FDIR_SCTP_MSK 0x19560
+#define TXGBE_RDB_FDIR_IP6_MSK 0x19574
+#define TXGBE_RDB_FDIR_OTHER_MSK 0x19570
+#define TXGBE_RDB_FDIR_FLEX_CFG(_i) (0x19580 + ((_i) * 4))
+/* Flow Director Stats registers */
+#define TXGBE_RDB_FDIR_FREE 0x19538
+#define TXGBE_RDB_FDIR_LEN 0x1954C
+#define TXGBE_RDB_FDIR_USE_ST 0x19550
+#define TXGBE_RDB_FDIR_FAIL_ST 0x19554
+#define TXGBE_RDB_FDIR_MATCH 0x19558
+#define TXGBE_RDB_FDIR_MISS 0x1955C
+/* Flow Director Programming registers */
+#define TXGBE_RDB_FDIR_IP6(_i) (0x1950C + ((_i) * 4)) /* 3 of these (0-2)*/
+#define TXGBE_RDB_FDIR_SA 0x19518
+#define TXGBE_RDB_FDIR_DA 0x1951C
+#define TXGBE_RDB_FDIR_PORT 0x19520
+#define TXGBE_RDB_FDIR_FLEX 0x19524
+#define TXGBE_RDB_FDIR_HASH 0x19528
+#define TXGBE_RDB_FDIR_CMD 0x1952C
+/* VM RSS */
+#define TXGBE_RDB_VMRSSRK(_i, _p) (0x1A000 + ((_i) * 4) + ((_p) * 0x40))
+#define TXGBE_RDB_VMRSSTBL(_i, _p) (0x1B000 + ((_i) * 4) + ((_p) * 0x40))
+/* FCoE Redirection */
+#define TXGBE_RDB_FCRE_TBL_SIZE (8) /* Max entries in FCRETA */
+#define TXGBE_RDB_FCRE_CTL 0x19140
+#define TXGBE_RDB_FCRE_CTL_ENA ((0x1)) /* FCoE Redir Table Enable */
+#define TXGBE_RDB_FCRE_TBL(_i) (0x19160 + ((_i) * 4))
+#define TXGBE_RDB_FCRE_TBL_RING(_v) (((_v) & 0x7F)) /* output queue number */
+/* statistic */
+#define TXGBE_RDB_MPCNT(_i) (0x19040 + ((_i) * 4)) /* 8 of 3FA0-3FBC*/
+#define TXGBE_RDB_LXONTXC 0x1921C
+#define TXGBE_RDB_LXOFFTXC 0x19218
+#define TXGBE_RDB_PXON2OFFCNT(_i) (0x19280 + ((_i) * 4)) /* 8 of these */
+#define TXGBE_RDB_PXONTXC(_i) (0x192E0 + ((_i) * 4)) /* 8 of 3F00-3F1C*/
+#define TXGBE_RDB_PXOFFTXC(_i) (0x192C0 + ((_i) * 4)) /* 8 of 3F20-3F3C*/
+#define TXGBE_RDB_PFCMACDAL 0x19210
+#define TXGBE_RDB_PFCMACDAH 0x19214
+#define TXGBE_RDB_TXSWERR 0x1906C
+#define TXGBE_RDB_TXSWERR_TB_FREE 0x3FF
+/* rdb_pl_cfg reg mask */
+#define TXGBE_RDB_PL_CFG_L4HDR 0x2
+#define TXGBE_RDB_PL_CFG_L3HDR 0x4
+#define TXGBE_RDB_PL_CFG_L2HDR 0x8
+#define TXGBE_RDB_PL_CFG_TUN_OUTER_L2HDR 0x20
+#define TXGBE_RDB_PL_CFG_TUN_TUNHDR 0x10
+#define TXGBE_RDB_PL_CFG_RSS_PL_MASK 0x7
+#define TXGBE_RDB_PL_CFG_RSS_PL_SHIFT 29
+/* RQTC Bit Masks and Shifts */
+#define TXGBE_RDB_RSS_TC_SHIFT_TC(_i) ((_i) * 4)
+#define TXGBE_RDB_RSS_TC_TC0_MASK (0x7 << 0)
+#define TXGBE_RDB_RSS_TC_TC1_MASK (0x7 << 4)
+#define TXGBE_RDB_RSS_TC_TC2_MASK (0x7 << 8)
+#define TXGBE_RDB_RSS_TC_TC3_MASK (0x7 << 12)
+#define TXGBE_RDB_RSS_TC_TC4_MASK (0x7 << 16)
+#define TXGBE_RDB_RSS_TC_TC5_MASK (0x7 << 20)
+#define TXGBE_RDB_RSS_TC_TC6_MASK (0x7 << 24)
+#define TXGBE_RDB_RSS_TC_TC7_MASK (0x7 << 28)
+/* Packet Buffer Initialization */
+#define TXGBE_MAX_PACKET_BUFFERS 8
+#define TXGBE_RDB_PB_SZ_48KB 0x00000030U /* 48KB Packet Buffer */
+#define TXGBE_RDB_PB_SZ_64KB 0x00000040U /* 64KB Packet Buffer */
+#define TXGBE_RDB_PB_SZ_80KB 0x00000050U /* 80KB Packet Buffer */
+#define TXGBE_RDB_PB_SZ_128KB 0x00000080U /* 128KB Packet Buffer */
+#define TXGBE_RDB_PB_SZ_MAX 0x00000200U /* 512KB Packet Buffer */
+
+
+/* Packet buffer allocation strategies */
+enum {
+ PBA_STRATEGY_EQUAL = 0, /* Distribute PB space equally */
+#define PBA_STRATEGY_EQUAL PBA_STRATEGY_EQUAL
+ PBA_STRATEGY_WEIGHTED = 1, /* Weight front half of TCs */
+#define PBA_STRATEGY_WEIGHTED PBA_STRATEGY_WEIGHTED
+};
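
As an illustration of PBA_STRATEGY_EQUAL, a sketch that splits the 512KB Rx
packet buffer evenly across the active buffers, using the RDB_PB_SZ fields
defined earlier (wr32() is an assumed MMIO helper):

static void txgbe_example_rxpba_equal(struct txgbe_hw *hw, int num_pb)
{
	u32 pbsize = TXGBE_RDB_PB_SZ_MAX / num_pb;	/* size in KB units */
	int i;

	for (i = 0; i < num_pb; i++)
		wr32(hw, TXGBE_RDB_PB_SZ(i), pbsize << TXGBE_RDB_PB_SZ_SHIFT);
	for (; i < TXGBE_MAX_PACKET_BUFFERS; i++)
		wr32(hw, TXGBE_RDB_PB_SZ(i), 0);
}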
+
+
+/* FCRTL Bit Masks */
+#define TXGBE_RDB_RFCL_XONE 0x80000000U /* XON enable */
+#define TXGBE_RDB_RFCH_XOFFE 0x80000000U /* Packet buffer fc enable */
+/* FCCFG Bit Masks */
+#define TXGBE_RDB_RFCC_RFCE_802_3X 0x00000008U /* Tx link FC enable */
+#define TXGBE_RDB_RFCC_RFCE_PRIORITY 0x00000010U /* Tx priority FC enable */
+
+/* Immediate Interrupt Rx (A.K.A. Low Latency Interrupt) */
+#define TXGBE_RDB_5T_CTL1_SIZE_BP 0x00001000U /* Packet size bypass */
+#define TXGBE_RDB_5T_CTL1_LLI 0x00100000U /* Enables low latency Int */
+#define TXGBE_RDB_LLI_THRE_PRIORITY_MASK 0x00070000U /* VLAN priority mask */
+#define TXGBE_RDB_LLI_THRE_PRIORITY_EN 0x00080000U /* VLAN priority enable */
+#define TXGBE_RDB_LLI_THRE_CMN_EN 0x00100000U /* common packet received */
+
+#define TXGBE_MAX_RDB_5T_CTL0_FILTERS 128
+#define TXGBE_RDB_5T_CTL0_PROTOCOL_MASK 0x00000003U
+#define TXGBE_RDB_5T_CTL0_PROTOCOL_TCP 0x00000000U
+#define TXGBE_RDB_5T_CTL0_PROTOCOL_UDP 0x00000001U
+#define TXGBE_RDB_5T_CTL0_PROTOCOL_SCTP 2
+#define TXGBE_RDB_5T_CTL0_PRIORITY_MASK 0x00000007U
+#define TXGBE_RDB_5T_CTL0_PRIORITY_SHIFT 2
+#define TXGBE_RDB_5T_CTL0_POOL_MASK 0x0000003FU
+#define TXGBE_RDB_5T_CTL0_POOL_SHIFT 8
+#define TXGBE_RDB_5T_CTL0_5TUPLE_MASK_MASK 0x0000001FU
+#define TXGBE_RDB_5T_CTL0_5TUPLE_MASK_SHIFT 25
+#define TXGBE_RDB_5T_CTL0_SOURCE_ADDR_MASK 0x1E
+#define TXGBE_RDB_5T_CTL0_DEST_ADDR_MASK 0x1D
+#define TXGBE_RDB_5T_CTL0_SOURCE_PORT_MASK 0x1B
+#define TXGBE_RDB_5T_CTL0_DEST_PORT_MASK 0x17
+#define TXGBE_RDB_5T_CTL0_PROTOCOL_COMP_MASK 0x0F
+#define TXGBE_RDB_5T_CTL0_POOL_MASK_EN 0x40000000U
+#define TXGBE_RDB_5T_CTL0_QUEUE_ENABLE 0x80000000U
+
+#define TXGBE_RDB_ETYPE_CLS_RX_QUEUE 0x007F0000U /* bits 22:16 */
+#define TXGBE_RDB_ETYPE_CLS_RX_QUEUE_SHIFT 16
+#define TXGBE_RDB_ETYPE_CLS_LLI 0x20000000U /* bit 29 */
+#define TXGBE_RDB_ETYPE_CLS_QUEUE_EN 0x80000000U /* bit 31 */
+
+/* Receive Config masks */
+#define TXGBE_RDB_PB_CTL_RXEN (0x80000000) /* Enable Receiver */
+#define TXGBE_RDB_PB_CTL_DISABLED 0x1
+
+#define TXGBE_RDB_RA_CTL_RSS_EN 0x00000004U /* RSS Enable */
+#define TXGBE_RDB_RA_CTL_RSS_MASK 0xFFFF0000U
+#define TXGBE_RDB_RA_CTL_RSS_IPV4_TCP 0x00010000U
+#define TXGBE_RDB_RA_CTL_RSS_IPV4 0x00020000U
+#define TXGBE_RDB_RA_CTL_RSS_IPV6 0x00100000U
+#define TXGBE_RDB_RA_CTL_RSS_IPV6_TCP 0x00200000U
+#define TXGBE_RDB_RA_CTL_RSS_IPV4_UDP 0x00400000U
+#define TXGBE_RDB_RA_CTL_RSS_IPV6_UDP 0x00800000U
+
+enum txgbe_fdir_pballoc_type {
+ TXGBE_FDIR_PBALLOC_NONE = 0,
+ TXGBE_FDIR_PBALLOC_64K = 1,
+ TXGBE_FDIR_PBALLOC_128K = 2,
+ TXGBE_FDIR_PBALLOC_256K = 3,
+};
+
+/* Flow Director register values */
+#define TXGBE_RDB_FDIR_CTL_PBALLOC_64K 0x00000001U
+#define TXGBE_RDB_FDIR_CTL_PBALLOC_128K 0x00000002U
+#define TXGBE_RDB_FDIR_CTL_PBALLOC_256K 0x00000003U
+#define TXGBE_RDB_FDIR_CTL_INIT_DONE 0x00000008U
+#define TXGBE_RDB_FDIR_CTL_PERFECT_MATCH 0x00000010U
+#define TXGBE_RDB_FDIR_CTL_REPORT_STATUS 0x00000020U
+#define TXGBE_RDB_FDIR_CTL_REPORT_STATUS_ALWAYS 0x00000080U
+#define TXGBE_RDB_FDIR_CTL_DROP_Q_SHIFT 8
+#define TXGBE_RDB_FDIR_CTL_FILTERMODE_SHIFT 21
+#define TXGBE_RDB_FDIR_CTL_MAX_LENGTH_SHIFT 24
+#define TXGBE_RDB_FDIR_CTL_HASH_BITS_SHIFT 20
+#define TXGBE_RDB_FDIR_CTL_FULL_THRESH_MASK 0xF0000000U
+#define TXGBE_RDB_FDIR_CTL_FULL_THRESH_SHIFT 28
+
+
+#define TXGBE_RDB_FDIR_TCP_MSK_DPORTM_SHIFT 16
+#define TXGBE_RDB_FDIR_UDP_MSK_DPORTM_SHIFT 16
+#define TXGBE_RDB_FDIR_IP6_MSK_DIPM_SHIFT 16
+#define TXGBE_RDB_FDIR_OTHER_MSK_POOL 0x00000004U
+#define TXGBE_RDB_FDIR_OTHER_MSK_L4P 0x00000008U
+#define TXGBE_RDB_FDIR_OTHER_MSK_L3P 0x00000010U
+#define TXGBE_RDB_FDIR_OTHER_MSK_TUN_TYPE 0x00000020U
+#define TXGBE_RDB_FDIR_OTHER_MSK_TUN_OUTIP 0x00000040U
+#define TXGBE_RDB_FDIR_OTHER_MSK_TUN 0x00000080U
+
+#define TXGBE_RDB_FDIR_FLEX_CFG_BASE_MAC 0x00000000U
+#define TXGBE_RDB_FDIR_FLEX_CFG_BASE_IP 0x00000001U
+#define TXGBE_RDB_FDIR_FLEX_CFG_BASE_L4_HDR 0x00000002U
+#define TXGBE_RDB_FDIR_FLEX_CFG_BASE_L4_PAYLOAD 0x00000003U
+#define TXGBE_RDB_FDIR_FLEX_CFG_BASE_MSK 0x00000003U
+#define TXGBE_RDB_FDIR_FLEX_CFG_MSK 0x00000004U
+#define TXGBE_RDB_FDIR_FLEX_CFG_OFST 0x000000F8U
+#define TXGBE_RDB_FDIR_FLEX_CFG_OFST_SHIFT 3
+#define TXGBE_RDB_FDIR_FLEX_CFG_VM_SHIFT 8
+
+#define TXGBE_RDB_FDIR_PORT_DESTINATION_SHIFT 16
+#define TXGBE_RDB_FDIR_FLEX_FLEX_SHIFT 16
+#define TXGBE_RDB_FDIR_HASH_BUCKET_VALID_SHIFT 15
+#define TXGBE_RDB_FDIR_HASH_SIG_SW_INDEX_SHIFT 16
+
+#define TXGBE_RDB_FDIR_CMD_CMD_MASK 0x00000003U
+#define TXGBE_RDB_FDIR_CMD_CMD_ADD_FLOW 0x00000001U
+#define TXGBE_RDB_FDIR_CMD_CMD_REMOVE_FLOW 0x00000002U
+#define TXGBE_RDB_FDIR_CMD_CMD_QUERY_REM_FILT 0x00000003U
+#define TXGBE_RDB_FDIR_CMD_FILTER_VALID 0x00000004U
+#define TXGBE_RDB_FDIR_CMD_FILTER_UPDATE 0x00000008U
+#define TXGBE_RDB_FDIR_CMD_IPv6DMATCH 0x00000010U
+#define TXGBE_RDB_FDIR_CMD_L4TYPE_UDP 0x00000020U
+#define TXGBE_RDB_FDIR_CMD_L4TYPE_TCP 0x00000040U
+#define TXGBE_RDB_FDIR_CMD_L4TYPE_SCTP 0x00000060U
+#define TXGBE_RDB_FDIR_CMD_IPV6 0x00000080U
+#define TXGBE_RDB_FDIR_CMD_CLEARHT 0x00000100U
+#define TXGBE_RDB_FDIR_CMD_DROP 0x00000200U
+#define TXGBE_RDB_FDIR_CMD_INT 0x00000400U
+#define TXGBE_RDB_FDIR_CMD_LAST 0x00000800U
+#define TXGBE_RDB_FDIR_CMD_COLLISION 0x00001000U
+#define TXGBE_RDB_FDIR_CMD_QUEUE_EN 0x00008000U
+#define TXGBE_RDB_FDIR_CMD_FLOW_TYPE_SHIFT 5
+#define TXGBE_RDB_FDIR_CMD_RX_QUEUE_SHIFT 16
+#define TXGBE_RDB_FDIR_CMD_TUNNEL_FILTER_SHIFT 23
+#define TXGBE_RDB_FDIR_CMD_VT_POOL_SHIFT 24
+#define TXGBE_RDB_FDIR_INIT_DONE_POLL 10
+#define TXGBE_RDB_FDIR_CMD_CMD_POLL 10
+#define TXGBE_RDB_FDIR_CMD_TUNNEL_FILTER 0x00800000U
+#define TXGBE_RDB_FDIR_DROP_QUEUE 127
+#define TXGBE_FDIR_INIT_DONE_POLL 10
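
A hedged sketch of submitting one flow-director filter through the command
register above; field values are illustrative, the polling bound reuses
TXGBE_RDB_FDIR_CMD_CMD_POLL, and rd32()/wr32() are assumed accessors:

static int txgbe_example_fdir_add_tcp(struct txgbe_hw *hw, u8 queue)
{
	int i;

	wr32(hw, TXGBE_RDB_FDIR_CMD,
	     TXGBE_RDB_FDIR_CMD_CMD_ADD_FLOW |
	     TXGBE_RDB_FDIR_CMD_FILTER_VALID |
	     TXGBE_RDB_FDIR_CMD_L4TYPE_TCP |
	     ((u32)queue << TXGBE_RDB_FDIR_CMD_RX_QUEUE_SHIFT));

	/* hardware clears the command field when it is done */
	for (i = 0; i < TXGBE_RDB_FDIR_CMD_CMD_POLL; i++) {
		if (!(rd32(hw, TXGBE_RDB_FDIR_CMD) &
		      TXGBE_RDB_FDIR_CMD_CMD_MASK))
			return 0;
		udelay(10);
	}
	return -ETIMEDOUT;
}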
+
+/******************************* PSR Registers *******************************/
+/* psr control */
+#define TXGBE_PSR_CTL 0x15000
+#define TXGBE_PSR_VLAN_CTL 0x15088
+#define TXGBE_PSR_VM_CTL 0x151B0
+/* Header split receive */
+#define TXGBE_PSR_CTL_SW_EN 0x00040000U
+#define TXGBE_PSR_CTL_RSC_DIS 0x00010000U
+#define TXGBE_PSR_CTL_RSC_ACK 0x00020000U
+#define TXGBE_PSR_CTL_PCSD 0x00002000U
+#define TXGBE_PSR_CTL_IPPCSE 0x00001000U
+#define TXGBE_PSR_CTL_BAM 0x00000400U
+#define TXGBE_PSR_CTL_UPE 0x00000200U
+#define TXGBE_PSR_CTL_MPE 0x00000100U
+#define TXGBE_PSR_CTL_MFE 0x00000080U
+#define TXGBE_PSR_CTL_MO 0x00000060U
+#define TXGBE_PSR_CTL_TPE 0x00000010U
+#define TXGBE_PSR_CTL_MO_SHIFT 5
+/* VT_CTL bitmasks */
+#define TXGBE_PSR_VM_CTL_DIS_DEFPL 0x20000000U /* disable default pool */
+#define TXGBE_PSR_VM_CTL_REPLEN 0x40000000U /* replication enabled */
+#define TXGBE_PSR_VM_CTL_POOL_SHIFT 7
+#define TXGBE_PSR_VM_CTL_POOL_MASK (0x3F << TXGBE_PSR_VM_CTL_POOL_SHIFT)
+/* VLAN Control Bit Masks */
+#define TXGBE_PSR_VLAN_CTL_VET 0x0000FFFFU /* bits 0-15 */
+#define TXGBE_PSR_VLAN_CTL_CFI 0x10000000U /* bit 28 */
+#define TXGBE_PSR_VLAN_CTL_CFIEN 0x20000000U /* bit 29 */
+#define TXGBE_PSR_VLAN_CTL_VFE 0x40000000U /* bit 30 */
+
+/* vm L2 control */
+#define TXGBE_PSR_VM_L2CTL(_i) (0x15600 + ((_i) * 4))
+/* VMOLR bitmasks */
+#define TXGBE_PSR_VM_L2CTL_LBDIS 0x00000002U /* disable loopback */
+#define TXGBE_PSR_VM_L2CTL_LLB 0x00000004U /* local pool loopback */
+#define TXGBE_PSR_VM_L2CTL_UPE 0x00000010U /* unicast promiscuous */
+#define TXGBE_PSR_VM_L2CTL_TPE 0x00000020U /* ETAG promiscuous */
+#define TXGBE_PSR_VM_L2CTL_VACC 0x00000040U /* accept non-matched vlan */
+#define TXGBE_PSR_VM_L2CTL_VPE 0x00000080U /* vlan promiscuous mode */
+#define TXGBE_PSR_VM_L2CTL_AUPE 0x00000100U /* accept untagged packets */
+#define TXGBE_PSR_VM_L2CTL_ROMPE 0x00000200U /*accept packets in MTA tbl*/
+#define TXGBE_PSR_VM_L2CTL_ROPE 0x00000400U /* accept packets in UC tbl*/
+#define TXGBE_PSR_VM_L2CTL_BAM 0x00000800U /* accept broadcast packets*/
+#define TXGBE_PSR_VM_L2CTL_MPE 0x00001000U /* multicast promiscuous */
+
+/* etype switcher 1st stage */
+#define TXGBE_PSR_ETYPE_SWC(_i) (0x15128 + ((_i) * 4)) /* EType Queue Filter */
+/* ETYPE Queue Filter/Select Bit Masks */
+#define TXGBE_MAX_PSR_ETYPE_SWC_FILTERS 8
+#define TXGBE_PSR_ETYPE_SWC_FCOE 0x08000000U /* bit 27 */
+#define TXGBE_PSR_ETYPE_SWC_TX_ANTISPOOF 0x20000000U /* bit 29 */
+#define TXGBE_PSR_ETYPE_SWC_1588 0x40000000U /* bit 30 */
+#define TXGBE_PSR_ETYPE_SWC_FILTER_EN 0x80000000U /* bit 31 */
+#define TXGBE_PSR_ETYPE_SWC_POOL_ENABLE (1 << 26) /* bit 26 */
+#define TXGBE_PSR_ETYPE_SWC_POOL_SHIFT 20
+/*
+ * ETQF filter list: one static filter per filter consumer. This is
+ * to avoid filter collisions later. Add new filters
+ * here!!
+ *
+ * Current filters:
+ * EAPOL 802.1x (0x888e): Filter 0
+ * FCoE (0x8906): Filter 2
+ * 1588 (0x88f7): Filter 3
+ * FIP (0x8914): Filter 4
+ * LLDP (0x88CC): Filter 5
+ * LACP (0x8809): Filter 6
+ * FC (0x8808): Filter 7
+ */
+#define TXGBE_PSR_ETYPE_SWC_FILTER_EAPOL 0
+#define TXGBE_PSR_ETYPE_SWC_FILTER_FCOE 2
+#define TXGBE_PSR_ETYPE_SWC_FILTER_1588 3
+#define TXGBE_PSR_ETYPE_SWC_FILTER_FIP 4
+#define TXGBE_PSR_ETYPE_SWC_FILTER_LLDP 5
+#define TXGBE_PSR_ETYPE_SWC_FILTER_LACP 6
+#define TXGBE_PSR_ETYPE_SWC_FILTER_FC 7
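
To tie the table above to the registers, a sketch of programming the 1588
slot; that the EtherType occupies the low 16 bits of the SWC register is an
assumption borrowed from similar controllers, and wr32() is an assumed helper:

	/* steer PTP (0x88F7) frames through filter slot 3 */
	wr32(hw, TXGBE_PSR_ETYPE_SWC(TXGBE_PSR_ETYPE_SWC_FILTER_1588),
	     0x88F7 | TXGBE_PSR_ETYPE_SWC_1588 |
	     TXGBE_PSR_ETYPE_SWC_FILTER_EN);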
+
+/* mcast/ucast overflow tbl */
+#define TXGBE_PSR_MC_TBL(_i) (0x15200 + ((_i) * 4))
+#define TXGBE_PSR_UC_TBL(_i) (0x15400 + ((_i) * 4))
+
+/* vlan tbl */
+#define TXGBE_PSR_VLAN_TBL(_i) (0x16000 + ((_i) * 4))
+
+/* mac switcher */
+#define TXGBE_PSR_MAC_SWC_AD_L 0x16200
+#define TXGBE_PSR_MAC_SWC_AD_H 0x16204
+#define TXGBE_PSR_MAC_SWC_VM_L 0x16208
+#define TXGBE_PSR_MAC_SWC_VM_H 0x1620C
+#define TXGBE_PSR_MAC_SWC_IDX 0x16210
+/* RAH */
+#define TXGBE_PSR_MAC_SWC_AD_H_AD(v) (((v) & 0xFFFF))
+#define TXGBE_PSR_MAC_SWC_AD_H_ADTYPE(v) (((v) & 0x1) << 30)
+#define TXGBE_PSR_MAC_SWC_AD_H_AV 0x80000000U
+#define TXGBE_CLEAR_VMDQ_ALL 0xFFFFFFFFU
+
+/* vlan switch */
+#define TXGBE_PSR_VLAN_SWC 0x16220
+#define TXGBE_PSR_VLAN_SWC_VM_L 0x16224
+#define TXGBE_PSR_VLAN_SWC_VM_H 0x16228
+#define TXGBE_PSR_VLAN_SWC_IDX 0x16230 /* 64 vlan entries */
+/* VLAN pool filtering masks */
+#define TXGBE_PSR_VLAN_SWC_VIEN 0x80000000U /* filter is valid */
+#define TXGBE_PSR_VLAN_SWC_ENTRIES 64
+#define TXGBE_PSR_VLAN_SWC_VLANID_MASK 0x00000FFFU
+#define TXGBE_ETHERNET_IEEE_VLAN_TYPE 0x8100 /* 802.1q protocol */
+
+/* cloud switch */
+#define TXGBE_PSR_CL_SWC_DST0 0x16240
+#define TXGBE_PSR_CL_SWC_DST1 0x16244
+#define TXGBE_PSR_CL_SWC_DST2 0x16248
+#define TXGBE_PSR_CL_SWC_DST3 0x1624c
+#define TXGBE_PSR_CL_SWC_KEY 0x16250
+#define TXGBE_PSR_CL_SWC_CTL 0x16254
+#define TXGBE_PSR_CL_SWC_VM_L 0x16258
+#define TXGBE_PSR_CL_SWC_VM_H 0x1625c
+#define TXGBE_PSR_CL_SWC_IDX 0x16260
+
+#define TXGBE_PSR_CL_SWC_CTL_VLD 0x80000000U
+#define TXGBE_PSR_CL_SWC_CTL_DST_MSK 0x00000002U
+#define TXGBE_PSR_CL_SWC_CTL_KEY_MSK 0x00000001U
+
+
+/* FCoE SOF/EOF */
+#define TXGBE_PSR_FC_EOF 0x15158
+#define TXGBE_PSR_FC_SOF 0x151F8
+/* FCoE Filter Context Registers */
+#define TXGBE_PSR_FC_FLT_CTXT 0x15108
+#define TXGBE_PSR_FC_FLT_CTXT_VALID ((0x1)) /* Filter Context Valid */
+#define TXGBE_PSR_FC_FLT_CTXT_FIRST ((0x1) << 1) /* Filter First */
+#define TXGBE_PSR_FC_FLT_CTXT_WR ((0x1) << 2) /* Write/Read Context */
+#define TXGBE_PSR_FC_FLT_CTXT_SEQID(_v) (((_v) & 0xFF) << 8) /* Sequence ID */
+#define TXGBE_PSR_FC_FLT_CTXT_SEQCNT(_v) (((_v) & 0xFFFF) << 16) /* Seq Count */
+
+#define TXGBE_PSR_FC_FLT_RW 0x15110
+#define TXGBE_PSR_FC_FLT_RW_FCSEL(_v) (((_v) & 0x1FF)) /* FC OX_ID: 11 bits */
+#define TXGBE_PSR_FC_FLT_RW_RVALDT ((0x1) << 13) /* Fast Re-Validation */
+#define TXGBE_PSR_FC_FLT_RW_WE ((0x1) << 14) /* Write Enable */
+#define TXGBE_PSR_FC_FLT_RW_RE ((0x1) << 15) /* Read Enable */
+
+#define TXGBE_PSR_FC_PARAM 0x151D8
+
+/* FCoE Receive Control */
+#define TXGBE_PSR_FC_CTL 0x15100
+#define TXGBE_PSR_FC_CTL_FCOELLI ((0x1)) /* Low latency interrupt */
+#define TXGBE_PSR_FC_CTL_SAVBAD ((0x1) << 1) /* Save Bad Frames */
+#define TXGBE_PSR_FC_CTL_FRSTRDH ((0x1) << 2) /* EN 1st Read Header */
+#define TXGBE_PSR_FC_CTL_LASTSEQH ((0x1) << 3) /* EN Last Header in Seq */
+#define TXGBE_PSR_FC_CTL_ALLH ((0x1) << 4) /* EN All Headers */
+#define TXGBE_PSR_FC_CTL_FRSTSEQH ((0x1) << 5) /* EN 1st Seq. Header */
+#define TXGBE_PSR_FC_CTL_ICRC ((0x1) << 6) /* Ignore Bad FC CRC */
+#define TXGBE_PSR_FC_CTL_FCCRCBO ((0x1) << 7) /* FC CRC Byte Ordering */
+#define TXGBE_PSR_FC_CTL_FCOEVER(_v) (((_v) & 0xF) << 8) /* FCoE Version */
+
+/* Management */
+#define TXGBE_PSR_MNG_FIT_CTL 0x15820
+/* Management Bit Fields and Masks */
+#define TXGBE_PSR_MNG_FIT_CTL_MPROXYE 0x40000000U /* Management Proxy Enable*/
+#define TXGBE_PSR_MNG_FIT_CTL_RCV_TCO_EN 0x00020000U /* Rcv TCO packet enable */
+#define TXGBE_PSR_MNG_FIT_CTL_EN_BMC2OS 0x10000000U /* Ena BMC2OS and OS2BMC
+ * traffic */
+#define TXGBE_PSR_MNG_FIT_CTL_EN_BMC2OS_SHIFT 28
+
+#define TXGBE_PSR_MNG_FLEX_SEL 0x1582C
+#define TXGBE_PSR_MNG_FLEX_DW_L(_i) (0x15A00 + ((_i) * 16))
+#define TXGBE_PSR_MNG_FLEX_DW_H(_i) (0x15A04 + ((_i) * 16))
+#define TXGBE_PSR_MNG_FLEX_MSK(_i) (0x15A08 + ((_i) * 16))
+
+/* mirror */
+#define TXGBE_PSR_MR_CTL(_i) (0x15B00 + ((_i) * 4))
+#define TXGBE_PSR_MR_VLAN_L(_i) (0x15B10 + ((_i) * 8))
+#define TXGBE_PSR_MR_VLAN_H(_i) (0x15B14 + ((_i) * 8))
+#define TXGBE_PSR_MR_VM_L(_i) (0x15B30 + ((_i) * 8))
+#define TXGBE_PSR_MR_VM_H(_i) (0x15B34 + ((_i) * 8))
+
+/* 1588 */
+#define TXGBE_PSR_1588_CTL 0x15188 /* Rx Time Sync Control register - RW */
+#define TXGBE_PSR_1588_STMPL 0x151E8 /* Rx timestamp Low - RO */
+#define TXGBE_PSR_1588_STMPH 0x151A4 /* Rx timestamp High - RO */
+#define TXGBE_PSR_1588_ATTRL 0x151A0 /* Rx timestamp attribute low - RO */
+#define TXGBE_PSR_1588_ATTRH 0x151A8 /* Rx timestamp attribute high - RO */
+#define TXGBE_PSR_1588_MSGTYPE 0x15120 /* RX message type register low - RW */
+/* 1588 CTL Bit */
+#define TXGBE_PSR_1588_CTL_VALID 0x00000001U /* Rx timestamp valid */
+#define TXGBE_PSR_1588_CTL_TYPE_MASK 0x0000000EU /* Rx type mask */
+#define TXGBE_PSR_1588_CTL_TYPE_L2_V2 0x00
+#define TXGBE_PSR_1588_CTL_TYPE_L4_V1 0x02
+#define TXGBE_PSR_1588_CTL_TYPE_L2_L4_V2 0x04
+#define TXGBE_PSR_1588_CTL_TYPE_EVENT_V2 0x0A
+#define TXGBE_PSR_1588_CTL_ENABLED 0x00000010U /* Rx Timestamp enabled*/
+/* 1588 msg type bit */
+#define TXGBE_PSR_1588_MSGTYPE_V1_CTRLT_MASK 0x000000FFU
+#define TXGBE_PSR_1588_MSGTYPE_V1_SYNC_MSG 0x00
+#define TXGBE_PSR_1588_MSGTYPE_V1_DELAY_REQ_MSG 0x01
+#define TXGBE_PSR_1588_MSGTYPE_V1_FOLLOWUP_MSG 0x02
+#define TXGBE_PSR_1588_MSGTYPE_V1_DELAY_RESP_MSG 0x03
+#define TXGBE_PSR_1588_MSGTYPE_V1_MGMT_MSG 0x04
+#define TXGBE_PSR_1588_MSGTYPE_V2_MSGID_MASK 0x0000FF00U
+#define TXGBE_PSR_1588_MSGTYPE_V2_SYNC_MSG 0x0000
+#define TXGBE_PSR_1588_MSGTYPE_V2_DELAY_REQ_MSG 0x0100
+#define TXGBE_PSR_1588_MSGTYPE_V2_PDELAY_REQ_MSG 0x0200
+#define TXGBE_PSR_1588_MSGTYPE_V2_PDELAY_RESP_MSG 0x0300
+#define TXGBE_PSR_1588_MSGTYPE_V2_FOLLOWUP_MSG 0x0800
+#define TXGBE_PSR_1588_MSGTYPE_V2_DELAY_RESP_MSG 0x0900
+#define TXGBE_PSR_1588_MSGTYPE_V2_PDELAY_FOLLOWUP_MSG 0x0A00
+#define TXGBE_PSR_1588_MSGTYPE_V2_ANNOUNCE_MSG 0x0B00
+#define TXGBE_PSR_1588_MSGTYPE_V2_SIGNALLING_MSG 0x0C00
+#define TXGBE_PSR_1588_MSGTYPE_V2_MGMT_MSG 0x0D00
+
+/* Wake up registers */
+#define TXGBE_PSR_WKUP_CTL 0x15B80
+#define TXGBE_PSR_WKUP_IPV 0x15B84
+#define TXGBE_PSR_LAN_FLEX_SEL 0x15B8C
+#define TXGBE_PSR_WKUP_IP4TBL(_i) (0x15BC0 + ((_i) * 4))
+#define TXGBE_PSR_WKUP_IP6TBL(_i) (0x15BE0 + ((_i) * 4))
+#define TXGBE_PSR_LAN_FLEX_DW_L(_i) (0x15C00 + ((_i) * 16))
+#define TXGBE_PSR_LAN_FLEX_DW_H(_i) (0x15C04 + ((_i) * 16))
+#define TXGBE_PSR_LAN_FLEX_MSK(_i) (0x15C08 + ((_i) * 16))
+#define TXGBE_PSR_LAN_FLEX_CTL 0x15CFC
+/* Wake Up Filter Control Bit */
+#define TXGBE_PSR_WKUP_CTL_LNKC 0x00000001U /* Link Status Change Wakeup Enable*/
+#define TXGBE_PSR_WKUP_CTL_MAG 0x00000002U /* Magic Packet Wakeup Enable */
+#define TXGBE_PSR_WKUP_CTL_EX 0x00000004U /* Directed Exact Wakeup Enable */
+#define TXGBE_PSR_WKUP_CTL_MC 0x00000008U /* Directed Multicast Wakeup Enable*/
+#define TXGBE_PSR_WKUP_CTL_BC 0x00000010U /* Broadcast Wakeup Enable */
+#define TXGBE_PSR_WKUP_CTL_ARP 0x00000020U /* ARP Request Packet Wakeup Enable*/
+#define TXGBE_PSR_WKUP_CTL_IPV4 0x00000040U /* Directed IPv4 Pkt Wakeup Enable */
+#define TXGBE_PSR_WKUP_CTL_IPV6 0x00000080U /* Directed IPv6 Pkt Wakeup Enable */
+#define TXGBE_PSR_WKUP_CTL_IGNORE_TCO 0x00008000U /* Ignore WakeOn TCO pkts */
+#define TXGBE_PSR_WKUP_CTL_FLX0 0x00010000U /* Flexible Filter 0 Ena */
+#define TXGBE_PSR_WKUP_CTL_FLX1 0x00020000U /* Flexible Filter 1 Ena */
+#define TXGBE_PSR_WKUP_CTL_FLX2 0x00040000U /* Flexible Filter 2 Ena */
+#define TXGBE_PSR_WKUP_CTL_FLX3 0x00080000U /* Flexible Filter 3 Ena */
+#define TXGBE_PSR_WKUP_CTL_FLX4 0x00100000U /* Flexible Filter 4 Ena */
+#define TXGBE_PSR_WKUP_CTL_FLX5 0x00200000U /* Flexible Filter 5 Ena */
+#define TXGBE_PSR_WKUP_CTL_FLX_FILTERS 0x000F0000U /* Mask for 4 flex filters */
+#define TXGBE_PSR_WKUP_CTL_FLX_FILTERS_6 0x003F0000U /* Mask for 6 flex filters*/
+#define TXGBE_PSR_WKUP_CTL_FLX_FILTERS_8 0x00FF0000U /* Mask for 8 flex filters*/
+#define TXGBE_PSR_WKUP_CTL_FW_RST_WK 0x80000000U /* Ena wake on FW reset
+ * assertion */
+/* Mask for Ext. flex filters */
+#define TXGBE_PSR_WKUP_CTL_EXT_FLX_FILTERS 0x00300000U
+#define TXGBE_PSR_WKUP_CTL_ALL_FILTERS 0x000F00FFU /* Mask all 4 flex filters*/
+#define TXGBE_PSR_WKUP_CTL_ALL_FILTERS_6 0x003F00FFU /* Mask all 6 flex filters*/
+#define TXGBE_PSR_WKUP_CTL_ALL_FILTERS_8 0x00FF00FFU /* Mask all 8 flex filters*/
+#define TXGBE_PSR_WKUP_CTL_FLX_OFFSET 16 /* Offset to the Flex Filters bits*/
+
+#define TXGBE_PSR_MAX_SZ 0x15020
+
+/****************************** TDB ******************************************/
+#define TXGBE_TDB_RFCS 0x1CE00
+#define TXGBE_TDB_PB_SZ(_i) (0x1CC00 + ((_i) * 4)) /* 8 of these */
+#define TXGBE_TDB_MNG_TC 0x1CD10
+#define TXGBE_TDB_PRB_CTL 0x17010
+#define TXGBE_TDB_PBRARB_CTL 0x1CD00
+#define TXGBE_TDB_UP2TC 0x1C800
+#define TXGBE_TDB_PBRARB_CFG(_i) (0x1CD20 + ((_i) * 4)) /* 8 of (0-7) */
+
+#define TXGBE_TDB_PB_SZ_20KB 0x00005000U /* 20KB Packet Buffer */
+#define TXGBE_TDB_PB_SZ_40KB 0x0000A000U /* 40KB Packet Buffer */
+#define TXGBE_TDB_PB_SZ_MAX 0x00028000U /* 160KB Packet Buffer */
+#define TXGBE_TXPKT_SIZE_MAX 0xA /* Max Tx Packet size */
+#define TXGBE_MAX_PB 8
+
+/****************************** TSEC *****************************************/
+/* Security Control Registers */
+#define TXGBE_TSC_CTL 0x1D000
+#define TXGBE_TSC_ST 0x1D004
+#define TXGBE_TSC_BUF_AF 0x1D008
+#define TXGBE_TSC_BUF_AE 0x1D00C
+#define TXGBE_TSC_PRB_CTL 0x1D010
+#define TXGBE_TSC_MIN_IFG 0x1D020
+/* Security Bit Fields and Masks */
+#define TXGBE_TSC_CTL_SECTX_DIS 0x00000001U
+#define TXGBE_TSC_CTL_TX_DIS 0x00000002U
+#define TXGBE_TSC_CTL_STORE_FORWARD 0x00000004U
+#define TXGBE_TSC_CTL_IV_MSK_EN 0x00000008U
+#define TXGBE_TSC_ST_SECTX_RDY 0x00000001U
+#define TXGBE_TSC_ST_OFF_DIS 0x00000002U
+#define TXGBE_TSC_ST_ECC_TXERR 0x00000004U
+
+/* LinkSec (MacSec) Registers */
+#define TXGBE_TSC_LSEC_CAP 0x1D200
+#define TXGBE_TSC_LSEC_CTL 0x1D204
+#define TXGBE_TSC_LSEC_SCI_L 0x1D208
+#define TXGBE_TSC_LSEC_SCI_H 0x1D20C
+#define TXGBE_TSC_LSEC_SA 0x1D210
+#define TXGBE_TSC_LSEC_PKTNUM0 0x1D214
+#define TXGBE_TSC_LSEC_PKTNUM1 0x1D218
+#define TXGBE_TSC_LSEC_KEY0(_n) 0x1D21C
+#define TXGBE_TSC_LSEC_KEY1(_n) 0x1D22C
+#define TXGBE_TSC_LSEC_UNTAG_PKT 0x1D23C
+#define TXGBE_TSC_LSEC_ENC_PKT 0x1D240
+#define TXGBE_TSC_LSEC_PROT_PKT 0x1D244
+#define TXGBE_TSC_LSEC_ENC_OCTET 0x1D248
+#define TXGBE_TSC_LSEC_PROT_OCTET 0x1D24C
+
+/* IpSec Registers */
+#define TXGBE_TSC_IPS_IDX 0x1D100
+#define TXGBE_TSC_IPS_IDX_WT 0x80000000U
+#define TXGBE_TSC_IPS_IDX_RD 0x40000000U
+#define TXGBE_TSC_IPS_IDX_SD_IDX 0x0U /* */
+#define TXGBE_TSC_IPS_IDX_EN 0x00000001U
+#define TXGBE_TSC_IPS_SALT 0x1D104
+#define TXGBE_TSC_IPS_KEY(i) (0x1D108 + ((i) * 4))
+
+/* 1588 */
+#define TXGBE_TSC_1588_CTL 0x1D400 /* Tx Time Sync Control reg */
+#define TXGBE_TSC_1588_STMPL 0x1D404 /* Tx timestamp value Low */
+#define TXGBE_TSC_1588_STMPH 0x1D408 /* Tx timestamp value High */
+#define TXGBE_TSC_1588_SYSTIML 0x1D40C /* System time register Low */
+#define TXGBE_TSC_1588_SYSTIMH 0x1D410 /* System time register High */
+#define TXGBE_TSC_1588_INC 0x1D414 /* Increment attributes reg */
+#define TXGBE_TSC_1588_INC_IV(v) (((v) & 0xFFFFFF))
+#define TXGBE_TSC_1588_INC_IP(v) (((v) & 0xFF) << 24)
+#define TXGBE_TSC_1588_INC_IVP(v, p) \
+ (((v) & 0xFFFFFF) | TXGBE_TSC_1588_INC_IP(p))
+
+#define TXGBE_TSC_1588_ADJL 0x1D418 /* Time Adjustment Offset reg Low */
+#define TXGBE_TSC_1588_ADJH 0x1D41C /* Time Adjustment Offset reg High*/
+/* 1588 fields */
+#define TXGBE_TSC_1588_CTL_VALID 0x00000001U /* Tx timestamp valid */
+#define TXGBE_TSC_1588_CTL_ENABLED 0x00000010U /* Tx timestamping enabled */
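
A worked example of the increment encoding: TXGBE_TSC_1588_INC_IVP(v, p) packs
the 24-bit increment value v into bits 23:0 and the period p into bits 31:24,
so for instance (wr32() assumed as before):

	/* v = 0x00A000, p = 0x02  ->  register value 0x0200A000 */
	wr32(hw, TXGBE_TSC_1588_INC, TXGBE_TSC_1588_INC_IVP(0xA000, 0x02));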
+
+
+/********************************* RSEC **************************************/
+/* general rsec */
+#define TXGBE_RSC_CTL 0x17000
+#define TXGBE_RSC_ST 0x17004
+/* general rsec fields */
+#define TXGBE_RSC_CTL_SECRX_DIS 0x00000001U
+#define TXGBE_RSC_CTL_RX_DIS 0x00000002U
+#define TXGBE_RSC_CTL_CRC_STRIP 0x00000004U
+#define TXGBE_RSC_CTL_IV_MSK_EN 0x00000008U
+#define TXGBE_RSC_CTL_SAVE_MAC_ERR 0x00000040U
+#define TXGBE_RSC_ST_RSEC_RDY 0x00000001U
+#define TXGBE_RSC_ST_RSEC_OFLD_DIS 0x00000002U
+#define TXGBE_RSC_ST_ECC_RXERR 0x00000004U
+
+/* link sec */
+#define TXGBE_RSC_LSEC_CAP 0x17200
+#define TXGBE_RSC_LSEC_CTL 0x17204
+#define TXGBE_RSC_LSEC_SCI_L 0x17208
+#define TXGBE_RSC_LSEC_SCI_H 0x1720C
+#define TXGBE_RSC_LSEC_SA0 0x17210
+#define TXGBE_RSC_LSEC_SA1 0x17214
+#define TXGBE_RSC_LSEC_PKNUM0 0x17218
+#define TXGBE_RSC_LSEC_PKNUM1 0x1721C
+#define TXGBE_RSC_LSEC_KEY0(_n) 0x17220
+#define TXGBE_RSC_LSEC_KEY1(_n) 0x17230
+#define TXGBE_RSC_LSEC_UNTAG_PKT 0x17240
+#define TXGBE_RSC_LSEC_DEC_OCTET 0x17244
+#define TXGBE_RSC_LSEC_VLD_OCTET 0x17248
+#define TXGBE_RSC_LSEC_BAD_PKT 0x1724C
+#define TXGBE_RSC_LSEC_NOSCI_PKT 0x17250
+#define TXGBE_RSC_LSEC_UNSCI_PKT 0x17254
+#define TXGBE_RSC_LSEC_UNCHK_PKT 0x17258
+#define TXGBE_RSC_LSEC_DLY_PKT 0x1725C
+#define TXGBE_RSC_LSEC_LATE_PKT 0x17260
+#define TXGBE_RSC_LSEC_OK_PKT(_n) 0x17264
+#define TXGBE_RSC_LSEC_INV_PKT(_n) 0x17274
+#define TXGBE_RSC_LSEC_BADSA_PKT 0x1727C
+#define TXGBE_RSC_LSEC_INVSA_PKT 0x17280
+
+/* ipsec */
+#define TXGBE_RSC_IPS_IDX 0x17100
+#define TXGBE_RSC_IPS_IDX_WT 0x80000000U
+#define TXGBE_RSC_IPS_IDX_RD 0x40000000U
+#define TXGBE_RSC_IPS_IDX_TB_IDX 0x0U /* */
+#define TXGBE_RSC_IPS_IDX_TB_IP 0x00000002U
+#define TXGBE_RSC_IPS_IDX_TB_SPI 0x00000004U
+#define TXGBE_RSC_IPS_IDX_TB_KEY 0x00000006U
+#define TXGBE_RSC_IPS_IDX_EN 0x00000001U
+#define TXGBE_RSC_IPS_IP(i) (0x17104 + ((i) * 4))
+#define TXGBE_RSC_IPS_SPI 0x17114
+#define TXGBE_RSC_IPS_IP_IDX 0x17118
+#define TXGBE_RSC_IPS_KEY(i) (0x1711C + ((i) * 4))
+#define TXGBE_RSC_IPS_SALT 0x1712C
+#define TXGBE_RSC_IPS_MODE 0x17130
+#define TXGBE_RSC_IPS_MODE_IPV6 0x00000010
+#define TXGBE_RSC_IPS_MODE_DEC 0x00000008
+#define TXGBE_RSC_IPS_MODE_ESP 0x00000004
+#define TXGBE_RSC_IPS_MODE_AH 0x00000002
+#define TXGBE_RSC_IPS_MODE_VALID 0x00000001
+
+/************************************** ETH PHY ******************************/
+#define TXGBE_XPCS_IDA_ADDR 0x13000
+#define TXGBE_XPCS_IDA_DATA 0x13004
+#define TXGBE_ETHPHY_IDA_ADDR 0x13008
+#define TXGBE_ETHPHY_IDA_DATA 0x1300C
+
+/************************************** MNG ********************************/
+#define TXGBE_MNG_FW_SM 0x1E000
+#define TXGBE_MNG_SW_SM 0x1E004
+#define TXGBE_MNG_SWFW_SYNC 0x1E008
+#define TXGBE_MNG_MBOX 0x1E100
+#define TXGBE_MNG_MBOX_CTL 0x1E044
+#define TXGBE_MNG_OS2BMC_CNT 0x1E094
+#define TXGBE_MNG_BMC2OS_CNT 0x1E090
+
+/* Firmware Semaphore Register */
+#define TXGBE_MNG_FW_SM_MODE_MASK 0xE
+#define TXGBE_MNG_FW_SM_TS_ENABLED 0x1
+/* SW Semaphore Register bitmasks */
+#define TXGBE_MNG_SW_SM_SM 0x00000001U /* software Semaphore */
+
+/* SW_FW_SYNC definitions */
+#define TXGBE_MNG_SWFW_SYNC_SW_PHY 0x0001
+#define TXGBE_MNG_SWFW_SYNC_SW_FLASH 0x0008
+#define TXGBE_MNG_SWFW_SYNC_SW_MB 0x0004
+
+#define TXGBE_MNG_MBOX_CTL_SWRDY 0x1
+#define TXGBE_MNG_MBOX_CTL_SWACK 0x2
+#define TXGBE_MNG_MBOX_CTL_FWRDY 0x4
+#define TXGBE_MNG_MBOX_CTL_FWACK 0x8
+
+/************************************* ETH MAC *****************************/
+#define TXGBE_MAC_TX_CFG 0x11000
+#define TXGBE_MAC_RX_CFG 0x11004
+#define TXGBE_MAC_PKT_FLT 0x11008
+#define TXGBE_MAC_PKT_FLT_PR (0x1) /* promiscuous mode */
+#define TXGBE_MAC_PKT_FLT_RA (0x80000000) /* receive all */
+#define TXGBE_MAC_WDG_TIMEOUT 0x1100C
+#define TXGBE_MAC_RX_FLOW_CTRL 0x11090
+#define TXGBE_MAC_ADDRESS0_HIGH 0x11300
+#define TXGBE_MAC_ADDRESS0_LOW 0x11304
+
+#define TXGBE_MAC_TX_CFG_TE 0x00000001U
+#define TXGBE_MAC_TX_CFG_SPEED_MASK 0x60000000U
+#define TXGBE_MAC_TX_CFG_SPEED_10G 0x00000000U
+#define TXGBE_MAC_TX_CFG_SPEED_1G 0x60000000U
+#define TXGBE_MAC_RX_CFG_RE 0x00000001U
+#define TXGBE_MAC_RX_CFG_JE 0x00000100U
+#define TXGBE_MAC_RX_CFG_LM 0x00000400U
+#define TXGBE_MAC_WDG_TIMEOUT_PWE 0x00000100U
+#define TXGBE_MAC_WDG_TIMEOUT_WTO_MASK 0x0000000FU
+#define TXGBE_MAC_WDG_TIMEOUT_WTO_DELTA 2
+
+#define TXGBE_MAC_RX_FLOW_CTRL_RFE 0x00000001U /* receive fc enable */
+#define TXGBE_MAC_RX_FLOW_CTRL_PFCE 0x00000100U /* pfc enable */
+
+#define TXGBE_MSCA 0x11200
+#define TXGBE_MSCA_RA(v) ((0xFFFF & (v)))
+#define TXGBE_MSCA_PA(v) ((0x1F & (v)) << 16)
+#define TXGBE_MSCA_DA(v) ((0x1F & (v)) << 21)
+#define TXGBE_MSCC 0x11204
+#define TXGBE_MSCC_DATA(v) ((0xFFFF & (v)))
+#define TXGBE_MSCC_CMD(v) ((0x3 & (v)) << 16)
+enum TXGBE_MSCA_CMD_value {
+ TXGBE_MSCA_CMD_RSV = 0,
+ TXGBE_MSCA_CMD_WRITE,
+ TXGBE_MSCA_CMD_POST_READ,
+ TXGBE_MSCA_CMD_READ,
+};
+#define TXGBE_MSCC_SADDR ((0x1U) << 18)
+#define TXGBE_MSCC_CR(v) ((0x8U & (v)) << 19)
+#define TXGBE_MSCC_BUSY ((0x1U) << 22)
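
A minimal sketch of a clause-45 MDIO read through MSCA/MSCC; whether software
sets TXGBE_MSCC_BUSY to kick off the transaction is an assumption here, as are
the rd32()/wr32() helpers:

static u16 txgbe_example_mdio_read(struct txgbe_hw *hw, u32 phy_addr,
				   u32 dev_type, u32 reg_addr)
{
	wr32(hw, TXGBE_MSCA, TXGBE_MSCA_RA(reg_addr) |
	     TXGBE_MSCA_PA(phy_addr) | TXGBE_MSCA_DA(dev_type));
	wr32(hw, TXGBE_MSCC, TXGBE_MSCC_CMD(TXGBE_MSCA_CMD_READ) |
	     TXGBE_MSCC_BUSY);

	while (rd32(hw, TXGBE_MSCC) & TXGBE_MSCC_BUSY)
		cpu_relax();

	return TXGBE_MSCC_DATA(rd32(hw, TXGBE_MSCC));
}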
+
+/* EEE registers */
+
+/* statistic */
+#define TXGBE_MAC_LXONRXC 0x11E0C
+#define TXGBE_MAC_LXOFFRXC 0x11988
+#define TXGBE_MAC_PXONRXC(_i) (0x11E30 + ((_i) * 4)) /* 8 of these */
+#define TXGBE_MAC_PXOFFRXC 0x119DC
+#define TXGBE_RX_BC_FRAMES_GOOD_LOW 0x11918
+#define TXGBE_RX_CRC_ERROR_FRAMES_LOW 0x11928
+#define TXGBE_RX_LEN_ERROR_FRAMES_LOW 0x11978
+#define TXGBE_RX_UNDERSIZE_FRAMES_GOOD 0x11938
+#define TXGBE_RX_OVERSIZE_FRAMES_GOOD 0x1193C
+#define TXGBE_RX_FRAME_CNT_GOOD_BAD_LOW 0x11900
+#define TXGBE_TX_FRAME_CNT_GOOD_BAD_LOW 0x1181C
+#define TXGBE_TX_MC_FRAMES_GOOD_LOW 0x1182C
+#define TXGBE_TX_BC_FRAMES_GOOD_LOW 0x11824
+#define TXGBE_MMC_CONTROL 0x11800
+#define TXGBE_MMC_CONTROL_RSTONRD 0x4 /* reset on read */
+#define TXGBE_MMC_CONTROL_UP 0x700
+
+
+/********************************* BAR registers ***************************/
+/* Interrupt Registers */
+#define TXGBE_BME_CTL 0x12020
+#define TXGBE_PX_MISC_IC 0x100
+#define TXGBE_PX_MISC_ICS 0x104
+#define TXGBE_PX_MISC_IEN 0x108
+#define TXGBE_PX_MISC_IVAR 0x4FC
+#define TXGBE_PX_GPIE 0x118
+#define TXGBE_PX_ISB_ADDR_L 0x160
+#define TXGBE_PX_ISB_ADDR_H 0x164
+#define TXGBE_PX_TCP_TIMER 0x170
+#define TXGBE_PX_ITRSEL 0x180
+#define TXGBE_PX_IC(_i) (0x120 + (_i) * 4)
+#define TXGBE_PX_ICS(_i) (0x130 + (_i) * 4)
+#define TXGBE_PX_IMS(_i) (0x140 + (_i) * 4)
+#define TXGBE_PX_IMC(_i) (0x150 + (_i) * 4)
+#define TXGBE_PX_IVAR(_i) (0x500 + (_i) * 4)
+#define TXGBE_PX_ITR(_i) (0x200 + (_i) * 4)
+#define TXGBE_PX_TRANSACTION_PENDING 0x168
+#define TXGBE_PX_INTA 0x110
+
+/* Interrupt register bitmasks */
+/* Extended Interrupt Cause Read */
+#define TXGBE_PX_MISC_IC_ETH_LKDN 0x00000100U /* eth link down */
+#define TXGBE_PX_MISC_IC_DEV_RST 0x00000400U /* device reset event */
+#define TXGBE_PX_MISC_IC_TIMESYNC 0x00000800U /* time sync */
+#define TXGBE_PX_MISC_IC_STALL 0x00001000U /* trans or recv path is
+ * stalled */
+#define TXGBE_PX_MISC_IC_LINKSEC 0x00002000U /* Tx LinkSec require key
+ * exchange */
+#define TXGBE_PX_MISC_IC_RX_MISS 0x00004000U /* Packet Buffer Overrun */
+#define TXGBE_PX_MISC_IC_FLOW_DIR 0x00008000U /* FDir Exception */
+#define TXGBE_PX_MISC_IC_I2C 0x00010000U /* I2C interrupt */
+#define TXGBE_PX_MISC_IC_ETH_EVENT 0x00020000U /* err reported by MAC except
+ * eth link down */
+#define TXGBE_PX_MISC_IC_ETH_LK 0x00040000U /* link up */
+#define TXGBE_PX_MISC_IC_ETH_AN 0x00080000U /* link auto-nego done */
+#define TXGBE_PX_MISC_IC_INT_ERR 0x00100000U /* integrity error */
+#define TXGBE_PX_MISC_IC_SPI 0x00200000U /* SPI interface */
+#define TXGBE_PX_MISC_IC_VF_MBOX 0x00800000U /* VF-PF message box */
+#define TXGBE_PX_MISC_IC_GPIO 0x04000000U /* GPIO interrupt */
+#define TXGBE_PX_MISC_IC_PCIE_REQ_ERR 0x08000000U /* pcie request error int */
+#define TXGBE_PX_MISC_IC_OVER_HEAT 0x10000000U /* overheat detection */
+#define TXGBE_PX_MISC_IC_PROBE_MATCH 0x20000000U /* probe match */
+#define TXGBE_PX_MISC_IC_MNG_HOST_MBOX 0x40000000U /* mng mailbox */
+#define TXGBE_PX_MISC_IC_TIMER 0x80000000U /* tcp timer */
+
+/* Extended Interrupt Cause Set */
+#define TXGBE_PX_MISC_ICS_ETH_LKDN 0x00000100U
+#define TXGBE_PX_MISC_ICS_DEV_RST 0x00000400U
+#define TXGBE_PX_MISC_ICS_TIMESYNC 0x00000800U
+#define TXGBE_PX_MISC_ICS_STALL 0x00001000U
+#define TXGBE_PX_MISC_ICS_LINKSEC 0x00002000U
+#define TXGBE_PX_MISC_ICS_RX_MISS 0x00004000U
+#define TXGBE_PX_MISC_ICS_FLOW_DIR 0x00008000U
+#define TXGBE_PX_MISC_ICS_I2C 0x00010000U
+#define TXGBE_PX_MISC_ICS_ETH_EVENT 0x00020000U
+#define TXGBE_PX_MISC_ICS_ETH_LK 0x00040000U
+#define TXGBE_PX_MISC_ICS_ETH_AN 0x00080000U
+#define TXGBE_PX_MISC_ICS_INT_ERR 0x00100000U
+#define TXGBE_PX_MISC_ICS_SPI 0x00200000U
+#define TXGBE_PX_MISC_ICS_VF_MBOX 0x00800000U
+#define TXGBE_PX_MISC_ICS_GPIO 0x04000000U
+#define TXGBE_PX_MISC_ICS_PCIE_REQ_ERR 0x08000000U
+#define TXGBE_PX_MISC_ICS_OVER_HEAT 0x10000000U
+#define TXGBE_PX_MISC_ICS_PROBE_MATCH 0x20000000U
+#define TXGBE_PX_MISC_ICS_MNG_HOST_MBOX 0x40000000U
+#define TXGBE_PX_MISC_ICS_TIMER 0x80000000U
+
+/* Extended Interrupt Enable Set */
+#define TXGBE_PX_MISC_IEN_ETH_LKDN 0x00000100U
+#define TXGBE_PX_MISC_IEN_DEV_RST 0x00000400U
+#define TXGBE_PX_MISC_IEN_TIMESYNC 0x00000800U
+#define TXGBE_PX_MISC_IEN_STALL 0x00001000U
+#define TXGBE_PX_MISC_IEN_LINKSEC 0x00002000U
+#define TXGBE_PX_MISC_IEN_RX_MISS 0x00004000U
+#define TXGBE_PX_MISC_IEN_FLOW_DIR 0x00008000U
+#define TXGBE_PX_MISC_IEN_I2C 0x00010000U
+#define TXGBE_PX_MISC_IEN_ETH_EVENT 0x00020000U
+#define TXGBE_PX_MISC_IEN_ETH_LK 0x00040000U
+#define TXGBE_PX_MISC_IEN_ETH_AN 0x00080000U
+#define TXGBE_PX_MISC_IEN_INT_ERR 0x00100000U
+#define TXGBE_PX_MISC_IEN_SPI 0x00200000U
+#define TXGBE_PX_MISC_IEN_VF_MBOX 0x00800000U
+#define TXGBE_PX_MISC_IEN_GPIO 0x04000000U
+#define TXGBE_PX_MISC_IEN_PCIE_REQ_ERR 0x08000000U
+#define TXGBE_PX_MISC_IEN_OVER_HEAT 0x10000000U
+#define TXGBE_PX_MISC_IEN_PROBE_MATCH 0x20000000U
+#define TXGBE_PX_MISC_IEN_MNG_HOST_MBOX 0x40000000U
+#define TXGBE_PX_MISC_IEN_TIMER 0x80000000U
+
+#define TXGBE_PX_MISC_IEN_MASK ( \
+ TXGBE_PX_MISC_IEN_ETH_LKDN| \
+ TXGBE_PX_MISC_IEN_DEV_RST | \
+ TXGBE_PX_MISC_IEN_ETH_EVENT | \
+ TXGBE_PX_MISC_IEN_ETH_LK | \
+ TXGBE_PX_MISC_IEN_ETH_AN | \
+ TXGBE_PX_MISC_IEN_INT_ERR | \
+ TXGBE_PX_MISC_IEN_VF_MBOX | \
+ TXGBE_PX_MISC_IEN_GPIO | \
+ TXGBE_PX_MISC_IEN_MNG_HOST_MBOX | \
+ TXGBE_PX_MISC_IEN_STALL | \
+ TXGBE_PX_MISC_IEN_PCIE_REQ_ERR | \
+ TXGBE_PX_MISC_IEN_TIMER)
+
+/* General purpose Interrupt Enable */
+#define TXGBE_PX_GPIE_MODEL 0x00000001U
+#define TXGBE_PX_GPIE_IMEN 0x00000002U
+#define TXGBE_PX_GPIE_LL_INTERVAL 0x000000F0U
+#define TXGBE_PX_GPIE_RSC_DELAY 0x00000700U
+
+/* Interrupt Vector Allocation Registers */
+#define TXGBE_PX_IVAR_REG_NUM 64
+#define TXGBE_PX_IVAR_ALLOC_VAL 0x80 /* Interrupt Allocation valid */
+
+#define TXGBE_MAX_INT_RATE 500000
+#define TXGBE_MIN_INT_RATE 980
+#define TXGBE_MAX_EITR 0x00000FF8U
+#define TXGBE_MIN_EITR 8
+#define TXGBE_PX_ITR_ITR_INT_MASK 0x00000FF8U
+#define TXGBE_PX_ITR_LLI_CREDIT 0x001f0000U
+#define TXGBE_PX_ITR_LLI_MOD 0x00008000U
+#define TXGBE_PX_ITR_CNT_WDIS 0x80000000U
+#define TXGBE_PX_ITR_ITR_CNT 0x0FE00000U
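
For the throttle fields above, a sketch converting an interrupts-per-second
target into an ITR write; the 1 us granularity of the interval field is an
assumption, and wr32() an assumed accessor:

	u32 itr = 1000000U / rate;		/* rate in interrupts/sec */

	itr = clamp_val(itr, TXGBE_MIN_EITR, TXGBE_MAX_EITR) &
	      TXGBE_PX_ITR_ITR_INT_MASK;
	wr32(hw, TXGBE_PX_ITR(queue), itr | TXGBE_PX_ITR_CNT_WDIS);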
+
+/* transmit DMA Registers */
+#define TXGBE_PX_TR_BAL(_i) (0x03000 + ((_i) * 0x40))
+#define TXGBE_PX_TR_BAH(_i) (0x03004 + ((_i) * 0x40))
+#define TXGBE_PX_TR_WP(_i) (0x03008 + ((_i) * 0x40))
+#define TXGBE_PX_TR_RP(_i) (0x0300C + ((_i) * 0x40))
+#define TXGBE_PX_TR_CFG(_i) (0x03010 + ((_i) * 0x40))
+/* Transmit Config masks */
+#define TXGBE_PX_TR_CFG_ENABLE (1) /* Ena specific Tx Queue */
+#define TXGBE_PX_TR_CFG_TR_SIZE_SHIFT 1 /* tx desc number per ring */
+#define TXGBE_PX_TR_CFG_SWFLSH (1 << 26) /* Tx Desc. wr-bk flushing */
+#define TXGBE_PX_TR_CFG_WTHRESH_SHIFT 16 /* shift to WTHRESH bits */
+#define TXGBE_PX_TR_CFG_THRE_SHIFT 8
+
+
+#define TXGBE_PX_TR_RPn(q_per_pool, vf_number, vf_q_index) \
+ (TXGBE_PX_TR_RP((q_per_pool)*(vf_number) + (vf_q_index)))
+#define TXGBE_PX_TR_WPn(q_per_pool, vf_number, vf_q_index) \
+ (TXGBE_PX_TR_WP((q_per_pool)*(vf_number) + (vf_q_index)))
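
The RPn/WPn wrappers just linearize a (pool, queue) pair into a flat ring
index; for example, with 4 queues per pool, VF 2's second ring maps as:

	/* 4 * 2 + 1 == ring 9, so this expands to TXGBE_PX_TR_WP(9) */
	u32 tail_reg = TXGBE_PX_TR_WPn(4, 2, 1);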
+
+/* Receive DMA Registers */
+#define TXGBE_PX_RR_BAL(_i) (0x01000 + ((_i) * 0x40))
+#define TXGBE_PX_RR_BAH(_i) (0x01004 + ((_i) * 0x40))
+#define TXGBE_PX_RR_WP(_i) (0x01008 + ((_i) * 0x40))
+#define TXGBE_PX_RR_RP(_i) (0x0100C + ((_i) * 0x40))
+#define TXGBE_PX_RR_CFG(_i) (0x01010 + ((_i) * 0x40))
+/* PX_RR_CFG bit definitions */
+#define TXGBE_PX_RR_CFG_RR_SIZE_SHIFT 1
+#define TXGBE_PX_RR_CFG_BSIZEPKT_SHIFT 2 /* so many KBs */
+#define TXGBE_PX_RR_CFG_BSIZEHDRSIZE_SHIFT 6 /* header size field has 64-byte
+ * resolution (>> 6) and sits at
+ * bit 12 (<< 12), so the net
+ * shift is (<< 6)
+ */
+#define TXGBE_PX_RR_CFG_DROP_EN 0x40000000U
+#define TXGBE_PX_RR_CFG_VLAN 0x80000000U
+#define TXGBE_PX_RR_CFG_RSC 0x20000000U
+#define TXGBE_PX_RR_CFG_CNTAG 0x10000000U
+#define TXGBE_PX_RR_CFG_RSC_CNT_MD 0x08000000U
+#define TXGBE_PX_RR_CFG_SPLIT_MODE 0x04000000U
+#define TXGBE_PX_RR_CFG_STALL 0x02000000U
+#define TXGBE_PX_RR_CFG_MAX_RSCBUF_1 0x00000000U
+#define TXGBE_PX_RR_CFG_MAX_RSCBUF_4 0x00800000U
+#define TXGBE_PX_RR_CFG_MAX_RSCBUF_8 0x01000000U
+#define TXGBE_PX_RR_CFG_MAX_RSCBUF_16 0x01800000U
+#define TXGBE_PX_RR_CFG_RR_THER 0x00070000U
+#define TXGBE_PX_RR_CFG_RR_THER_SHIFT 16
+
+#define TXGBE_PX_RR_CFG_RR_HDR_SZ 0x0000F000U
+#define TXGBE_PX_RR_CFG_RR_BUF_SZ 0x00000F00U
+#define TXGBE_PX_RR_CFG_RR_SZ 0x0000007EU
+#define TXGBE_PX_RR_CFG_RR_EN 0x00000001U
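+/* Illustrative sketch (assumed usage, not taken from the driver code):
+ * enabling receive ring 'idx' with the wr32m() helper defined near the
+ * end of this header:
+ *
+ *	wr32m(hw, TXGBE_PX_RR_CFG(idx),
+ *	      TXGBE_PX_RR_CFG_RR_EN, TXGBE_PX_RR_CFG_RR_EN);
+ */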
+
+/* statistic */
+#define TXGBE_PX_MPRC(_i) (0x1020 + ((_i) * 64))
+#define TXGBE_VX_GPRC(_i) (0x01014 + (0x40 * (_i)))
+#define TXGBE_VX_GPTC(_i) (0x03014 + (0x40 * (_i)))
+#define TXGBE_VX_GORC_LSB(_i) (0x01018 + (0x40 * (_i)))
+#define TXGBE_VX_GORC_MSB(_i) (0x0101C + (0x40 * (_i)))
+#define TXGBE_VX_GOTC_LSB(_i) (0x03018 + (0x40 * (_i)))
+#define TXGBE_VX_GOTC_MSB(_i) (0x0301C + (0x40 * (_i)))
+#define TXGBE_VX_MPRC(_i) (0x01020 + (0x40 * (_i)))
+
+#define TXGBE_PX_GPRC 0x12504
+#define TXGBE_PX_GPTC 0x18308
+
+#define TXGBE_PX_GORC_LSB 0x12508
+#define TXGBE_PX_GORC_MSB 0x1250C
+
+#define TXGBE_PX_GOTC_LSB 0x1830C
+#define TXGBE_PX_GOTC_MSB 0x18310
+
+/************************************* Stats registers ************************/
+#define TXGBE_FCCRC 0x15160 /* Num of Good Eth CRC w/ Bad FC CRC */
+#define TXGBE_FCOERPDC 0x12514 /* FCoE Rx Packets Dropped Count */
+#define TXGBE_FCLAST 0x12518 /* FCoE Last Error Count */
+#define TXGBE_FCOEPRC 0x15164 /* Number of FCoE Packets Received */
+#define TXGBE_FCOEDWRC 0x15168 /* Number of FCoE DWords Received */
+#define TXGBE_FCOEPTC 0x18318 /* Number of FCoE Packets Transmitted */
+#define TXGBE_FCOEDWTC 0x1831C /* Number of FCoE DWords Transmitted */
+
+/*************************** Flash region definition *************************/
+/* EEC Register */
+#define TXGBE_EEC_SK 0x00000001U /* EEPROM Clock */
+#define TXGBE_EEC_CS 0x00000002U /* EEPROM Chip Select */
+#define TXGBE_EEC_DI 0x00000004U /* EEPROM Data In */
+#define TXGBE_EEC_DO 0x00000008U /* EEPROM Data Out */
+#define TXGBE_EEC_FWE_MASK 0x00000030U /* FLASH Write Enable */
+#define TXGBE_EEC_FWE_DIS 0x00000010U /* Disable FLASH writes */
+#define TXGBE_EEC_FWE_EN 0x00000020U /* Enable FLASH writes */
+#define TXGBE_EEC_FWE_SHIFT 4
+#define TXGBE_EEC_REQ 0x00000040U /* EEPROM Access Request */
+#define TXGBE_EEC_GNT 0x00000080U /* EEPROM Access Grant */
+#define TXGBE_EEC_PRES 0x00000100U /* EEPROM Present */
+#define TXGBE_EEC_ARD 0x00000200U /* EEPROM Auto Read Done */
+#define TXGBE_EEC_FLUP 0x00800000U /* Flash update command */
+#define TXGBE_EEC_SEC1VAL 0x02000000U /* Sector 1 Valid */
+#define TXGBE_EEC_FLUDONE 0x04000000U /* Flash update done */
+/* EEPROM Addressing bits based on type (0-small, 1-large) */
+#define TXGBE_EEC_ADDR_SIZE 0x00000400U
+#define TXGBE_EEC_SIZE 0x00007800U /* EEPROM Size */
+#define TXGBE_EERD_MAX_ADDR 0x00003FFFU /* EERD allows 14 bits for addr. */
+
+#define TXGBE_EEC_SIZE_SHIFT 11
+#define TXGBE_EEPROM_WORD_SIZE_SHIFT 6
+#define TXGBE_EEPROM_OPCODE_BITS 8
+
+/* FLA Register */
+#define TXGBE_FLA_LOCKED 0x00000040U
+
+/* Part Number String Length */
+#define TXGBE_PBANUM_LENGTH 32
+
+/* Checksum and EEPROM pointers */
+#define TXGBE_PBANUM_PTR_GUARD 0xFAFA
+#define TXGBE_EEPROM_CHECKSUM 0x2F
+#define TXGBE_EEPROM_SUM 0xBABA
+#define TXGBE_ATLAS0_CONFIG_PTR 0x04
+#define TXGBE_PHY_PTR 0x04
+#define TXGBE_ATLAS1_CONFIG_PTR 0x05
+#define TXGBE_OPTION_ROM_PTR 0x05
+#define TXGBE_PCIE_GENERAL_PTR 0x06
+#define TXGBE_PCIE_CONFIG0_PTR 0x07
+#define TXGBE_PCIE_CONFIG1_PTR 0x08
+#define TXGBE_CORE0_PTR 0x09
+#define TXGBE_CORE1_PTR 0x0A
+#define TXGBE_MAC0_PTR 0x0B
+#define TXGBE_MAC1_PTR 0x0C
+#define TXGBE_CSR0_CONFIG_PTR 0x0D
+#define TXGBE_CSR1_CONFIG_PTR 0x0E
+#define TXGBE_PCIE_ANALOG_PTR 0x02
+#define TXGBE_SHADOW_RAM_SIZE 0x4000
+#define TXGBE_TXGBE_PCIE_GENERAL_SIZE 0x24
+#define TXGBE_PCIE_CONFIG_SIZE 0x08
+#define TXGBE_EEPROM_LAST_WORD 0x800
+#define TXGBE_FW_PTR 0x0F
+#define TXGBE_PBANUM0_PTR 0x05
+#define TXGBE_PBANUM1_PTR 0x06
+#define TXGBE_ALT_MAC_ADDR_PTR 0x37
+#define TXGBE_FREE_SPACE_PTR 0x3E
+#define TXGBE_SW_REGION_PTR 0x1C
+
+#define TXGBE_SAN_MAC_ADDR_PTR 0x18
+#define TXGBE_DEVICE_CAPS 0x1C
+#define TXGBE_EEPROM_VERSION_L 0x1D
+#define TXGBE_EEPROM_VERSION_H 0x1E
+#define TXGBE_ISCSI_BOOT_CONFIG 0x07
+
+#define TXGBE_SERIAL_NUMBER_MAC_ADDR 0x11
+#define TXGBE_MAX_MSIX_VECTORS_SAPPHIRE 0x40
+
+/* MSI-X capability fields masks */
+#define TXGBE_PCIE_MSIX_TBL_SZ_MASK 0x7FF
+
+/* Legacy EEPROM word offsets */
+#define TXGBE_ISCSI_BOOT_CAPS 0x0033
+#define TXGBE_ISCSI_SETUP_PORT_0 0x0030
+#define TXGBE_ISCSI_SETUP_PORT_1 0x0034
+
+/* EEPROM Commands - SPI */
+#define TXGBE_EEPROM_MAX_RETRY_SPI 5000 /* Max wait 5ms for RDY signal */
+#define TXGBE_EEPROM_STATUS_RDY_SPI 0x01
+#define TXGBE_EEPROM_READ_OPCODE_SPI 0x03 /* EEPROM read opcode */
+#define TXGBE_EEPROM_WRITE_OPCODE_SPI 0x02 /* EEPROM write opcode */
+#define TXGBE_EEPROM_A8_OPCODE_SPI 0x08 /* opcode bit-3 = addr bit-8 */
+#define TXGBE_EEPROM_WREN_OPCODE_SPI 0x06 /* EEPROM set Write Ena latch */
+/* EEPROM reset Write Enable latch */
+#define TXGBE_EEPROM_WRDI_OPCODE_SPI 0x04
+#define TXGBE_EEPROM_RDSR_OPCODE_SPI 0x05 /* EEPROM read Status reg */
+#define TXGBE_EEPROM_WRSR_OPCODE_SPI 0x01 /* EEPROM write Status reg */
+#define TXGBE_EEPROM_ERASE4K_OPCODE_SPI 0x20 /* EEPROM ERASE 4KB */
+#define TXGBE_EEPROM_ERASE64K_OPCODE_SPI 0xD8 /* EEPROM ERASE 64KB */
+#define TXGBE_EEPROM_ERASE256_OPCODE_SPI 0xDB /* EEPROM ERASE 256B */
+
+/* EEPROM Read Register */
+#define TXGBE_EEPROM_RW_REG_DATA 16 /* data offset in EEPROM read reg */
+#define TXGBE_EEPROM_RW_REG_DONE 2 /* Offset to READ done bit */
+#define TXGBE_EEPROM_RW_REG_START 1 /* First bit to start operation */
+#define TXGBE_EEPROM_RW_ADDR_SHIFT 2 /* Shift to the address bits */
+#define TXGBE_NVM_POLL_WRITE 1 /* Flag for polling for wr complete */
+#define TXGBE_NVM_POLL_READ 0 /* Flag for polling for rd complete */
+
+#define NVM_INIT_CTRL_3 0x38
+#define NVM_INIT_CTRL_3_LPLU 0x8
+#define NVM_INIT_CTRL_3_D10GMP_PORT0 0x40
+#define NVM_INIT_CTRL_3_D10GMP_PORT1 0x100
+
+#define TXGBE_ETH_LENGTH_OF_ADDRESS 6
+
+#define TXGBE_EEPROM_PAGE_SIZE_MAX 128
+#define TXGBE_EEPROM_RD_BUFFER_MAX_COUNT 256 /* words rd in burst */
+#define TXGBE_EEPROM_WR_BUFFER_MAX_COUNT 256 /* words wr in burst */
+#define TXGBE_EEPROM_CTRL_2 1 /* EEPROM CTRL word 2 */
+#define TXGBE_EEPROM_CCD_BIT 2
+
+#ifndef TXGBE_EEPROM_GRANT_ATTEMPTS
+#define TXGBE_EEPROM_GRANT_ATTEMPTS 1000 /* EEPROM attempts to gain grant */
+#endif
+
+#ifndef TXGBE_EERD_EEWR_ATTEMPTS
+/* Number of 5 microseconds we wait for EERD read and
+ * EERW write to complete */
+#define TXGBE_EERD_EEWR_ATTEMPTS 100000
+#endif
+
+#ifndef TXGBE_FLUDONE_ATTEMPTS
+/* # attempts we wait for flush update to complete */
+#define TXGBE_FLUDONE_ATTEMPTS 20000
+#endif
+
+#define TXGBE_PCIE_CTRL2 0x5 /* PCIe Control 2 Offset */
+#define TXGBE_PCIE_CTRL2_DUMMY_ENABLE 0x8 /* Dummy Function Enable */
+#define TXGBE_PCIE_CTRL2_LAN_DISABLE 0x2 /* LAN PCI Disable */
+#define TXGBE_PCIE_CTRL2_DISABLE_SELECT 0x1 /* LAN Disable Select */
+
+#define TXGBE_SAN_MAC_ADDR_PORT0_OFFSET 0x0
+#define TXGBE_SAN_MAC_ADDR_PORT1_OFFSET 0x3
+#define TXGBE_DEVICE_CAPS_ALLOW_ANY_SFP 0x1
+#define TXGBE_DEVICE_CAPS_FCOE_OFFLOADS 0x2
+#define TXGBE_FW_LESM_PARAMETERS_PTR 0x2
+#define TXGBE_FW_LESM_STATE_1 0x1
+#define TXGBE_FW_LESM_STATE_ENABLED 0x8000 /* LESM Enable bit */
+#define TXGBE_FW_PASSTHROUGH_PATCH_CONFIG_PTR 0x4
+#define TXGBE_FW_PATCH_VERSION_4 0x7
+#define TXGBE_FCOE_IBA_CAPS_BLK_PTR 0x33 /* iSCSI/FCOE block */
+#define TXGBE_FCOE_IBA_CAPS_FCOE 0x20 /* FCOE flags */
+#define TXGBE_ISCSI_FCOE_BLK_PTR 0x17 /* iSCSI/FCOE block */
+#define TXGBE_ISCSI_FCOE_FLAGS_OFFSET 0x0 /* FCOE flags */
+#define TXGBE_ISCSI_FCOE_FLAGS_ENABLE 0x1 /* FCOE flags enable bit */
+#define TXGBE_ALT_SAN_MAC_ADDR_BLK_PTR 0x17 /* Alt. SAN MAC block */
+#define TXGBE_ALT_SAN_MAC_ADDR_CAPS_OFFSET 0x0 /* Alt SAN MAC capability */
+#define TXGBE_ALT_SAN_MAC_ADDR_PORT0_OFFSET 0x1 /* Alt SAN MAC 0 offset */
+#define TXGBE_ALT_SAN_MAC_ADDR_PORT1_OFFSET 0x4 /* Alt SAN MAC 1 offset */
+#define TXGBE_ALT_SAN_MAC_ADDR_WWNN_OFFSET 0x7 /* Alt WWNN prefix offset */
+#define TXGBE_ALT_SAN_MAC_ADDR_WWPN_OFFSET 0x8 /* Alt WWPN prefix offset */
+#define TXGBE_ALT_SAN_MAC_ADDR_CAPS_SANMAC 0x0 /* Alt SAN MAC exists */
+#define TXGBE_ALT_SAN_MAC_ADDR_CAPS_ALTWWN 0x1 /* Alt WWN base exists */
+#define TXGBE_DEVICE_CAPS_WOL_PORT0_1 0x4 /* WoL supported on ports 0 & 1 */
+#define TXGBE_DEVICE_CAPS_WOL_PORT0 0x8 /* WoL supported on port 0 */
+#define TXGBE_DEVICE_CAPS_WOL_MASK 0xC /* Mask for WoL capabilities */
+
+/******************************** PCI Bus Info *******************************/
+#define TXGBE_PCI_DEVICE_STATUS 0xAA
+#define TXGBE_PCI_DEVICE_STATUS_TRANSACTION_PENDING 0x0020
+#define TXGBE_PCI_LINK_STATUS 0xB2
+#define TXGBE_PCI_DEVICE_CONTROL2 0xC8
+#define TXGBE_PCI_LINK_WIDTH 0x3F0
+#define TXGBE_PCI_LINK_WIDTH_1 0x10
+#define TXGBE_PCI_LINK_WIDTH_2 0x20
+#define TXGBE_PCI_LINK_WIDTH_4 0x40
+#define TXGBE_PCI_LINK_WIDTH_8 0x80
+#define TXGBE_PCI_LINK_SPEED 0xF
+#define TXGBE_PCI_LINK_SPEED_2500 0x1
+#define TXGBE_PCI_LINK_SPEED_5000 0x2
+#define TXGBE_PCI_LINK_SPEED_8000 0x3
+#define TXGBE_PCI_HEADER_TYPE_REGISTER 0x0E
+#define TXGBE_PCI_HEADER_TYPE_MULTIFUNC 0x80
+#define TXGBE_PCI_DEVICE_CONTROL2_16ms 0x0005
+
+#define TXGBE_PCIDEVCTRL2_RELAX_ORDER_OFFSET 4
+#define TXGBE_PCIDEVCTRL2_RELAX_ORDER_MASK \
+ (0x0001 << TXGBE_PCIDEVCTRL2_RELAX_ORDER_OFFSET)
+#define TXGBE_PCIDEVCTRL2_RELAX_ORDER_ENABLE \
+ (0x01 << TXGBE_PCIDEVCTRL2_RELAX_ORDER_OFFSET)
+
+#define TXGBE_PCIDEVCTRL2_TIMEO_MASK 0xf
+#define TXGBE_PCIDEVCTRL2_16_32ms_def 0x0
+#define TXGBE_PCIDEVCTRL2_50_100us 0x1
+#define TXGBE_PCIDEVCTRL2_1_2ms 0x2
+#define TXGBE_PCIDEVCTRL2_16_32ms 0x5
+#define TXGBE_PCIDEVCTRL2_65_130ms 0x6
+#define TXGBE_PCIDEVCTRL2_260_520ms 0x9
+#define TXGBE_PCIDEVCTRL2_1_2s 0xa
+#define TXGBE_PCIDEVCTRL2_4_8s 0xd
+#define TXGBE_PCIDEVCTRL2_17_34s 0xe
+
+
+/******************* Receive Descriptor bit definitions **********************/
+#define TXGBE_RXD_IPSEC_STATUS_SECP 0x00020000U
+#define TXGBE_RXD_IPSEC_ERROR_INVALID_PROTOCOL 0x08000000U
+#define TXGBE_RXD_IPSEC_ERROR_INVALID_LENGTH 0x10000000U
+#define TXGBE_RXD_IPSEC_ERROR_AUTH_FAILED 0x18000000U
+#define TXGBE_RXD_IPSEC_ERROR_BIT_MASK 0x18000000U
+
+#define TXGBE_RXD_NEXTP_MASK 0x000FFFF0U /* Next Descriptor Index */
+#define TXGBE_RXD_NEXTP_SHIFT 0x00000004U
+#define TXGBE_RXD_STAT_MASK 0x000fffffU /* Stat/NEXTP: bit 0-19 */
+#define TXGBE_RXD_STAT_DD 0x00000001U /* Done */
+#define TXGBE_RXD_STAT_EOP 0x00000002U /* End of Packet */
+#define TXGBE_RXD_STAT_CLASS_ID_MASK 0x0000001CU
+#define TXGBE_RXD_STAT_CLASS_ID_TC_RSS 0x00000000U
+#define TXGBE_RXD_STAT_CLASS_ID_FLM 0x00000004U /* FDir Match */
+#define TXGBE_RXD_STAT_CLASS_ID_SYN 0x00000008U
+#define TXGBE_RXD_STAT_CLASS_ID_5_TUPLE 0x0000000CU
+#define TXGBE_RXD_STAT_CLASS_ID_L2_ETYPE 0x00000010U
+#define TXGBE_RXD_STAT_VP 0x00000020U /* IEEE VLAN Pkt */
+#define TXGBE_RXD_STAT_UDPCS 0x00000040U /* UDP xsum calculated */
+#define TXGBE_RXD_STAT_L4CS 0x00000080U /* L4 xsum calculated */
+#define TXGBE_RXD_STAT_IPCS 0x00000100U /* IP xsum calculated */
+#define TXGBE_RXD_STAT_PIF 0x00000200U /* passed in-exact filter */
+#define TXGBE_RXD_STAT_OUTERIPCS 0x00000400U /* Cloud IP xsum calculated*/
+#define TXGBE_RXD_STAT_VEXT 0x00000800U /* 1st VLAN found */
+#define TXGBE_RXD_STAT_LLINT 0x00002000U /* Pkt caused Low Latency
+ * Int */
+#define TXGBE_RXD_STAT_TS 0x00004000U /* IEEE1588 Time Stamp */
+#define TXGBE_RXD_STAT_SECP 0x00008000U /* Security Processing */
+#define TXGBE_RXD_STAT_LB 0x00010000U /* Loopback Status */
+#define TXGBE_RXD_STAT_FCEOFS 0x00020000U /* FCoE EOF/SOF Stat */
+#define TXGBE_RXD_STAT_FCSTAT 0x000C0000U /* FCoE Pkt Stat */
+#define TXGBE_RXD_STAT_FCSTAT_NOMTCH 0x00000000U /* 00: No Ctxt Match */
+#define TXGBE_RXD_STAT_FCSTAT_NODDP 0x00040000U /* 01: Ctxt w/o DDP */
+#define TXGBE_RXD_STAT_FCSTAT_FCPRSP 0x00080000U /* 10: Recv. FCP_RSP */
+#define TXGBE_RXD_STAT_FCSTAT_DDP 0x000C0000U /* 11: Ctxt w/ DDP */
+
+#define TXGBE_RXD_ERR_MASK 0xfff00000U /* RDESC.ERRORS mask */
+#define TXGBE_RXD_ERR_SHIFT 20 /* RDESC.ERRORS shift */
+#define TXGBE_RXD_ERR_FCEOFE 0x80000000U /* FCEOFe/IPE */
+#define TXGBE_RXD_ERR_FCERR 0x00700000U /* FCERR/FDIRERR */
+#define TXGBE_RXD_ERR_FDIR_LEN 0x00100000U /* FDIR Length error */
+#define TXGBE_RXD_ERR_FDIR_DROP 0x00200000U /* FDIR Drop error */
+#define TXGBE_RXD_ERR_FDIR_COLL 0x00400000U /* FDIR Collision error */
+#define TXGBE_RXD_ERR_HBO 0x00800000U /*Header Buffer Overflow */
+#define TXGBE_RXD_ERR_OUTERIPER 0x04000000U /* CRC IP Header error */
+#define TXGBE_RXD_ERR_SECERR_MASK 0x18000000U
+#define TXGBE_RXD_ERR_RXE 0x20000000U /* Any MAC Error */
+#define TXGBE_RXD_ERR_TCPE 0x40000000U /* TCP/UDP Checksum Error */
+#define TXGBE_RXD_ERR_IPE 0x80000000U /* IP Checksum Error */
+
+#define TXGBE_RXDPS_HDRSTAT_HDRSP 0x00008000U
+#define TXGBE_RXDPS_HDRSTAT_HDRLEN_MASK 0x000003FFU
+
+#define TXGBE_RXD_RSSTYPE_MASK 0x0000000FU
+#define TXGBE_RXD_TPID_MASK 0x000001C0U
+#define TXGBE_RXD_TPID_SHIFT 6
+#define TXGBE_RXD_HDRBUFLEN_MASK 0x00007FE0U
+#define TXGBE_RXD_RSCCNT_MASK 0x001E0000U
+#define TXGBE_RXD_RSCCNT_SHIFT 17
+#define TXGBE_RXD_HDRBUFLEN_SHIFT 5
+#define TXGBE_RXD_SPLITHEADER_EN 0x00001000U
+#define TXGBE_RXD_SPH 0x8000
+
+/* RSS Hash results */
+#define TXGBE_RXD_RSSTYPE_NONE 0x00000000U
+#define TXGBE_RXD_RSSTYPE_IPV4_TCP 0x00000001U
+#define TXGBE_RXD_RSSTYPE_IPV4 0x00000002U
+#define TXGBE_RXD_RSSTYPE_IPV6_TCP 0x00000003U
+#define TXGBE_RXD_RSSTYPE_IPV4_SCTP 0x00000004U
+#define TXGBE_RXD_RSSTYPE_IPV6 0x00000005U
+#define TXGBE_RXD_RSSTYPE_IPV6_SCTP 0x00000006U
+#define TXGBE_RXD_RSSTYPE_IPV4_UDP 0x00000007U
+#define TXGBE_RXD_RSSTYPE_IPV6_UDP 0x00000008U
+
+/**
+ * receive packet type
+ * PTYPE:8 = TUN:2 + PKT:2 + TYP:4
+ **/
+/* TUN */
+#define TXGBE_PTYPE_TUN_IPV4 (0x80)
+#define TXGBE_PTYPE_TUN_IPV6 (0xC0)
+
+/* PKT for TUN */
+#define TXGBE_PTYPE_PKT_IPIP (0x00) /* IP+IP */
+#define TXGBE_PTYPE_PKT_IG (0x10) /* IP+GRE */
+#define TXGBE_PTYPE_PKT_IGM (0x20) /* IP+GRE+MAC */
+#define TXGBE_PTYPE_PKT_IGMV (0x30) /* IP+GRE+MAC+VLAN */
+/* PKT for !TUN */
+#define TXGBE_PTYPE_PKT_MAC (0x10)
+#define TXGBE_PTYPE_PKT_IP (0x20)
+#define TXGBE_PTYPE_PKT_FCOE (0x30)
+
+/* TYP for PKT=mac */
+#define TXGBE_PTYPE_TYP_MAC (0x01)
+#define TXGBE_PTYPE_TYP_TS (0x02) /* time sync */
+#define TXGBE_PTYPE_TYP_FIP (0x03)
+#define TXGBE_PTYPE_TYP_LLDP (0x04)
+#define TXGBE_PTYPE_TYP_CNM (0x05)
+#define TXGBE_PTYPE_TYP_EAPOL (0x06)
+#define TXGBE_PTYPE_TYP_ARP (0x07)
+/* TYP for PKT=ip */
+#define TXGBE_PTYPE_PKT_IPV6 (0x08)
+#define TXGBE_PTYPE_TYP_IPFRAG (0x01)
+#define TXGBE_PTYPE_TYP_IP (0x02)
+#define TXGBE_PTYPE_TYP_UDP (0x03)
+#define TXGBE_PTYPE_TYP_TCP (0x04)
+#define TXGBE_PTYPE_TYP_SCTP (0x05)
+/* TYP for PKT=fcoe */
+#define TXGBE_PTYPE_PKT_VFT (0x08)
+#define TXGBE_PTYPE_TYP_FCOE (0x00)
+#define TXGBE_PTYPE_TYP_FCDATA (0x01)
+#define TXGBE_PTYPE_TYP_FCRDY (0x02)
+#define TXGBE_PTYPE_TYP_FCRSP (0x03)
+#define TXGBE_PTYPE_TYP_FCOTHER (0x04)
+
+/* Packet type non-ip values */
+enum txgbe_l2_ptypes {
+ TXGBE_PTYPE_L2_ABORTED = (TXGBE_PTYPE_PKT_MAC),
+ TXGBE_PTYPE_L2_MAC = (TXGBE_PTYPE_PKT_MAC | TXGBE_PTYPE_TYP_MAC),
+ TXGBE_PTYPE_L2_TS = (TXGBE_PTYPE_PKT_MAC | TXGBE_PTYPE_TYP_TS),
+ TXGBE_PTYPE_L2_FIP = (TXGBE_PTYPE_PKT_MAC | TXGBE_PTYPE_TYP_FIP),
+ TXGBE_PTYPE_L2_LLDP = (TXGBE_PTYPE_PKT_MAC | TXGBE_PTYPE_TYP_LLDP),
+ TXGBE_PTYPE_L2_CNM = (TXGBE_PTYPE_PKT_MAC | TXGBE_PTYPE_TYP_CNM),
+ TXGBE_PTYPE_L2_EAPOL = (TXGBE_PTYPE_PKT_MAC | TXGBE_PTYPE_TYP_EAPOL),
+ TXGBE_PTYPE_L2_ARP = (TXGBE_PTYPE_PKT_MAC | TXGBE_PTYPE_TYP_ARP),
+
+ TXGBE_PTYPE_L2_IPV4_FRAG = (TXGBE_PTYPE_PKT_IP |
+ TXGBE_PTYPE_TYP_IPFRAG),
+ TXGBE_PTYPE_L2_IPV4 = (TXGBE_PTYPE_PKT_IP | TXGBE_PTYPE_TYP_IP),
+ TXGBE_PTYPE_L2_IPV4_UDP = (TXGBE_PTYPE_PKT_IP | TXGBE_PTYPE_TYP_UDP),
+ TXGBE_PTYPE_L2_IPV4_TCP = (TXGBE_PTYPE_PKT_IP | TXGBE_PTYPE_TYP_TCP),
+ TXGBE_PTYPE_L2_IPV4_SCTP = (TXGBE_PTYPE_PKT_IP | TXGBE_PTYPE_TYP_SCTP),
+ TXGBE_PTYPE_L2_IPV6_FRAG = (TXGBE_PTYPE_PKT_IP | TXGBE_PTYPE_PKT_IPV6 |
+ TXGBE_PTYPE_TYP_IPFRAG),
+ TXGBE_PTYPE_L2_IPV6 = (TXGBE_PTYPE_PKT_IP | TXGBE_PTYPE_PKT_IPV6 |
+ TXGBE_PTYPE_TYP_IP),
+ TXGBE_PTYPE_L2_IPV6_UDP = (TXGBE_PTYPE_PKT_IP | TXGBE_PTYPE_PKT_IPV6 |
+ TXGBE_PTYPE_TYP_UDP),
+ TXGBE_PTYPE_L2_IPV6_TCP = (TXGBE_PTYPE_PKT_IP | TXGBE_PTYPE_PKT_IPV6 |
+ TXGBE_PTYPE_TYP_TCP),
+ TXGBE_PTYPE_L2_IPV6_SCTP = (TXGBE_PTYPE_PKT_IP | TXGBE_PTYPE_PKT_IPV6 |
+ TXGBE_PTYPE_TYP_SCTP),
+
+ TXGBE_PTYPE_L2_FCOE = (TXGBE_PTYPE_PKT_FCOE | TXGBE_PTYPE_TYP_FCOE),
+ TXGBE_PTYPE_L2_FCOE_FCDATA = (TXGBE_PTYPE_PKT_FCOE |
+ TXGBE_PTYPE_TYP_FCDATA),
+ TXGBE_PTYPE_L2_FCOE_FCRDY = (TXGBE_PTYPE_PKT_FCOE |
+ TXGBE_PTYPE_TYP_FCRDY),
+ TXGBE_PTYPE_L2_FCOE_FCRSP = (TXGBE_PTYPE_PKT_FCOE |
+ TXGBE_PTYPE_TYP_FCRSP),
+ TXGBE_PTYPE_L2_FCOE_FCOTHER = (TXGBE_PTYPE_PKT_FCOE |
+ TXGBE_PTYPE_TYP_FCOTHER),
+ TXGBE_PTYPE_L2_FCOE_VFT = (TXGBE_PTYPE_PKT_FCOE | TXGBE_PTYPE_PKT_VFT),
+ TXGBE_PTYPE_L2_FCOE_VFT_FCDATA = (TXGBE_PTYPE_PKT_FCOE |
+ TXGBE_PTYPE_PKT_VFT | TXGBE_PTYPE_TYP_FCDATA),
+ TXGBE_PTYPE_L2_FCOE_VFT_FCRDY = (TXGBE_PTYPE_PKT_FCOE |
+ TXGBE_PTYPE_PKT_VFT | TXGBE_PTYPE_TYP_FCRDY),
+ TXGBE_PTYPE_L2_FCOE_VFT_FCRSP = (TXGBE_PTYPE_PKT_FCOE |
+ TXGBE_PTYPE_PKT_VFT | TXGBE_PTYPE_TYP_FCRSP),
+ TXGBE_PTYPE_L2_FCOE_VFT_FCOTHER = (TXGBE_PTYPE_PKT_FCOE |
+ TXGBE_PTYPE_PKT_VFT | TXGBE_PTYPE_TYP_FCOTHER),
+
+ TXGBE_PTYPE_L2_TUN4_MAC = (TXGBE_PTYPE_TUN_IPV4 | TXGBE_PTYPE_PKT_IGM),
+ TXGBE_PTYPE_L2_TUN6_MAC = (TXGBE_PTYPE_TUN_IPV6 | TXGBE_PTYPE_PKT_IGM),
+};
+
+#define TXGBE_RXD_PKTTYPE(_rxd) \
+ ((le32_to_cpu((_rxd)->wb.lower.lo_dword.data) >> 9) & 0xFF)
+#define TXGBE_PTYPE_TUN(_pt) ((_pt) & 0xC0)
+#define TXGBE_PTYPE_PKT(_pt) ((_pt) & 0x30)
+#define TXGBE_PTYPE_TYP(_pt) ((_pt) & 0x0F)
+#define TXGBE_PTYPE_TYPL4(_pt) ((_pt) & 0x07)
+
+#define TXGBE_RXD_IPV6EX(_rxd) \
+ ((le32_to_cpu((_rxd)->wb.lower.lo_dword.data) >> 6) & 0x1)
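+/* Illustrative sketch (assumed usage): classifying a writeback descriptor
+ * with the decode helpers above, e.g. matching an IPv4 TCP frame
+ * (TXGBE_PTYPE_L2_IPV4_TCP in the enum above):
+ *
+ *	u8 ptype = TXGBE_RXD_PKTTYPE(rx_desc);
+ *
+ *	if (TXGBE_PTYPE_PKT(ptype) == TXGBE_PTYPE_PKT_IP &&
+ *	    TXGBE_PTYPE_TYP(ptype) == TXGBE_PTYPE_TYP_TCP)
+ *		count_tcp++;	// placeholder action
+ */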
+
+/* Security Processing bit Indication */
+#define TXGBE_RXD_LNKSEC_STATUS_SECP 0x00020000U
+#define TXGBE_RXD_LNKSEC_ERROR_NO_SA_MATCH 0x08000000U
+#define TXGBE_RXD_LNKSEC_ERROR_REPLAY_ERROR 0x10000000U
+#define TXGBE_RXD_LNKSEC_ERROR_BIT_MASK 0x18000000U
+#define TXGBE_RXD_LNKSEC_ERROR_BAD_SIG 0x18000000U
+
+/* Masks to determine if packets should be dropped due to frame errors */
+#define TXGBE_RXD_ERR_FRAME_ERR_MASK TXGBE_RXD_ERR_RXE
+
+/*********************** Adv Transmit Descriptor Config Masks ****************/
+#define TXGBE_TXD_DTALEN_MASK 0x0000FFFFU /* Data buf length(bytes) */
+#define TXGBE_TXD_MAC_LINKSEC 0x00040000U /* Insert LinkSec */
+#define TXGBE_TXD_MAC_TSTAMP 0x00080000U /* IEEE1588 time stamp */
+#define TXGBE_TXD_IPSEC_SA_INDEX_MASK 0x000003FFU /* IPSec SA index */
+#define TXGBE_TXD_IPSEC_ESP_LEN_MASK 0x000001FFU /* IPSec ESP length */
+#define TXGBE_TXD_DTYP_MASK 0x00F00000U /* DTYP mask */
+#define TXGBE_TXD_DTYP_CTXT 0x00100000U /* Adv Context Desc */
+#define TXGBE_TXD_DTYP_DATA 0x00000000U /* Adv Data Descriptor */
+#define TXGBE_TXD_EOP 0x01000000U /* End of Packet */
+#define TXGBE_TXD_IFCS 0x02000000U /* Insert FCS */
+#define TXGBE_TXD_LINKSEC 0x04000000U /* enable linksec */
+#define TXGBE_TXD_RS 0x08000000U /* Report Status */
+#define TXGBE_TXD_ECU 0x10000000U /* DDP hdr type or iSCSI */
+#define TXGBE_TXD_QCN 0x20000000U /* cntag insertion enable */
+#define TXGBE_TXD_VLE 0x40000000U /* VLAN pkt enable */
+#define TXGBE_TXD_TSE 0x80000000U /* TCP Seg enable */
+#define TXGBE_TXD_STAT_DD 0x00000001U /* Descriptor Done */
+#define TXGBE_TXD_IDX_SHIFT 4 /* Adv desc Index shift */
+#define TXGBE_TXD_CC 0x00000080U /* Check Context */
+#define TXGBE_TXD_IPSEC 0x00000100U /* enable ipsec esp */
+#define TXGBE_TXD_IIPCS 0x00000400U
+#define TXGBE_TXD_EIPCS 0x00000800U
+#define TXGBE_TXD_L4CS 0x00000200U
+#define TXGBE_TXD_PAYLEN_SHIFT 13 /* Adv desc PAYLEN shift */
+#define TXGBE_TXD_MACLEN_SHIFT 9 /* Adv ctxt desc mac len shift */
+#define TXGBE_TXD_VLAN_SHIFT 16 /* Adv ctxt vlan tag shift */
+#define TXGBE_TXD_TAG_TPID_SEL_SHIFT 11
+#define TXGBE_TXD_IPSEC_TYPE_SHIFT 14
+#define TXGBE_TXD_ENC_SHIFT 15
+
+#define TXGBE_TXD_TUCMD_IPSEC_TYPE_ESP 0x00004000U /* IPSec Type ESP */
+#define TXGBE_TXD_TUCMD_IPSEC_ENCRYPT_EN 0x00008000/* ESP Encrypt Enable */
+#define TXGBE_TXD_TUCMD_FCOE 0x00010000U /* FCoE Frame Type */
+#define TXGBE_TXD_FCOEF_EOF_MASK (0x3 << 10) /* FC EOF index */
+#define TXGBE_TXD_FCOEF_SOF ((1 << 2) << 10) /* FC SOF index */
+#define TXGBE_TXD_FCOEF_PARINC ((1 << 3) << 10) /* Rel_Off in F_CTL */
+#define TXGBE_TXD_FCOEF_ORIE ((1 << 4) << 10) /* Orientation End */
+#define TXGBE_TXD_FCOEF_ORIS ((1 << 5) << 10) /* Orientation Start */
+#define TXGBE_TXD_FCOEF_EOF_N (0x0 << 10) /* 00: EOFn */
+#define TXGBE_TXD_FCOEF_EOF_T (0x1 << 10) /* 01: EOFt */
+#define TXGBE_TXD_FCOEF_EOF_NI (0x2 << 10) /* 10: EOFni */
+#define TXGBE_TXD_FCOEF_EOF_A (0x3 << 10) /* 11: EOFa */
+#define TXGBE_TXD_L4LEN_SHIFT 8 /* Adv ctxt L4LEN shift */
+#define TXGBE_TXD_MSS_SHIFT 16 /* Adv ctxt MSS shift */
+
+#define TXGBE_TXD_OUTER_IPLEN_SHIFT 12 /* Adv ctxt OUTERIPLEN shift */
+#define TXGBE_TXD_TUNNEL_LEN_SHIFT 21 /* Adv ctxt TUNNELLEN shift */
+#define TXGBE_TXD_TUNNEL_TYPE_SHIFT 11 /* Adv Tx Desc Tunnel Type shift */
+#define TXGBE_TXD_TUNNEL_DECTTL_SHIFT 27 /* Adv ctxt DECTTL shift */
+#define TXGBE_TXD_TUNNEL_UDP (0x0ULL << TXGBE_TXD_TUNNEL_TYPE_SHIFT)
+#define TXGBE_TXD_TUNNEL_GRE (0x1ULL << TXGBE_TXD_TUNNEL_TYPE_SHIFT)
+
+/************ txgbe_type.h ************/
+/* Number of Transmit and Receive Descriptors must be a multiple of 8 */
+#define TXGBE_REQ_TX_DESCRIPTOR_MULTIPLE 8
+#define TXGBE_REQ_RX_DESCRIPTOR_MULTIPLE 8
+#define TXGBE_REQ_TX_BUFFER_GRANULARITY 1024
+
+/* Vlan-specific macros */
+#define TXGBE_RX_DESC_SPECIAL_VLAN_MASK 0x0FFF /* VLAN ID in lower 12 bits */
+#define TXGBE_RX_DESC_SPECIAL_PRI_MASK 0xE000 /* Priority in upper 3 bits */
+#define TXGBE_RX_DESC_SPECIAL_PRI_SHIFT 0x000D /* Priority in upper 3 of 16 */
+#define TXGBE_TX_DESC_SPECIAL_PRI_SHIFT TXGBE_RX_DESC_SPECIAL_PRI_SHIFT
+
+/* Transmit Descriptor */
+union txgbe_tx_desc {
+ struct {
+ __le64 buffer_addr; /* Address of descriptor's data buf */
+ __le32 cmd_type_len;
+ __le32 olinfo_status;
+ } read;
+ struct {
+ __le64 rsvd; /* Reserved */
+ __le32 nxtseq_seed;
+ __le32 status;
+ } wb;
+};
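+/* Illustrative sketch (assumed usage): filling the read format for a
+ * single-buffer transmit with the command bits defined above; 'dma' and
+ * 'len' come from the caller's DMA mapping:
+ *
+ *	txd->read.buffer_addr = cpu_to_le64(dma);
+ *	txd->read.cmd_type_len = cpu_to_le32(TXGBE_TXD_DTYP_DATA |
+ *			TXGBE_TXD_EOP | TXGBE_TXD_IFCS | TXGBE_TXD_RS | len);
+ *	txd->read.olinfo_status = cpu_to_le32(len << TXGBE_TXD_PAYLEN_SHIFT);
+ */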
+
+/* Receive Descriptor */
+union txgbe_rx_desc {
+ struct {
+ __le64 pkt_addr; /* Packet buffer address */
+ __le64 hdr_addr; /* Header buffer address */
+ } read;
+ struct {
+ struct {
+ union {
+ __le32 data;
+ struct {
+ __le16 pkt_info; /* RSS, Pkt type */
+ __le16 hdr_info; /* Splithdr, hdrlen */
+ } hs_rss;
+ } lo_dword;
+ union {
+ __le32 rss; /* RSS Hash */
+ struct {
+ __le16 ip_id; /* IP id */
+ __le16 csum; /* Packet Checksum */
+ } csum_ip;
+ } hi_dword;
+ } lower;
+ struct {
+ __le32 status_error; /* ext status/error */
+ __le16 length; /* Packet length */
+ __le16 vlan; /* VLAN tag */
+ } upper;
+ } wb; /* writeback */
+};
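+/* Illustrative sketch (assumed usage): a receive-clean routine tests the
+ * writeback status bits defined earlier before touching the buffer:
+ *
+ *	static inline bool txgbe_rx_desc_done(union txgbe_rx_desc *rx_desc)
+ *	{
+ *		return !!(rx_desc->wb.upper.status_error &
+ *			  cpu_to_le32(TXGBE_RXD_STAT_DD));
+ *	}
+ */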
+
+/* Context descriptors */
+struct txgbe_tx_context_desc {
+ __le32 vlan_macip_lens;
+ __le32 seqnum_seed;
+ __le32 type_tucmd_mlhl;
+ __le32 mss_l4len_idx;
+};
+
+/************************* Flow Directory HASH *******************************/
+/* Software ATR hash keys */
+#define TXGBE_ATR_BUCKET_HASH_KEY 0x3DAD14E2
+#define TXGBE_ATR_SIGNATURE_HASH_KEY 0x174D3614
+
+/* Software ATR input stream values and masks */
+#define TXGBE_ATR_HASH_MASK 0x7fff
+#define TXGBE_ATR_L4TYPE_MASK 0x3
+#define TXGBE_ATR_L4TYPE_UDP 0x1
+#define TXGBE_ATR_L4TYPE_TCP 0x2
+#define TXGBE_ATR_L4TYPE_SCTP 0x3
+#define TXGBE_ATR_L4TYPE_IPV6_MASK 0x4
+#define TXGBE_ATR_L4TYPE_TUNNEL_MASK 0x10
+enum txgbe_atr_flow_type {
+ TXGBE_ATR_FLOW_TYPE_IPV4 = 0x0,
+ TXGBE_ATR_FLOW_TYPE_UDPV4 = 0x1,
+ TXGBE_ATR_FLOW_TYPE_TCPV4 = 0x2,
+ TXGBE_ATR_FLOW_TYPE_SCTPV4 = 0x3,
+ TXGBE_ATR_FLOW_TYPE_IPV6 = 0x4,
+ TXGBE_ATR_FLOW_TYPE_UDPV6 = 0x5,
+ TXGBE_ATR_FLOW_TYPE_TCPV6 = 0x6,
+ TXGBE_ATR_FLOW_TYPE_SCTPV6 = 0x7,
+ TXGBE_ATR_FLOW_TYPE_TUNNELED_IPV4 = 0x10,
+ TXGBE_ATR_FLOW_TYPE_TUNNELED_UDPV4 = 0x11,
+ TXGBE_ATR_FLOW_TYPE_TUNNELED_TCPV4 = 0x12,
+ TXGBE_ATR_FLOW_TYPE_TUNNELED_SCTPV4 = 0x13,
+ TXGBE_ATR_FLOW_TYPE_TUNNELED_IPV6 = 0x14,
+ TXGBE_ATR_FLOW_TYPE_TUNNELED_UDPV6 = 0x15,
+ TXGBE_ATR_FLOW_TYPE_TUNNELED_TCPV6 = 0x16,
+ TXGBE_ATR_FLOW_TYPE_TUNNELED_SCTPV6 = 0x17,
+};
+
+/* Flow Director ATR input struct. */
+union txgbe_atr_input {
+ /*
+ * Byte layout in order, all values with MSB first,
+ * matching the formatted struct below (44 bytes = 11 dwords):
+ *
+ * vm_pool - 1 byte
+ * flow_type - 1 byte
+ * vlan_id - 2 bytes
+ * dst_ip - 16 bytes
+ * src_ip - 16 bytes
+ * src_port - 2 bytes
+ * dst_port - 2 bytes
+ * flex_bytes - 2 bytes
+ * bkt_hash - 2 bytes
+ */
+ struct {
+ u8 vm_pool;
+ u8 flow_type;
+ __be16 vlan_id;
+ __be32 dst_ip[4];
+ __be32 src_ip[4];
+ __be16 src_port;
+ __be16 dst_port;
+ __be16 flex_bytes;
+ __be16 bkt_hash;
+ } formatted;
+ __be32 dword_stream[11];
+};
+
+/* Flow Director compressed ATR hash input struct */
+union txgbe_atr_hash_dword {
+ struct {
+ u8 vm_pool;
+ u8 flow_type;
+ __be16 vlan_id;
+ } formatted;
+ __be32 ip;
+ struct {
+ __be16 src;
+ __be16 dst;
+ } port;
+ __be16 flex_bytes;
+ __be32 dword;
+};
+
+
+/****************** Manageability Host Interface defines *********************/
+#define TXGBE_HI_MAX_BLOCK_BYTE_LENGTH 256 /* Num of bytes in range */
+#define TXGBE_HI_MAX_BLOCK_DWORD_LENGTH 64 /* Num of dwords in range */
+#define TXGBE_HI_COMMAND_TIMEOUT 5000 /* Process HI command limit */
+#define TXGBE_HI_FLASH_ERASE_TIMEOUT 5000 /* Process Erase command limit */
+#define TXGBE_HI_FLASH_UPDATE_TIMEOUT 5000 /* Process Update command limit */
+#define TXGBE_HI_FLASH_VERIFY_TIMEOUT 60000 /* Process Apply command limit */
+#define TXGBE_HI_PHY_MGMT_REQ_TIMEOUT 2000 /* Wait up to 2 seconds */
+
+/* CEM Support */
+#define FW_CEM_HDR_LEN 0x4
+#define FW_CEM_CMD_DRIVER_INFO 0xDD
+#define FW_CEM_CMD_DRIVER_INFO_LEN 0x5
+#define FW_CEM_CMD_RESERVED 0x0
+#define FW_CEM_UNUSED_VER 0x0
+#define FW_CEM_MAX_RETRIES 3
+#define FW_CEM_RESP_STATUS_SUCCESS 0x1
+#define FW_READ_SHADOW_RAM_CMD 0x31
+#define FW_READ_SHADOW_RAM_LEN 0x6
+#define FW_WRITE_SHADOW_RAM_CMD 0x33
+#define FW_WRITE_SHADOW_RAM_LEN 0xA /* 8 plus 1 WORD to write */
+#define FW_SHADOW_RAM_DUMP_CMD 0x36
+#define FW_SHADOW_RAM_DUMP_LEN 0
+#define FW_DEFAULT_CHECKSUM 0xFF /* checksum always 0xFF */
+#define FW_NVM_DATA_OFFSET 3
+#define FW_MAX_READ_BUFFER_SIZE 244
+#define FW_DISABLE_RXEN_CMD 0xDE
+#define FW_DISABLE_RXEN_LEN 0x1
+#define FW_PHY_MGMT_REQ_CMD 0x20
+#define FW_RESET_CMD 0xDF
+#define FW_RESET_LEN 0x2
+#define FW_SETUP_MAC_LINK_CMD 0xE0
+#define FW_SETUP_MAC_LINK_LEN 0x2
+#define FW_FLASH_UPGRADE_START_CMD 0xE3
+#define FW_FLASH_UPGRADE_START_LEN 0x1
+#define FW_FLASH_UPGRADE_WRITE_CMD 0xE4
+#define FW_FLASH_UPGRADE_VERIFY_CMD 0xE5
+#define FW_FLASH_UPGRADE_VERIFY_LEN 0x4
+
+/* Host Interface Command Structures */
+struct txgbe_hic_hdr {
+ u8 cmd;
+ u8 buf_len;
+ union {
+ u8 cmd_resv;
+ u8 ret_status;
+ } cmd_or_resp;
+ u8 checksum;
+};
+
+struct txgbe_hic_hdr2_req {
+ u8 cmd;
+ u8 buf_lenh;
+ u8 buf_lenl;
+ u8 checksum;
+};
+
+struct txgbe_hic_hdr2_rsp {
+ u8 cmd;
+ u8 buf_lenl;
+ u8 buf_lenh_status; /* 7-5: high bits of buf_len, 4-0: status */
+ u8 checksum;
+};
+
+union txgbe_hic_hdr2 {
+ struct txgbe_hic_hdr2_req req;
+ struct txgbe_hic_hdr2_rsp rsp;
+};
+
+struct txgbe_hic_drv_info {
+ struct txgbe_hic_hdr hdr;
+ u8 port_num;
+ u8 ver_sub;
+ u8 ver_build;
+ u8 ver_min;
+ u8 ver_maj;
+ u8 pad; /* end spacing to ensure length is mult. of dword */
+ u16 pad2; /* end spacing to ensure length is mult. of dword2 */
+};
+
+/* These need to be dword aligned */
+struct txgbe_hic_read_shadow_ram {
+ union txgbe_hic_hdr2 hdr;
+ u32 address;
+ u16 length;
+ u16 pad2;
+ u16 data;
+ u16 pad3;
+};
+
+struct txgbe_hic_write_shadow_ram {
+ union txgbe_hic_hdr2 hdr;
+ u32 address;
+ u16 length;
+ u16 pad2;
+ u16 data;
+ u16 pad3;
+};
+
+struct txgbe_hic_disable_rxen {
+ struct txgbe_hic_hdr hdr;
+ u8 port_number;
+ u8 pad2;
+ u16 pad3;
+};
+
+struct txgbe_hic_reset {
+ struct txgbe_hic_hdr hdr;
+ u16 lan_id;
+ u16 reset_type;
+};
+
+struct txgbe_hic_phy_cfg {
+ struct txgbe_hic_hdr hdr;
+ u8 lan_id;
+ u8 phy_mode;
+ u16 phy_speed;
+};
+
+enum txgbe_module_id {
+ TXGBE_MODULE_EEPROM = 0,
+ TXGBE_MODULE_FIRMWARE,
+ TXGBE_MODULE_HARDWARE,
+ TXGBE_MODULE_PCIE
+};
+
+struct txgbe_hic_upg_start {
+ struct txgbe_hic_hdr hdr;
+ u8 module_id;
+ u8 pad2;
+ u16 pad3;
+};
+
+struct txgbe_hic_upg_write {
+ struct txgbe_hic_hdr hdr;
+ u8 data_len;
+ u8 eof_flag;
+ u16 check_sum;
+ u32 data[62];
+};
+
+enum txgbe_upg_flag {
+ TXGBE_RESET_NONE = 0,
+ TXGBE_RESET_FIRMWARE,
+ TXGBE_RELOAD_EEPROM,
+ TXGBE_RESET_LAN
+};
+
+struct txgbe_hic_upg_verify {
+ struct txgbe_hic_hdr hdr;
+ u32 action_flag;
+};
+
+/* Number of 100-microsecond intervals we wait for PCI Express master disable */
+#define TXGBE_PCI_MASTER_DISABLE_TIMEOUT 800
+
+/* Check whether an address is multicast. This is a little-endian-specific check. */
+#define TXGBE_IS_MULTICAST(Address) \
+ (bool)(((u8 *)(Address))[0] & ((u8)0x01))
+
+/* Check whether an address is broadcast. */
+#define TXGBE_IS_BROADCAST(Address) \
+ ((((u8 *)(Address))[0] == ((u8)0xff)) && \
+ (((u8 *)(Address))[1] == ((u8)0xff)))
+
+/* DCB registers */
+#define TXGBE_DCB_MAX_TRAFFIC_CLASS 8
+
+/* Power Management */
+/* DMA Coalescing configuration */
+struct txgbe_dmac_config {
+ u16 watchdog_timer; /* usec units */
+ bool fcoe_en;
+ u32 link_speed;
+ u8 fcoe_tc;
+ u8 num_tcs;
+};
+
+
+/* Autonegotiation advertised speeds */
+typedef u32 txgbe_autoneg_advertised;
+/* Link speed */
+#define TXGBE_LINK_SPEED_UNKNOWN 0
+#define TXGBE_LINK_SPEED_100_FULL 1
+#define TXGBE_LINK_SPEED_1GB_FULL 2
+#define TXGBE_LINK_SPEED_10GB_FULL 4
+#define TXGBE_LINK_SPEED_10_FULL 8
+#define TXGBE_LINK_SPEED_AUTONEG (TXGBE_LINK_SPEED_100_FULL | \
+ TXGBE_LINK_SPEED_1GB_FULL | \
+ TXGBE_LINK_SPEED_10GB_FULL | \
+ TXGBE_LINK_SPEED_10_FULL)
+
+/* Physical layer type */
+typedef u32 txgbe_physical_layer;
+#define TXGBE_PHYSICAL_LAYER_UNKNOWN 0
+#define TXGBE_PHYSICAL_LAYER_10GBASE_T 0x0001
+#define TXGBE_PHYSICAL_LAYER_1000BASE_T 0x0002
+#define TXGBE_PHYSICAL_LAYER_100BASE_TX 0x0004
+#define TXGBE_PHYSICAL_LAYER_SFP_PLUS_CU 0x0008
+#define TXGBE_PHYSICAL_LAYER_10GBASE_LR 0x0010
+#define TXGBE_PHYSICAL_LAYER_10GBASE_LRM 0x0020
+#define TXGBE_PHYSICAL_LAYER_10GBASE_SR 0x0040
+#define TXGBE_PHYSICAL_LAYER_10GBASE_KX4 0x0080
+#define TXGBE_PHYSICAL_LAYER_1000BASE_KX 0x0200
+#define TXGBE_PHYSICAL_LAYER_1000BASE_BX 0x0400
+#define TXGBE_PHYSICAL_LAYER_10GBASE_KR 0x0800
+#define TXGBE_PHYSICAL_LAYER_10GBASE_XAUI 0x1000
+#define TXGBE_PHYSICAL_LAYER_SFP_ACTIVE_DA 0x2000
+#define TXGBE_PHYSICAL_LAYER_1000BASE_SX 0x4000
+
+
+/* Special PHY Init Routine */
+#define TXGBE_PHY_INIT_OFFSET_NL 0x002B
+#define TXGBE_PHY_INIT_END_NL 0xFFFF
+#define TXGBE_CONTROL_MASK_NL 0xF000
+#define TXGBE_DATA_MASK_NL 0x0FFF
+#define TXGBE_CONTROL_SHIFT_NL 12
+#define TXGBE_DELAY_NL 0
+#define TXGBE_DATA_NL 1
+#define TXGBE_CONTROL_NL 0x000F
+#define TXGBE_CONTROL_EOL_NL 0x0FFF
+#define TXGBE_CONTROL_SOL_NL 0x0000
+
+/* Flow Control Data Sheet defined values
+ * Calculation and defines taken from 802.1bb Annex O
+ */
+
+/* BitTimes (BT) conversion */
+#define TXGBE_BT2KB(BT) (((BT) + (8 * 1024 - 1)) / (8 * 1024))
+#define TXGBE_B2BT(BT) ((BT) * 8)
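+/* Example: a 1518-byte maximum frame is TXGBE_B2BT(1518) = 12144 bit
+ * times, which TXGBE_BT2KB() rounds up to 2 KB.
+ */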
+
+/* Calculate Delay to respond to PFC */
+#define TXGBE_PFC_D 672
+
+/* Calculate Cable Delay */
+#define TXGBE_CABLE_DC 5556 /* Delay Copper */
+#define TXGBE_CABLE_DO 5000 /* Delay Optical */
+
+/* Calculate Interface Delay X540 */
+#define TXGBE_PHY_DC 25600 /* Delay 10G BASET */
+#define TXGBE_MAC_DC 8192 /* Delay Copper XAUI interface */
+#define TXGBE_XAUI_DC (2 * 2048) /* Delay Copper Phy */
+
+#define TXGBE_ID_X540 (TXGBE_MAC_DC + TXGBE_XAUI_DC + TXGBE_PHY_DC)
+
+/* Calculate Interface Delay */
+#define TXGBE_PHY_D 12800
+#define TXGBE_MAC_D 4096
+#define TXGBE_XAUI_D (2 * 1024)
+
+#define TXGBE_ID (TXGBE_MAC_D + TXGBE_XAUI_D + TXGBE_PHY_D)
+
+/* Calculate Delay incurred from higher layer */
+#define TXGBE_HD 6144
+
+/* Calculate PCI Bus delay for low thresholds */
+#define TXGBE_PCI_DELAY 10000
+
+/* Calculate X540 delay value in bit times */
+#define TXGBE_DV_X540(_max_frame_link, _max_frame_tc) \
+ ((36 * \
+ (TXGBE_B2BT(_max_frame_link) + \
+ TXGBE_PFC_D + \
+ (2 * TXGBE_CABLE_DC) + \
+ (2 * TXGBE_ID_X540) + \
+ TXGBE_HD) / 25 + 1) + \
+ 2 * TXGBE_B2BT(_max_frame_tc))
+
+
+/* Calculate delay value in bit times */
+#define TXGBE_DV(_max_frame_link, _max_frame_tc) \
+ ((36 * \
+ (TXGBE_B2BT(_max_frame_link) + \
+ TXGBE_PFC_D + \
+ (2 * TXGBE_CABLE_DC) + \
+ (2 * TXGBE_ID) + \
+ TXGBE_HD) / 25 + 1) + \
+ 2 * TXGBE_B2BT(_max_frame_tc))
+
+/* Calculate low threshold delay values */
+#define TXGBE_LOW_DV_X540(_max_frame_tc) \
+ (2 * TXGBE_B2BT(_max_frame_tc) + \
+ (36 * TXGBE_PCI_DELAY / 25) + 1)
+
+#define TXGBE_LOW_DV(_max_frame_tc) \
+ (2 * TXGBE_LOW_DV_X540(_max_frame_tc))
+
+
+/*
+ * Unavailable: The FCoE Boot Option ROM is not present in the flash.
+ * Disabled: Present; boot order is not set for any targets on the port.
+ * Enabled: Present; boot order is set for at least one target on the port.
+ */
+enum txgbe_fcoe_boot_status {
+ txgbe_fcoe_bootstatus_disabled = 0,
+ txgbe_fcoe_bootstatus_enabled = 1,
+ txgbe_fcoe_bootstatus_unavailable = 0xFFFF
+};
+
+enum txgbe_eeprom_type {
+ txgbe_eeprom_uninitialized = 0,
+ txgbe_eeprom_spi,
+ txgbe_flash,
+ txgbe_eeprom_none /* No NVM support */
+};
+
+enum txgbe_phy_type {
+ txgbe_phy_unknown = 0,
+ txgbe_phy_none,
+ txgbe_phy_tn,
+ txgbe_phy_aq,
+ txgbe_phy_cu_unknown,
+ txgbe_phy_qt,
+ txgbe_phy_xaui,
+ txgbe_phy_nl,
+ txgbe_phy_sfp_passive_tyco,
+ txgbe_phy_sfp_passive_unknown,
+ txgbe_phy_sfp_active_unknown,
+ txgbe_phy_sfp_avago,
+ txgbe_phy_sfp_ftl,
+ txgbe_phy_sfp_ftl_active,
+ txgbe_phy_sfp_unknown,
+ txgbe_phy_sfp_intel,
+ txgbe_phy_sfp_unsupported, /*Enforce bit set with unsupported module*/
+ txgbe_phy_generic
+};
+
+/*
+ * SFP+ module type IDs:
+ *
+ * ID Module Type
+ * =============
+ * 0 SFP_DA_CU
+ * 1 SFP_SR
+ * 2 SFP_LR
+ * 3 SFP_DA_CU_CORE0
+ * 4 SFP_DA_CU_CORE1
+ * 5 SFP_SR/LR_CORE0
+ * 6 SFP_SR/LR_CORE1
+ */
+enum txgbe_sfp_type {
+ txgbe_sfp_type_da_cu = 0,
+ txgbe_sfp_type_sr = 1,
+ txgbe_sfp_type_lr = 2,
+ txgbe_sfp_type_da_cu_core0 = 3,
+ txgbe_sfp_type_da_cu_core1 = 4,
+ txgbe_sfp_type_srlr_core0 = 5,
+ txgbe_sfp_type_srlr_core1 = 6,
+ txgbe_sfp_type_da_act_lmt_core0 = 7,
+ txgbe_sfp_type_da_act_lmt_core1 = 8,
+ txgbe_sfp_type_1g_cu_core0 = 9,
+ txgbe_sfp_type_1g_cu_core1 = 10,
+ txgbe_sfp_type_1g_sx_core0 = 11,
+ txgbe_sfp_type_1g_sx_core1 = 12,
+ txgbe_sfp_type_1g_lx_core0 = 13,
+ txgbe_sfp_type_1g_lx_core1 = 14,
+ txgbe_sfp_type_not_present = 0xFFFE,
+ txgbe_sfp_type_unknown = 0xFFFF
+};
+
+enum txgbe_media_type {
+ txgbe_media_type_unknown = 0,
+ txgbe_media_type_fiber,
+ txgbe_media_type_copper,
+ txgbe_media_type_backplane,
+ txgbe_media_type_virtual
+};
+
+/* Flow Control Settings */
+enum txgbe_fc_mode {
+ txgbe_fc_none = 0,
+ txgbe_fc_rx_pause,
+ txgbe_fc_tx_pause,
+ txgbe_fc_full,
+ txgbe_fc_default
+};
+
+/* Smart Speed Settings */
+#define TXGBE_SMARTSPEED_MAX_RETRIES 3
+enum txgbe_smart_speed {
+ txgbe_smart_speed_auto = 0,
+ txgbe_smart_speed_on,
+ txgbe_smart_speed_off
+};
+
+/* PCI bus types */
+enum txgbe_bus_type {
+ txgbe_bus_type_unknown = 0,
+ txgbe_bus_type_pci,
+ txgbe_bus_type_pcix,
+ txgbe_bus_type_pci_express,
+ txgbe_bus_type_internal,
+ txgbe_bus_type_reserved
+};
+
+/* PCI bus speeds */
+enum txgbe_bus_speed {
+ txgbe_bus_speed_unknown = 0,
+ txgbe_bus_speed_33 = 33,
+ txgbe_bus_speed_66 = 66,
+ txgbe_bus_speed_100 = 100,
+ txgbe_bus_speed_120 = 120,
+ txgbe_bus_speed_133 = 133,
+ txgbe_bus_speed_2500 = 2500,
+ txgbe_bus_speed_5000 = 5000,
+ txgbe_bus_speed_8000 = 8000,
+ txgbe_bus_speed_reserved
+};
+
+/* PCI bus widths */
+enum txgbe_bus_width {
+ txgbe_bus_width_unknown = 0,
+ txgbe_bus_width_pcie_x1 = 1,
+ txgbe_bus_width_pcie_x2 = 2,
+ txgbe_bus_width_pcie_x4 = 4,
+ txgbe_bus_width_pcie_x8 = 8,
+ txgbe_bus_width_32 = 32,
+ txgbe_bus_width_64 = 64,
+ txgbe_bus_width_reserved
+};
+
+struct txgbe_addr_filter_info {
+ u32 num_mc_addrs;
+ u32 rar_used_count;
+ u32 mta_in_use;
+ u32 overflow_promisc;
+ bool user_set_promisc;
+};
+
+/* Bus parameters */
+struct txgbe_bus_info {
+ enum txgbe_bus_speed speed;
+ enum txgbe_bus_width width;
+ enum txgbe_bus_type type;
+
+ u16 func;
+ u16 lan_id;
+};
+
+/* Flow control parameters */
+struct txgbe_fc_info {
+ u32 high_water[TXGBE_DCB_MAX_TRAFFIC_CLASS]; /* Flow Ctrl High-water */
+ u32 low_water[TXGBE_DCB_MAX_TRAFFIC_CLASS]; /* Flow Ctrl Low-water */
+ u16 pause_time; /* Flow Control Pause timer */
+ bool send_xon; /* Flow control send XON */
+ bool strict_ieee; /* Strict IEEE mode */
+ bool disable_fc_autoneg; /* Do not autonegotiate FC */
+ bool fc_was_autonegged; /* Is current_mode the result of autonegging? */
+ enum txgbe_fc_mode current_mode; /* FC mode in effect */
+ enum txgbe_fc_mode requested_mode; /* FC mode requested by caller */
+};
+
+/* Statistics counters collected by the MAC */
+struct txgbe_hw_stats {
+ u64 crcerrs;
+ u64 illerrc;
+ u64 errbc;
+ u64 mspdc;
+ u64 mpctotal;
+ u64 mpc[8];
+ u64 mlfc;
+ u64 mrfc;
+ u64 rlec;
+ u64 lxontxc;
+ u64 lxonrxc;
+ u64 lxofftxc;
+ u64 lxoffrxc;
+ u64 pxontxc[8];
+ u64 pxonrxc[8];
+ u64 pxofftxc[8];
+ u64 pxoffrxc[8];
+ u64 prc64;
+ u64 prc127;
+ u64 prc255;
+ u64 prc511;
+ u64 prc1023;
+ u64 prc1522;
+ u64 gprc;
+ u64 bprc;
+ u64 mprc;
+ u64 gptc;
+ u64 gorc;
+ u64 gotc;
+ u64 rnbc[8];
+ u64 ruc;
+ u64 rfc;
+ u64 roc;
+ u64 rjc;
+ u64 mngprc;
+ u64 mngpdc;
+ u64 mngptc;
+ u64 tor;
+ u64 tpr;
+ u64 tpt;
+ u64 ptc64;
+ u64 ptc127;
+ u64 ptc255;
+ u64 ptc511;
+ u64 ptc1023;
+ u64 ptc1522;
+ u64 mptc;
+ u64 bptc;
+ u64 xec;
+ u64 qprc[16];
+ u64 qptc[16];
+ u64 qbrc[16];
+ u64 qbtc[16];
+ u64 qprdc[16];
+ u64 pxon2offc[8];
+ u64 fdirustat_add;
+ u64 fdirustat_remove;
+ u64 fdirfstat_fadd;
+ u64 fdirfstat_fremove;
+ u64 fdirmatch;
+ u64 fdirmiss;
+ u64 fccrc;
+ u64 fclast;
+ u64 fcoerpdc;
+ u64 fcoeprc;
+ u64 fcoeptc;
+ u64 fcoedwrc;
+ u64 fcoedwtc;
+ u64 fcoe_noddp;
+ u64 fcoe_noddp_ext_buff;
+ u64 ldpcec;
+ u64 pcrc8ec;
+ u64 b2ospc;
+ u64 b2ogprc;
+ u64 o2bgptc;
+ u64 o2bspc;
+};
+
+/* forward declaration */
+struct txgbe_hw;
+
+/* iterator type for walking multicast address lists */
+typedef u8* (*txgbe_mc_addr_itr) (struct txgbe_hw *hw, u8 **mc_addr_ptr,
+ u32 *vmdq);
+
+/* Function pointer table */
+struct txgbe_eeprom_operations {
+ s32 (*init_params)(struct txgbe_hw *);
+ s32 (*read)(struct txgbe_hw *, u16, u16 *);
+ s32 (*read_buffer)(struct txgbe_hw *, u16, u16, u16 *);
+ s32 (*write)(struct txgbe_hw *, u16, u16);
+ s32 (*write_buffer)(struct txgbe_hw *, u16, u16, u16 *);
+ s32 (*validate_checksum)(struct txgbe_hw *, u16 *);
+ s32 (*update_checksum)(struct txgbe_hw *);
+ s32 (*calc_checksum)(struct txgbe_hw *);
+};
+
+struct txgbe_flash_operations {
+ s32 (*init_params)(struct txgbe_hw *);
+ s32 (*read_buffer)(struct txgbe_hw *, u32, u32, u32 *);
+ s32 (*write_buffer)(struct txgbe_hw *, u32, u32, u32 *);
+};
+
+struct txgbe_mac_operations {
+ s32 (*init_hw)(struct txgbe_hw *);
+ s32 (*reset_hw)(struct txgbe_hw *);
+ s32 (*start_hw)(struct txgbe_hw *);
+ s32 (*clear_hw_cntrs)(struct txgbe_hw *);
+ enum txgbe_media_type (*get_media_type)(struct txgbe_hw *);
+ s32 (*get_mac_addr)(struct txgbe_hw *, u8 *);
+ s32 (*get_san_mac_addr)(struct txgbe_hw *, u8 *);
+ s32 (*set_san_mac_addr)(struct txgbe_hw *, u8 *);
+ s32 (*get_device_caps)(struct txgbe_hw *, u16 *);
+ s32 (*get_wwn_prefix)(struct txgbe_hw *, u16 *, u16 *);
+ s32 (*stop_adapter)(struct txgbe_hw *);
+ s32 (*get_bus_info)(struct txgbe_hw *);
+ void (*set_lan_id)(struct txgbe_hw *);
+ s32 (*enable_rx_dma)(struct txgbe_hw *, u32);
+ s32 (*disable_sec_rx_path)(struct txgbe_hw *);
+ s32 (*enable_sec_rx_path)(struct txgbe_hw *);
+ s32 (*acquire_swfw_sync)(struct txgbe_hw *, u32);
+ void (*release_swfw_sync)(struct txgbe_hw *, u32);
+
+ /* Link */
+ void (*disable_tx_laser)(struct txgbe_hw *);
+ void (*enable_tx_laser)(struct txgbe_hw *);
+ void (*flap_tx_laser)(struct txgbe_hw *);
+ s32 (*setup_link)(struct txgbe_hw *, u32, bool);
+ s32 (*setup_mac_link)(struct txgbe_hw *, u32, bool);
+ s32 (*check_link)(struct txgbe_hw *, u32 *, bool *, bool);
+ s32 (*get_link_capabilities)(struct txgbe_hw *, u32 *,
+ bool *);
+ void (*set_rate_select_speed)(struct txgbe_hw *, u32);
+
+ /* Packet Buffer manipulation */
+ void (*setup_rxpba)(struct txgbe_hw *, int, u32, int);
+
+ /* LED */
+ s32 (*led_on)(struct txgbe_hw *, u32);
+ s32 (*led_off)(struct txgbe_hw *, u32);
+
+ /* RAR, Multicast, VLAN */
+ s32 (*set_rar)(struct txgbe_hw *, u32, u8 *, u64, u32);
+ s32 (*clear_rar)(struct txgbe_hw *, u32);
+ s32 (*insert_mac_addr)(struct txgbe_hw *, u8 *, u32);
+ s32 (*set_vmdq)(struct txgbe_hw *, u32, u32);
+ s32 (*set_vmdq_san_mac)(struct txgbe_hw *, u32);
+ s32 (*clear_vmdq)(struct txgbe_hw *, u32, u32);
+ s32 (*init_rx_addrs)(struct txgbe_hw *);
+ s32 (*update_uc_addr_list)(struct txgbe_hw *, u8 *, u32,
+ txgbe_mc_addr_itr);
+ s32 (*update_mc_addr_list)(struct txgbe_hw *, u8 *, u32,
+ txgbe_mc_addr_itr, bool clear);
+ s32 (*enable_mc)(struct txgbe_hw *);
+ s32 (*disable_mc)(struct txgbe_hw *);
+ s32 (*clear_vfta)(struct txgbe_hw *);
+ s32 (*set_vfta)(struct txgbe_hw *, u32, u32, bool);
+ s32 (*set_vlvf)(struct txgbe_hw *, u32, u32, bool, bool *);
+ s32 (*init_uta_tables)(struct txgbe_hw *);
+ void (*set_mac_anti_spoofing)(struct txgbe_hw *, bool, int);
+ void (*set_vlan_anti_spoofing)(struct txgbe_hw *, bool, int);
+
+ /* Flow Control */
+ s32 (*fc_enable)(struct txgbe_hw *);
+ s32 (*setup_fc)(struct txgbe_hw *);
+
+ /* Manageability interface */
+ s32 (*set_fw_drv_ver)(struct txgbe_hw *, u8, u8, u8, u8);
+ s32 (*get_thermal_sensor_data)(struct txgbe_hw *);
+ s32 (*init_thermal_sensor_thresh)(struct txgbe_hw *hw);
+ void (*get_rtrup2tc)(struct txgbe_hw *hw, u8 *map);
+ void (*disable_rx)(struct txgbe_hw *hw);
+ void (*enable_rx)(struct txgbe_hw *hw);
+ void (*set_source_address_pruning)(struct txgbe_hw *, bool,
+ unsigned int);
+ void (*set_ethertype_anti_spoofing)(struct txgbe_hw *, bool, int);
+ s32 (*dmac_config)(struct txgbe_hw *hw);
+ s32 (*setup_eee)(struct txgbe_hw *hw, bool enable_eee);
+};
+
+struct txgbe_phy_operations {
+ s32 (*identify)(struct txgbe_hw *);
+ s32 (*identify_sfp)(struct txgbe_hw *);
+ s32 (*init)(struct txgbe_hw *);
+ s32 (*reset)(struct txgbe_hw *);
+ s32 (*read_reg)(struct txgbe_hw *, u32, u32, u16 *);
+ s32 (*write_reg)(struct txgbe_hw *, u32, u32, u16);
+ s32 (*read_reg_mdi)(struct txgbe_hw *, u32, u32, u16 *);
+ s32 (*write_reg_mdi)(struct txgbe_hw *, u32, u32, u16);
+ u32 (*setup_link)(struct txgbe_hw *, u32, bool);
+ s32 (*setup_internal_link)(struct txgbe_hw *);
+ u32 (*setup_link_speed)(struct txgbe_hw *, u32, bool);
+ s32 (*check_link)(struct txgbe_hw *, u32 *, bool *);
+ s32 (*get_firmware_version)(struct txgbe_hw *, u16 *);
+ s32 (*read_i2c_byte)(struct txgbe_hw *, u8, u8, u8 *);
+ s32 (*write_i2c_byte)(struct txgbe_hw *, u8, u8, u8);
+ s32 (*read_i2c_sff8472)(struct txgbe_hw *, u8, u8 *);
+ s32 (*read_i2c_eeprom)(struct txgbe_hw *, u8, u8 *);
+ s32 (*write_i2c_eeprom)(struct txgbe_hw *, u8, u8);
+ s32 (*check_overtemp)(struct txgbe_hw *);
+};
+
+struct txgbe_eeprom_info {
+ struct txgbe_eeprom_operations ops;
+ enum txgbe_eeprom_type type;
+ u32 semaphore_delay;
+ u16 word_size;
+ u16 address_bits;
+ u16 word_page_size;
+ u16 ctrl_word_3;
+ u16 sw_region_offset;
+};
+
+struct txgbe_flash_info {
+ struct txgbe_flash_operations ops;
+ u32 semaphore_delay;
+ u32 dword_size;
+ u16 address_bits;
+};
+
+
+#define TXGBE_FLAGS_DOUBLE_RESET_REQUIRED 0x01
+struct txgbe_mac_info {
+ struct txgbe_mac_operations ops;
+ u8 addr[TXGBE_ETH_LENGTH_OF_ADDRESS];
+ u8 perm_addr[TXGBE_ETH_LENGTH_OF_ADDRESS];
+ u8 san_addr[TXGBE_ETH_LENGTH_OF_ADDRESS];
+ /* prefix for World Wide Node Name (WWNN) */
+ u16 wwnn_prefix;
+ /* prefix for World Wide Port Name (WWPN) */
+ u16 wwpn_prefix;
+#define TXGBE_MAX_MTA 128
+#define TXGBE_MAX_VFTA_ENTRIES 128
+ u32 mta_shadow[TXGBE_MAX_MTA];
+ s32 mc_filter_type;
+ u32 mcft_size;
+ u32 vft_shadow[TXGBE_MAX_VFTA_ENTRIES];
+ u32 vft_size;
+ u32 num_rar_entries;
+ u32 rar_highwater;
+ u32 rx_pb_size;
+ u32 max_tx_queues;
+ u32 max_rx_queues;
+ u32 orig_sr_pcs_ctl2;
+ u32 orig_sr_pma_mmd_ctl1;
+ u32 orig_sr_an_mmd_ctl;
+ u32 orig_sr_an_mmd_adv_reg2;
+ u32 orig_vr_xs_or_pcs_mmd_digi_ctl1;
+ u8 san_mac_rar_index;
+ bool get_link_status;
+ u16 max_msix_vectors;
+ bool arc_subsystem_valid;
+ bool orig_link_settings_stored;
+ bool autotry_restart;
+ u8 flags;
+ struct txgbe_thermal_sensor_data thermal_sensor_data;
+ bool thermal_sensor_enabled;
+ struct txgbe_dmac_config dmac_config;
+ bool set_lben;
+};
+
+struct txgbe_phy_info {
+ struct txgbe_phy_operations ops;
+ enum txgbe_phy_type type;
+ u32 addr;
+ u32 id;
+ enum txgbe_sfp_type sfp_type;
+ bool sfp_setup_needed;
+ u32 revision;
+ enum txgbe_media_type media_type;
+ u32 phy_semaphore_mask;
+ u8 lan_id; /* to be deleted */
+ txgbe_autoneg_advertised autoneg_advertised;
+ enum txgbe_smart_speed smart_speed;
+ bool smart_speed_active;
+ bool multispeed_fiber;
+ bool reset_if_overtemp;
+ txgbe_physical_layer link_mode;
+};
+
+#include "txgbe_mbx.h"
+
+struct txgbe_mbx_operations {
+ void (*init_params)(struct txgbe_hw *hw);
+ s32 (*read)(struct txgbe_hw *, u32 *, u16, u16);
+ s32 (*write)(struct txgbe_hw *, u32 *, u16, u16);
+ s32 (*read_posted)(struct txgbe_hw *, u32 *, u16, u16);
+ s32 (*write_posted)(struct txgbe_hw *, u32 *, u16, u16);
+ s32 (*check_for_msg)(struct txgbe_hw *, u16);
+ s32 (*check_for_ack)(struct txgbe_hw *, u16);
+ s32 (*check_for_rst)(struct txgbe_hw *, u16);
+};
+
+struct txgbe_mbx_stats {
+ u32 msgs_tx;
+ u32 msgs_rx;
+
+ u32 acks;
+ u32 reqs;
+ u32 rsts;
+};
+
+struct txgbe_mbx_info {
+ struct txgbe_mbx_operations ops;
+ struct txgbe_mbx_stats stats;
+ u32 timeout;
+ u32 udelay;
+ u32 v2p_mailbox;
+ u16 size;
+};
+
+enum txgbe_reset_type {
+ TXGBE_LAN_RESET = 0,
+ TXGBE_SW_RESET,
+ TXGBE_GLOBAL_RESET
+};
+
+enum txgbe_link_status {
+ TXGBE_LINK_STATUS_NONE = 0,
+ TXGBE_LINK_STATUS_KX,
+ TXGBE_LINK_STATUS_KX4
+};
+
+struct txgbe_hw {
+ u8 __iomem *hw_addr;
+ void *back;
+ struct txgbe_mac_info mac;
+ struct txgbe_addr_filter_info addr_ctrl;
+ struct txgbe_fc_info fc;
+ struct txgbe_phy_info phy;
+ struct txgbe_eeprom_info eeprom;
+ struct txgbe_flash_info flash;
+ struct txgbe_bus_info bus;
+ struct txgbe_mbx_info mbx;
+ u16 device_id;
+ u16 vendor_id;
+ u16 subsystem_device_id;
+ u16 subsystem_vendor_id;
+ u8 revision_id;
+ bool adapter_stopped;
+ int api_version;
+ enum txgbe_reset_type reset_type;
+ bool force_full_reset;
+ bool allow_unsupported_sfp;
+ bool wol_enabled;
+#if defined(TXGBE_SUPPORT_KYLIN_FT)
+ bool Fdir_enabled;
+#endif
+ MTD_DEV phy_dev;
+ enum txgbe_link_status link_status;
+ u16 subsystem_id;
+ u16 tpid[8];
+};
+
+#define TCALL(hw, func, args...) (((hw)->func != NULL) \
+ ? (hw)->func((hw), ##args) : TXGBE_NOT_IMPLEMENTED)
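+/* TCALL dispatches through the function-pointer tables above and returns
+ * TXGBE_NOT_IMPLEMENTED when an op is unset, e.g. (assumed usage):
+ *
+ *	status = TCALL(hw, mac.ops.check_link, &speed, &link_up, false);
+ */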
+
+/* Error Codes */
+#define TXGBE_ERR 100
+#define TXGBE_NOT_IMPLEMENTED 0x7FFFFFFF
+/* (-TXGBE_ERR, TXGBE_ERR): reserved for non-txgbe defined error code */
+#define TXGBE_ERR_NOSUPP -(TXGBE_ERR+0)
+#define TXGBE_ERR_EEPROM -(TXGBE_ERR+1)
+#define TXGBE_ERR_EEPROM_CHECKSUM -(TXGBE_ERR+2)
+#define TXGBE_ERR_PHY -(TXGBE_ERR+3)
+#define TXGBE_ERR_CONFIG -(TXGBE_ERR+4)
+#define TXGBE_ERR_PARAM -(TXGBE_ERR+5)
+#define TXGBE_ERR_MAC_TYPE -(TXGBE_ERR+6)
+#define TXGBE_ERR_UNKNOWN_PHY -(TXGBE_ERR+7)
+#define TXGBE_ERR_LINK_SETUP -(TXGBE_ERR+8)
+#define TXGBE_ERR_ADAPTER_STOPPED -(TXGBE_ERR+9)
+#define TXGBE_ERR_INVALID_MAC_ADDR -(TXGBE_ERR+10)
+#define TXGBE_ERR_DEVICE_NOT_SUPPORTED -(TXGBE_ERR+11)
+#define TXGBE_ERR_MASTER_REQUESTS_PENDING -(TXGBE_ERR+12)
+#define TXGBE_ERR_INVALID_LINK_SETTINGS -(TXGBE_ERR+13)
+#define TXGBE_ERR_AUTONEG_NOT_COMPLETE -(TXGBE_ERR+14)
+#define TXGBE_ERR_RESET_FAILED -(TXGBE_ERR+15)
+#define TXGBE_ERR_SWFW_SYNC -(TXGBE_ERR+16)
+#define TXGBE_ERR_PHY_ADDR_INVALID -(TXGBE_ERR+17)
+#define TXGBE_ERR_I2C -(TXGBE_ERR+18)
+#define TXGBE_ERR_SFP_NOT_SUPPORTED -(TXGBE_ERR+19)
+#define TXGBE_ERR_SFP_NOT_PRESENT -(TXGBE_ERR+20)
+#define TXGBE_ERR_SFP_NO_INIT_SEQ_PRESENT -(TXGBE_ERR+21)
+#define TXGBE_ERR_NO_SAN_ADDR_PTR -(TXGBE_ERR+22)
+#define TXGBE_ERR_FDIR_REINIT_FAILED -(TXGBE_ERR+23)
+#define TXGBE_ERR_EEPROM_VERSION -(TXGBE_ERR+24)
+#define TXGBE_ERR_NO_SPACE -(TXGBE_ERR+25)
+#define TXGBE_ERR_OVERTEMP -(TXGBE_ERR+26)
+#define TXGBE_ERR_UNDERTEMP -(TXGBE_ERR+27)
+#define TXGBE_ERR_FC_NOT_NEGOTIATED -(TXGBE_ERR+28)
+#define TXGBE_ERR_FC_NOT_SUPPORTED -(TXGBE_ERR+29)
+#define TXGBE_ERR_SFP_SETUP_NOT_COMPLETE -(TXGBE_ERR+30)
+#define TXGBE_ERR_PBA_SECTION -(TXGBE_ERR+31)
+#define TXGBE_ERR_INVALID_ARGUMENT -(TXGBE_ERR+32)
+#define TXGBE_ERR_HOST_INTERFACE_COMMAND -(TXGBE_ERR+33)
+#define TXGBE_ERR_OUT_OF_MEM -(TXGBE_ERR+34)
+#define TXGBE_ERR_FEATURE_NOT_SUPPORTED -(TXGBE_ERR+36)
+#define TXGBE_ERR_EEPROM_PROTECTED_REGION -(TXGBE_ERR+37)
+#define TXGBE_ERR_FDIR_CMD_INCOMPLETE -(TXGBE_ERR+38)
+#define TXGBE_ERR_FLASH_LOADING_FAILED -(TXGBE_ERR+39)
+#define TXGBE_ERR_XPCS_POWER_UP_FAILED -(TXGBE_ERR+40)
+#define TXGBE_ERR_FW_RESP_INVALID -(TXGBE_ERR+41)
+#define TXGBE_ERR_PHY_INIT_NOT_DONE -(TXGBE_ERR+42)
+#define TXGBE_ERR_TIMEOUT -(TXGBE_ERR+43)
+#define TXGBE_ERR_TOKEN_RETRY -(TXGBE_ERR+44)
+#define TXGBE_ERR_REGISTER -(TXGBE_ERR+45)
+#define TXGBE_ERR_MBX -(TXGBE_ERR+46)
+#define TXGBE_ERR_MNG_ACCESS_FAILED -(TXGBE_ERR+47)
+
+/**
+ * register operations
+ **/
+/* read register */
+#define TXGBE_DEAD_READ_RETRIES 10
+#define TXGBE_DEAD_READ_REG 0xdeadbeefU
+#define TXGBE_DEAD_READ_REG64 0xdeadbeefdeadbeefULL
+#define TXGBE_FAILED_READ_REG 0xffffffffU
+#define TXGBE_FAILED_READ_REG64 0xffffffffffffffffULL
+
+static inline bool TXGBE_REMOVED(void __iomem *addr)
+{
+ return unlikely(!addr);
+}
+
+static inline u32
+txgbe_rd32(u8 __iomem *base)
+{
+ return readl(base);
+}
+
+static inline u32
+rd32(struct txgbe_hw *hw, u32 reg)
+{
+ u8 __iomem *base = READ_ONCE(hw->hw_addr);
+ u32 val = TXGBE_FAILED_READ_REG;
+
+ if (unlikely(!base))
+ return val;
+
+ val = txgbe_rd32(base + reg);
+
+ return val;
+}
+#define rd32a(a, reg, offset) ( \
+ rd32((a), (reg) + ((offset) << 2)))
+
+static inline u32
+rd32m(struct txgbe_hw *hw, u32 reg, u32 mask)
+{
+ u8 __iomem *base = READ_ONCE(hw->hw_addr);
+ u32 val = TXGBE_FAILED_READ_REG;
+
+ if (unlikely(!base))
+ return val;
+
+ val = txgbe_rd32(base + reg);
+ if (unlikely(val == TXGBE_FAILED_READ_REG))
+ return val;
+
+ return val & mask;
+}
+
+/* write register */
+static inline void
+txgbe_wr32(u8 __iomem *base, u32 val)
+{
+ writel(val, base);
+}
+
+static inline void
+wr32(struct txgbe_hw *hw, u32 reg, u32 val)
+{
+ u8 __iomem *base = READ_ONCE(hw->hw_addr);
+
+ if (unlikely(!base))
+ return;
+
+ txgbe_wr32(base + reg, val);
+}
+#define wr32a(a, reg, off, val) \
+ wr32((a), (reg) + ((off) << 2), (val))
+
+static inline void
+wr32m(struct txgbe_hw *hw, u32 reg, u32 mask, u32 field)
+{
+ u8 __iomem *base = READ_ONCE(hw->hw_addr);
+ u32 val;
+
+ if (unlikely(!base))
+ return;
+
+ val = txgbe_rd32(base + reg);
+ if (unlikely(val == TXGBE_FAILED_READ_REG))
+ return;
+
+ val = ((val & ~mask) | (field & mask));
+ txgbe_wr32(base + reg, val);
+}
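+/* Example: wr32m(hw, reg, 0x00000003U, 0x00000001U) reads the register,
+ * clears bits 1:0, sets bit 0 and writes the result back, leaving all
+ * other bits untouched.
+ */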
+
+/* poll register */
+#define TXGBE_MDIO_TIMEOUT 1000
+#define TXGBE_I2C_TIMEOUT 1000
+#define TXGBE_SPI_TIMEOUT 1000
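+/* po32m(): poll 'reg' until (value & mask) == (field & mask). The total
+ * wait is roughly 'usecs' microseconds, split over 'count' iterations
+ * (10 us steps when count is 0); returns 0 on a match and
+ * TXGBE_ERR_TIMEOUT otherwise.
+ */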
+static inline s32
+po32m(struct txgbe_hw *hw, u32 reg,
+ u32 mask, u32 field, int usecs, int count)
+{
+ int loop;
+
+ loop = (count ? count : (usecs + 9) / 10);
+ usecs = (loop ? (usecs + loop - 1) / loop : 0);
+
+ count = loop;
+ do {
+ u32 value = rd32(hw, reg);
+ if ((value & mask) == (field & mask)) {
+ break;
+ }
+
+ if (loop-- <= 0)
+ break;
+
+ udelay(usecs);
+ } while (true);
+
+ return (count - loop <= count ? 0 : TXGBE_ERR_TIMEOUT);
+}
+
+#define TXGBE_WRITE_FLUSH(H) rd32(H, TXGBE_MIS_PWR)
+
+#endif /* _TXGBE_TYPE_H_ */
--
2.25.1
1
4
[PATCH kernel-4.19] crypto: x86/crc32c-intel - Don't match some Zhaoxin CPUs
by LeoLiu-oc 26 Mar '21
The crc32c-intel driver matches CPUs supporting X86_FEATURE_XMM4_2.
On platforms with Zhaoxin CPUs that support this X86 feature, when
crc32c-intel and crc32c-generic are both registered, the system uses
crc32c-intel because its .cra_priority is greater than that of
crc32c-generic.
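A minimal sketch of that selection (illustrative, assuming the standard
kernel crypto API): any in-kernel user asking for "crc32c" is handed the
registered implementation with the highest .cra_priority:

	#include <crypto/hash.h>

	struct crypto_shash *tfm;

	/* Resolves to crc32c-intel rather than crc32c-generic when both
	 * are registered, because of the higher cra_priority. */
	tfm = crypto_alloc_shash("crc32c", 0, 0);
	if (!IS_ERR(tfm))
		crypto_free_shash(tfm);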
When running the lmbench3 Create and Delete file tests on partitions
with ext4 metadata checksums enabled, we found that the crc32c-generic
driver gets about 20% better performance than crc32c-intel on some
Zhaoxin CPUs.
To obtain that gain, these Zhaoxin CPUs should use the crc32c-generic
driver, so remove support for them from crc32c-intel.
This patch was submitted to the mainline kernel but was not accepted by
the upstream maintainer, whose reason was "Then create a BUG flag for
it,". We think this is not a CPU bug for Zhaoxin CPUs, so the crc32c
driver should be patched for them rather than reporting a BUG.
https://lkml.org/lkml/2020/12/11/308
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
arch/x86/crypto/crc32c-intel_glue.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/arch/x86/crypto/crc32c-intel_glue.c b/arch/x86/crypto/crc32c-intel_glue.c
index 5773e1161072..d994bd49761a 100644
--- a/arch/x86/crypto/crc32c-intel_glue.c
+++ b/arch/x86/crypto/crc32c-intel_glue.c
@@ -242,8 +242,13 @@ MODULE_DEVICE_TABLE(x86cpu, crc32c_cpu_id);
static int __init crc32c_intel_mod_init(void)
{
+ struct cpuinfo_x86 *c = &boot_cpu_data;
if (!x86_match_cpu(crc32c_cpu_id))
return -ENODEV;
+ if ((c->x86_vendor == X86_VENDOR_ZHAOXIN || c->x86_vendor == X86_VENDOR_CENTAUR) &&
+     (c->x86 <= 7 && c->x86_model <= 59)) {
+ return -ENODEV;
+ }
#ifdef CONFIG_X86_64
if (boot_cpu_has(X86_FEATURE_PCLMULQDQ)) {
alg.update = crc32c_pcl_intel_update;
--
2.20.1
3
2
26 Mar '21
Some Zhaoxin xHCI controllers follow the usb3.1 spec but only support
the gen1 speed of 5 Gbps, while in the Linux kernel the root hub speed
is reported as 10 Gbps whenever the xHCI controller claims usb3.1
support.
To fix this issue, read the usb speed IDs supported by the xHCI
controller to determine the root hub speed.
The patch is scheduled to be submitted to the kernel mainline in 2021.
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
drivers/usb/host/xhci.c | 22 +++++++++++++++++++++-
1 file changed, 21 insertions(+), 1 deletion(-)
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
index 69617b8f5e00..af659b15258e 100644
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -5059,6 +5059,8 @@ int xhci_gen_setup(struct usb_hcd *hcd, xhci_get_quirks_t get_quirks)
*/
struct device *dev = hcd->self.sysdev;
unsigned int minor_rev;
+ struct pci_dev *pdev = to_pci_dev(dev);
+ u8 ssp_support = 1, i, j;
int retval;
/* Accept arbitrarily long scatter-gather lists */
@@ -5113,9 +5115,27 @@ int xhci_gen_setup(struct usb_hcd *hcd, xhci_get_quirks_t get_quirks)
hcd->self.root_hub->speed = USB_SPEED_SUPER_PLUS;
break;
}
+
+ /* usb3.1 has gen1 and gen2. Some Zhaoxin xHCI controllers follow the
+ * usb3.1 spec but only support gen1.
+ */
+ if (pdev->vendor == PCI_VENDOR_ID_ZHAOXIN) {
+ ssp_support = 0;
+ for (j = 0; j < xhci->num_port_caps; j++) {
+ for (i = 0; i < xhci->port_caps[j].psi_count; i++) {
+ if (XHCI_EXT_PORT_PSIV(xhci->port_caps[j].psi[i]) >= 5)
+ ssp_support = 1;
+ }
+ }
+ if (ssp_support != 1) {
+ hcd->speed = HCD_USB3;
+ hcd->self.root_hub->speed = USB_SPEED_SUPER;
+ }
+ }
+
xhci_info(xhci, "Host supports USB 3.%x %sSuperSpeed\n",
minor_rev,
- minor_rev ? "Enhanced " : "");
+ ssp_support ? "Enhanced " : "");
xhci->usb3_rhub.hcd = hcd;
/* xHCI private pointer was set in xhci_pci_probe for the second
--
2.20.1
3
2
[PATCH kernel-4.19] xhci: fix issue of cross page boundary in TRB prefetch mechanism
by LeoLiu-oc 26 Mar '21
On some Zhaoxin platforms, xHCI prefetches TRBs to improve performance.
However, this TRB prefetch mechanism may cross a page boundary and
access memory that does not belong to xHCI. To fix this issue, allocate
two pages for each TRB segment and use only the first page.
The patch is scheduled to be submitted to the kernel mainline in 2021.
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
drivers/usb/host/xhci-mem.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
index 9e87c282a743..f3c0eb0d4622 100644
--- a/drivers/usb/host/xhci-mem.c
+++ b/drivers/usb/host/xhci-mem.c
@@ -2385,6 +2385,7 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
{
dma_addr_t dma;
struct device *dev = xhci_to_hcd(xhci)->self.sysdev;
+ struct pci_dev *pdev = to_pci_dev(dev);
unsigned int val, val2;
u64 val_64;
u32 page_size, temp;
@@ -2450,8 +2451,13 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
* and our use of dma addresses in the trb_address_map radix tree needs
* TRB_SEGMENT_SIZE alignment, so we pick the greater alignment need.
*/
- xhci->segment_pool = dma_pool_create("xHCI ring segments", dev,
- TRB_SEGMENT_SIZE, TRB_SEGMENT_SIZE, xhci->page_size);
+ /* With xHCI TRB prefetch: fix cross page boundary access issue in IOV environment */
+ if ((pdev->vendor == PCI_VENDOR_ID_ZHAOXIN) && (pdev->device == 0x9202 || pdev->device == 0x9203)) {
+ xhci->segment_pool = dma_pool_create("xHCI ring segments", dev,
+ TRB_SEGMENT_SIZE*2, TRB_SEGMENT_SIZE*2, xhci->page_size*2);
+ } else
+ xhci->segment_pool = dma_pool_create("xHCI ring segments", dev,
+ TRB_SEGMENT_SIZE, TRB_SEGMENT_SIZE, xhci->page_size);
/* See Table 46 and Note on Figure 55 */
xhci->device_pool = dma_pool_create("xHCI input/output contexts", dev,
--
2.20.1
3
2
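The change only alters pool geometry. Each ring segment is allocated with doubled size and alignment, so an in-use TRB_SEGMENT_SIZE segment can never end right at a page edge the prefetcher would run past; the second half is deliberately wasted. Condensed:

	/* Affected Zhaoxin devices (0x9202/0x9203): double the segment
	 * size and alignment; only the first TRB_SEGMENT_SIZE bytes are
	 * used, so the TRB prefetcher cannot touch foreign memory. */
	xhci->segment_pool = dma_pool_create("xHCI ring segments", dev,
			TRB_SEGMENT_SIZE * 2, TRB_SEGMENT_SIZE * 2,
			xhci->page_size * 2);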
[PATCH kernel-4.19 0/2] Drop vendor dependency for X86_UMIP for Zhaoxin CPUs
by LeoLiu-oc 26 Mar '21
by LeoLiu-oc 26 Mar '21
26 Mar '21
All Zhaoxin family 7 CPUs support the UMIP feature. Since X86_INTEL_UMIP
and X86_AMD_UMIP have both been renamed to the generic X86_UMIP, remove
the vendor dependency for Zhaoxin CPUs.
LeoLiu-oc (2):
x86/Kconfig: Rename UMIP config parameter
x86/Kconfig: Drop vendor dependency for X86_UMIP
arch/x86/Kconfig | 15 +++++++--------
arch/x86/include/asm/disabled-features.h | 2 +-
arch/x86/include/asm/umip.h | 4 ++--
arch/x86/kernel/Makefile | 2 +-
tools/arch/x86/include/asm/disabled-features.h | 2 +-
5 files changed, 12 insertions(+), 13 deletions(-)
--
2.20.1
2
1
[PATCH kernel-4.19] x86/apic: Mask IOAPIC entries when disabling the local APIC
by LeoLiu-oc 26 Mar '21
by LeoLiu-oc 26 Mar '21
26 Mar '21
mainline inclusion
from mainline-5.6
commit 0f378d73d429d5f73fe2f00be4c9a15dbe9779ee
category: x86/apic
--------------------------------
When a system suspends, the local APIC is disabled in the suspend sequence,
but the IOAPIC is left in the current state. This means unmasked interrupt
lines stay unmasked. This is usually the case for IOAPIC pin 9 to which the
ACPI interrupt is connected.
That means that in suspended state the IOAPIC can respond to an external
interrupt, e.g. the wakeup via keyboard/RTC/ACPI, but the interrupt message
cannot be handled by the disabled local APIC. As a consequence the Remote
IRR bit is set, but the local APIC does not send an EOI to acknowledge
it. This causes the affected interrupt line to become stale and the stale
Remote IRR bit will cause a hang when __synchronize_hardirq() is invoked
for that interrupt line.
To prevent this, mask all IOAPIC entries before disabling the local
APIC. The resume code already has the unmask operation inside.
[ tglx: Massaged changelog ]
Signed-off-by: Tony W Wang-oc <TonyWWang-oc(a)zhaoxin.com>
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Link:
https://lore.kernel.org/r/1579076539-7267-1-git-send-email-TonyWWang-oc@zha…
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
arch/x86/kernel/apic/apic.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
index cd216bdc9e90..f9c5efd07381 100644
--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -2633,6 +2633,13 @@ static int lapic_suspend(void)
#endif
local_irq_save(flags);
+
+ /*
+ * Mask IOAPIC before disabling the local APIC to prevent stale IRR
+ * entries on some implementations.
+ */
+ mask_ioapic_entries();
+
disable_local_APIC();
irq_remapping_disable();
--
2.20.1
2
1
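The resulting suspend-time ordering, condensed from lapic_suspend() (context paraphrased, not the exact 4.19 code):

	local_irq_save(flags);

	/* Mask every IOAPIC RTE first: a masked line cannot deliver an
	 * interrupt message, so Remote IRR can no longer go stale. */
	mask_ioapic_entries();

	/* Only now is it safe to turn the local APIC off. */
	disable_local_APIC();

	irq_remapping_disable();
	local_irq_restore(flags);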
26 Mar '21
On the Zhaoxin ZX-100 project, xHCI cannot work normally after resume
from a system Sx state. To fix this issue, reinitialize xHCI on resume
from Sx instead of restoring its saved state.
The patch is scheduled to be submitted to the kernel mainline in 2021.
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
drivers/usb/host/xhci-pci.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
index e1b2dec099f2..1812d28b117d 100644
--- a/drivers/usb/host/xhci-pci.c
+++ b/drivers/usb/host/xhci-pci.c
@@ -259,6 +259,9 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
if (pdev->vendor == PCI_VENDOR_ID_TI && pdev->device == 0x8241)
xhci->quirks |= XHCI_LIMIT_ENDPOINT_INTERVAL_7;
+ if (pdev->vendor == PCI_VENDOR_ID_ZHAOXIN && pdev->device == 0x9202)
+ xhci->quirks |= XHCI_RESET_ON_RESUME;
+
if ((pdev->vendor == PCI_VENDOR_ID_BROADCOM ||
pdev->vendor == PCI_VENDOR_ID_CAVIUM) &&
pdev->device == 0x9026)
--
2.20.1
2
1
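XHCI_RESET_ON_RESUME is an existing quirk; roughly, xhci_resume() treats a quirked controller like one returning from hibernation and goes through a full reset and re-initialization instead of restoring the saved operational registers. Paraphrased, not the exact 4.19 code:

	/* in xhci_resume(), paraphrased */
	if (hibernated || (xhci->quirks & XHCI_RESET_ON_RESUME))
		reinit = true;	/* halt + reset + xhci_init(), skip restore */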
The Over Current condition is not standardized in the UHCI spec.
Zhaoxin UHCI controllers report Over Current as active-low, while Intel
controllers report it as active-high, so adjust the bit value accordingly.
The patch is scheduled to be submitted to the kernel mainline in 2021.
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
drivers/usb/host/uhci-pci.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/usb/host/uhci-pci.c b/drivers/usb/host/uhci-pci.c
index 0dd944277c99..3c0d4c43b640 100644
--- a/drivers/usb/host/uhci-pci.c
+++ b/drivers/usb/host/uhci-pci.c
@@ -134,6 +134,9 @@ static int uhci_pci_init(struct usb_hcd *hcd)
if (to_pci_dev(uhci_dev(uhci))->vendor == PCI_VENDOR_ID_INTEL)
device_set_wakeup_capable(uhci_dev(uhci), true);
+ if (to_pci_dev(uhci_dev(uhci))->vendor == PCI_VENDOR_ID_ZHAOXIN)
+ uhci->oc_low = 1;
+
/* Set up pointers to PCI-specific functions */
uhci->reset_hc = uhci_pci_reset_hc;
uhci->check_and_reset_hc = uhci_pci_check_and_reset_hc;
--
2.20.1
2
1
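Conceptually, oc_low just inverts the polarity of the port-status over-current bit when the root-hub port status is assembled. A sketch, assuming the oc_low flag added here is consulted wherever USBPORTSC_OC is read (paraphrased, not the exact uhci-hub.c code):

	int oc_active;

	if (uhci->oc_low)
		oc_active = !(portsc & USBPORTSC_OC);	/* Zhaoxin: active-low */
	else
		oc_active = (portsc & USBPORTSC_OC);	/* Intel: active-high */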
26 Mar '21
Add LPM U1/U2 feature support for Zhaoxin xHCI controllers.
The patch is scheduled to be submitted to the kernel mainline in 2021.
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
drivers/usb/host/xhci-pci.c | 4 ++++
drivers/usb/host/xhci.c | 34 ++++++++++++++++++++++++++++++++--
drivers/usb/host/xhci.h | 1 +
3 files changed, 37 insertions(+), 2 deletions(-)
diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
index 6b828d79a6d8..e1b2dec099f2 100644
--- a/drivers/usb/host/xhci-pci.c
+++ b/drivers/usb/host/xhci-pci.c
@@ -228,6 +228,10 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
}
if (pdev->vendor == PCI_VENDOR_ID_VIA)
xhci->quirks |= XHCI_RESET_ON_RESUME;
+ if (pdev->vendor == PCI_VENDOR_ID_ZHAOXIN) {
+ xhci->quirks |= XHCI_LPM_SUPPORT;
+ xhci->quirks |= XHCI_ZHAOXIN_HOST;
+ }
/* See https://bugzilla.kernel.org/show_bug.cgi?id=79511 */
if (pdev->vendor == PCI_VENDOR_ID_VIA &&
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
index af659b15258e..fce053a27ade 100644
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -4569,7 +4569,7 @@ static u16 xhci_calculate_u1_timeout(struct xhci_hcd *xhci,
{
unsigned long long timeout_ns;
- if (xhci->quirks & XHCI_INTEL_HOST)
+ if (xhci->quirks & (XHCI_INTEL_HOST | XHCI_ZHAOXIN_HOST))
timeout_ns = xhci_calculate_intel_u1_timeout(udev, desc);
else
timeout_ns = udev->u1_params.sel;
@@ -4633,7 +4633,7 @@ static u16 xhci_calculate_u2_timeout(struct xhci_hcd *xhci,
{
unsigned long long timeout_ns;
- if (xhci->quirks & XHCI_INTEL_HOST)
+ if (xhci->quirks & (XHCI_INTEL_HOST | XHCI_ZHAOXIN_HOST))
timeout_ns = xhci_calculate_intel_u2_timeout(udev, desc);
else
timeout_ns = udev->u2_params.sel;
@@ -4738,12 +4738,42 @@ static int xhci_check_intel_tier_policy(struct usb_device *udev,
return -E2BIG;
}
+static int xhci_check_zhaoxin_tier_policy(struct usb_device *udev,
+ enum usb3_link_state state)
+{
+ struct usb_device *parent;
+ unsigned int num_hubs;
+ char *state_name;
+
+ if (state == USB3_LPM_U1)
+ state_name = "U1";
+ else if (state == USB3_LPM_U2)
+ state_name = "U2";
+ else
+ state_name = "Unknown";
+ /* Don't enable U1/U2 if the device is on an external hub */
+ for (parent = udev->parent, num_hubs = 0; parent->parent;
+ parent = parent->parent)
+ num_hubs++;
+
+ if (num_hubs < 1)
+ return 0;
+
+ dev_dbg(&udev->dev, "Disabling %s link state for device" \
+ " below external hub.\n", state_name);
+ dev_dbg(&udev->dev, "Plug device into root port " \
+ "to decrease power consumption.\n");
+ return -E2BIG;
+}
+
static int xhci_check_tier_policy(struct xhci_hcd *xhci,
struct usb_device *udev,
enum usb3_link_state state)
{
if (xhci->quirks & XHCI_INTEL_HOST)
return xhci_check_intel_tier_policy(udev, state);
+ else if (xhci->quirks & XHCI_ZHAOXIN_HOST)
+ return xhci_check_zhaoxin_tier_policy(udev, state);
else
return 0;
}
diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
index 7a4195f8cd1c..069390a1f2ac 100644
--- a/drivers/usb/host/xhci.h
+++ b/drivers/usb/host/xhci.h
@@ -1872,6 +1872,7 @@ struct xhci_hcd {
#define XHCI_ZERO_64B_REGS BIT_ULL(32)
#define XHCI_RESET_PLL_ON_DISCONNECT BIT_ULL(34)
#define XHCI_SNPS_BROKEN_SUSPEND BIT_ULL(35)
+#define XHCI_ZHAOXIN_HOST BIT_ULL(36)
#define XHCI_DISABLE_SPARSE BIT_ULL(38)
unsigned int num_active_eps;
--
2.20.1
2
1
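The new tier policy mirrors the Intel one: U1/U2 is only allowed for devices attached directly to a root port. The hub-depth walk in xhci_check_zhaoxin_tier_policy() works like this (depth 0 means the parent is the root hub itself):

	struct usb_device *parent;
	unsigned int num_hubs = 0;

	/* The root hub has parent == NULL, so the walk stops once
	 * `parent` is the root hub; each iteration is an external hub. */
	for (parent = udev->parent; parent->parent; parent = parent->parent)
		num_hubs++;

	if (num_hubs)		/* at least one external hub in the path */
		return -E2BIG;	/* refuse to enable U1/U2 */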
Lots of Zhaoxin PCIe components have no ACS Capability Structure, but
do have an ACS-like capability that ensures DMA isolation. This patch
allows isolated devices to be directly assigned to different VMs
through the IOMMU.
LeoLiu-oc (3):
PCI: Add Zhaoxin Vendor ID
PCI: Add ACS quirk for Zhaoxin multi-function devices
PCI: Add ACS quirk for Zhaoxin Root/Downstream Ports
drivers/pci/quirks.c | 31 +++++++++++++++++++++++++++++++
include/linux/pci_ids.h | 2 ++
2 files changed, 33 insertions(+)
--
2.20.1
2
1
26 Mar '21
Add Zhaoxin Serial ATA support for Zhaoxin CPUs.
The patch is scheduled to be submitted to the kernel mainline in 2021.
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
drivers/ata/Kconfig | 8 +
drivers/ata/Makefile | 1 +
drivers/ata/sata_zhaoxin.c | 395 +++++++++++++++++++++++++++++++++++++
3 files changed, 404 insertions(+)
create mode 100644 drivers/ata/sata_zhaoxin.c
diff --git a/drivers/ata/Kconfig b/drivers/ata/Kconfig
index 99698d7fe585..20cec29cc4d3 100644
--- a/drivers/ata/Kconfig
+++ b/drivers/ata/Kconfig
@@ -478,6 +478,14 @@ config SATA_ULI
If unsure, say N.
+config SATA_ZHAOXIN
+ tristate "ZhaoXin SATA support"
+ depends on PCI
+ help
+ This option enables support for ZhaoXin Serial ATA.
+
+ If unsure, say N.
+
config SATA_VIA
tristate "VIA SATA support"
depends on PCI
diff --git a/drivers/ata/Makefile b/drivers/ata/Makefile
index d21cdd83f7ab..2d9220311187 100644
--- a/drivers/ata/Makefile
+++ b/drivers/ata/Makefile
@@ -44,6 +44,7 @@ obj-$(CONFIG_SATA_SIL) += sata_sil.o
obj-$(CONFIG_SATA_SIS) += sata_sis.o
obj-$(CONFIG_SATA_SVW) += sata_svw.o
obj-$(CONFIG_SATA_ULI) += sata_uli.o
+obj-$(CONFIG_SATA_ZHAOXIN) += sata_zhaoxin.o
obj-$(CONFIG_SATA_VIA) += sata_via.o
obj-$(CONFIG_SATA_VITESSE) += sata_vsc.o
diff --git a/drivers/ata/sata_zhaoxin.c b/drivers/ata/sata_zhaoxin.c
new file mode 100644
index 000000000000..b0b16f6f364b
--- /dev/null
+++ b/drivers/ata/sata_zhaoxin.c
@@ -0,0 +1,395 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * sata_zhaoxin.c - ZhaoXin Serial ATA controllers
+ *
+ * Maintained by: Tejun Heo <tj(a)kernel.org>
+ * Please ALWAYS copy linux-ide(a)vger.kernel.org
+ * on emails.
+ *
+ * Copyright 2003-2004 Red Hat, Inc. All rights reserved.
+ * Copyright 2003-2004 Jeff Garzik
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * libata documentation is available via 'make {ps|pdf}docs',
+ * as Documentation/DocBook/libata.*
+ *
+ * Hardware documentation available under NDA.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/blkdev.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <scsi/scsi.h>
+#include <scsi/scsi_cmnd.h>
+#include <scsi/scsi_host.h>
+#include <linux/libata.h>
+
+#define DRV_NAME "sata_zx"
+#define DRV_VERSION "2.6.1"
+
+enum board_ids_enum {
+ cnd001,
+};
+
+enum {
+ SATA_CHAN_ENAB = 0x40, /* SATA channel enable */
+ SATA_INT_GATE = 0x41, /* SATA interrupt gating */
+ SATA_NATIVE_MODE = 0x42, /* Native mode enable */
+ PATA_UDMA_TIMING = 0xB3, /* PATA timing for DMA/ cable detect */
+ PATA_PIO_TIMING = 0xAB, /* PATA timing register */
+
+ PORT0 = (1 << 1),
+ PORT1 = (1 << 0),
+ ALL_PORTS = PORT0 | PORT1,
+
+ NATIVE_MODE_ALL = (1 << 7) | (1 << 6) | (1 << 5) | (1 << 4),
+
+ SATA_EXT_PHY = (1 << 6), /* 0==use PATA, 1==ext phy */
+};
+
+static int szx_init_one(struct pci_dev *pdev, const struct pci_device_id *ent);
+static int cnd001_scr_read(struct ata_link *link, unsigned int scr, u32 *val);
+static int cnd001_scr_write(struct ata_link *link, unsigned int scr, u32 val);
+static int szx_hardreset(struct ata_link *link, unsigned int *class, unsigned long deadline);
+
+static void szx_tf_load(struct ata_port *ap, const struct ata_taskfile *tf);
+
+static const struct pci_device_id szx_pci_tbl[] = {
+ { PCI_VDEVICE(ZHAOXIN, 0x9002), cnd001 },
+ { PCI_VDEVICE(ZHAOXIN, 0x9003), cnd001 },
+
+ { } /* terminate list */
+};
+
+static struct pci_driver szx_pci_driver = {
+ .name = DRV_NAME,
+ .id_table = szx_pci_tbl,
+ .probe = szx_init_one,
+#ifdef CONFIG_PM_SLEEP
+ .suspend = ata_pci_device_suspend,
+ .resume = ata_pci_device_resume,
+#endif
+ .remove = ata_pci_remove_one,
+};
+
+static struct scsi_host_template szx_sht = {
+ ATA_BMDMA_SHT(DRV_NAME),
+};
+
+static struct ata_port_operations szx_base_ops = {
+ .inherits = &ata_bmdma_port_ops,
+ .sff_tf_load = szx_tf_load,
+};
+
+static struct ata_port_operations cnd001_ops = {
+ .inherits = &szx_base_ops,
+ .hardreset = szx_hardreset,
+ .scr_read = cnd001_scr_read,
+ .scr_write = cnd001_scr_write,
+};
+
+static struct ata_port_info cnd001_port_info = {
+ .flags = ATA_FLAG_SATA | ATA_FLAG_SLAVE_POSS,
+ .pio_mask = ATA_PIO4,
+ .mwdma_mask = ATA_MWDMA2,
+ .udma_mask = ATA_UDMA6,
+ .port_ops = &cnd001_ops,
+};
+
+
+MODULE_AUTHOR("Jeff Garzik");
+MODULE_DESCRIPTION("SCSI low-level driver for ZX SATA controllers");
+MODULE_LICENSE("GPL");
+MODULE_DEVICE_TABLE(pci, szx_pci_tbl);
+MODULE_VERSION(DRV_VERSION);
+
+static int szx_hardreset(struct ata_link *link, unsigned int *class, unsigned long deadline)
+{
+ int rc;
+
+ rc = sata_std_hardreset(link, class, deadline);
+ if (!rc || rc == -EAGAIN) {
+ struct ata_port *ap = link->ap;
+ int pmp = link->pmp;
+ int tmprc;
+
+ if (pmp) {
+ ap->ops->sff_dev_select(ap, pmp);
+ tmprc = ata_sff_wait_ready(&ap->link, deadline);
+ } else {
+ tmprc = ata_sff_wait_ready(link, deadline);
+ }
+ if (tmprc) {
+ ata_link_err(link, "COMRESET failed for wait (errno=%d)\n", rc);
+ } else {
+ ata_link_err(link, "wait for bsy success\n");
+ }
+ ata_link_err(link, "COMRESET success (errno=%d) ap=%d link %d\n", rc,
link->ap->port_no, link->pmp);
+ }else{
+ ata_link_err(link, "COMRESET failed (errno=%d) ap=%d link %d\n", rc,
link->ap->port_no, link->pmp);
+ }
+ return rc;
+}
+
+static int cnd001_scr_read(struct ata_link *link, unsigned int scr, u32 *val)
+{
+ static const u8 ipm_tbl[] = { 1, 2, 6, 0 };
+ struct pci_dev *pdev = to_pci_dev(link->ap->host->dev);
+ int slot = 2 * link->ap->port_no + link->pmp;
+ u32 v = 0;
+ u8 raw;
+
+ switch (scr) {
+ case SCR_STATUS:
+ pci_read_config_byte(pdev, 0xA0 + slot, &raw);
+
+ /* read the DET field, bit0 and 1 of the config byte */
+ v |= raw & 0x03;
+
+ /* read the SPD field, bit4 of the configure byte */
+ v |= raw & 0x30;
+
+ /* read the IPM field, bit2 and 3 of the config byte */
+ v |= ((ipm_tbl[(raw >> 2) & 0x3])<<8);
+ break;
+
+ case SCR_ERROR:
+ /* devices 0x9002/0x9003 use 0xA8 as base */
+ WARN_ON(pdev->device != 0x9002 && pdev->device != 0x9003);
+ pci_write_config_byte(pdev, 0x42, slot);
+ pci_read_config_dword(pdev, 0xA8, &v);
+ break;
+
+ case SCR_CONTROL:
+ pci_read_config_byte(pdev, 0xA4 + slot, &raw);
+
+ /* read the DET field, bit0 and bit1 */
+ v |= ((raw & 0x02) << 1) | (raw & 0x01);
+
+ /* read the IPM field, bit2 and bit3 */
+ v |= ((raw >> 2) & 0x03) << 8;
+
+ break;
+
+ default:
+ return -EINVAL;
+ }
+
+ *val = v;
+ return 0;
+}
+
+static int cnd001_scr_write(struct ata_link *link, unsigned int scr, u32 val)
+{
+ struct pci_dev *pdev = to_pci_dev(link->ap->host->dev);
+ int slot = 2 * link->ap->port_no + link->pmp;
+ u32 v = 0;
+
+ WARN_ON(pdev == NULL);
+
+ switch (scr) {
+ case SCR_ERROR:
+ /* devices 0x9002/0x9003 use 0xA8 as base */
+ WARN_ON(pdev->device != 0x9002 && pdev->device != 0x9003);
+ pci_write_config_byte(pdev, 0x42, slot);
+ pci_write_config_dword(pdev, 0xA8, val);
+ return 0;
+
+ case SCR_CONTROL:
+ /* set the DET field */
+ v |= ((val & 0x4) >> 1) | (val & 0x1);
+
+ /* set the IPM field */
+ v |= ((val >> 8) & 0x3) << 2;
+
+
+ pci_write_config_byte(pdev, 0xA4 + slot, v);
+
+
+ return 0;
+
+ default:
+ return -EINVAL;
+ }
+}
+
+
+/**
+ * szx_tf_load - send taskfile registers to host controller
+ * @ap: Port to which output is sent
+ * @tf: ATA taskfile register set
+ *
+ * Outputs ATA taskfile to standard ATA host controller.
+ *
+ * This is to fix the internal bug of zx chipsets, which will
+ * reset the device register after changing the IEN bit on ctl
+ * register.
+ */
+static void szx_tf_load(struct ata_port *ap, const struct ata_taskfile *tf)
+{
+ struct ata_taskfile ttf;
+
+ if (tf->ctl != ap->last_ctl) {
+ ttf = *tf;
+ ttf.flags |= ATA_TFLAG_DEVICE;
+ tf = &ttf;
+ }
+ ata_sff_tf_load(ap, tf);
+}
+
+static const unsigned int szx_bar_sizes[] = {
+ 8, 4, 8, 4, 16, 256
+};
+
+static const unsigned int cnd001_bar_sizes0[] = {
+ 8, 4, 8, 4, 16, 0
+};
+
+static const unsigned int cnd001_bar_sizes1[] = {
+ 8, 4, 0, 0, 16, 0
+};
+
+static int cnd001_prepare_host(struct pci_dev *pdev, struct ata_host **r_host)
+{
+ const struct ata_port_info *ppi0[] = { &cnd001_port_info, NULL };
+ const struct ata_port_info *ppi1[] = { &cnd001_port_info, &ata_dummy_port_info };
+ struct ata_host *host;
+ int i, rc;
+
+ if (pdev->device == 0x9002)
+ rc = ata_pci_bmdma_prepare_host(pdev, ppi0, &host);
+ else if (pdev->device == 0x9003)
+ rc = ata_pci_bmdma_prepare_host(pdev, ppi1, &host);
+ else
+ rc = -EINVAL;
+ if (rc)
+ return rc;
+ *r_host = host;
+
+
+ /* cnd001 9002 hosts four sata ports as M/S of the two channels */
+ /* cnd001 9003 hosts two sata ports as M/S of the one channel */
+ for (i = 0; i < host->n_ports; i++)
+ ata_slave_link_init(host->ports[i]);
+
+ return 0;
+}
+
+static void szx_configure(struct pci_dev *pdev, int board_id)
+{
+ u8 tmp8;
+
+ pci_read_config_byte(pdev, PCI_INTERRUPT_LINE, &tmp8);
+ dev_info(&pdev->dev, "routed to hard irq line %d\n",
+ (int) (tmp8 & 0xf0) == 0xf0 ? 0 : tmp8 & 0x0f);
+
+ /* make sure SATA channels are enabled */
+ pci_read_config_byte(pdev, SATA_CHAN_ENAB, &tmp8);
+ if ((tmp8 & ALL_PORTS) != ALL_PORTS) {
+ dev_dbg(&pdev->dev, "enabling SATA channels (0x%x)\n",
+ (int)tmp8);
+ tmp8 |= ALL_PORTS;
+ pci_write_config_byte(pdev, SATA_CHAN_ENAB, tmp8);
+ }
+
+ /* make sure interrupts for each channel sent to us */
+ pci_read_config_byte(pdev, SATA_INT_GATE, &tmp8);
+ if ((tmp8 & ALL_PORTS) != ALL_PORTS) {
+ dev_dbg(&pdev->dev, "enabling SATA channel interrupts (0x%x)\n",
+ (int) tmp8);
+ tmp8 |= ALL_PORTS;
+ pci_write_config_byte(pdev, SATA_INT_GATE, tmp8);
+ }
+
+ /* make sure native mode is enabled */
+ pci_read_config_byte(pdev, SATA_NATIVE_MODE, &tmp8);
+ if ((tmp8 & NATIVE_MODE_ALL) != NATIVE_MODE_ALL) {
+ dev_dbg(&pdev->dev,
+ "enabling SATA channel native mode (0x%x)\n",
+ (int) tmp8);
+ tmp8 |= NATIVE_MODE_ALL;
+ pci_write_config_byte(pdev, SATA_NATIVE_MODE, tmp8);
+ }
+}
+
+static int szx_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+{
+ unsigned int i;
+ int rc;
+ struct ata_host *host = NULL;
+ int board_id = (int) ent->driver_data;
+ const unsigned int *bar_sizes;
+ int legacy_mode = 0;
+
+ ata_print_version_once(&pdev->dev, DRV_VERSION);
+
+ if (pdev->device == 0x9002 || pdev->device == 0x9003) {
+ if ((pdev->class >> 8) == PCI_CLASS_STORAGE_IDE) {
+ u8 tmp8, mask;
+
+ /* TODO: What if one channel is in native mode ... */
+ pci_read_config_byte(pdev, PCI_CLASS_PROG, &tmp8);
+ mask = (1 << 2) | (1 << 0);
+ if ((tmp8 & mask) != mask)
+ legacy_mode = 1;
+ }
+ if (legacy_mode)
+ return -EINVAL;
+ }
+
+
+ rc = pcim_enable_device(pdev);
+ if (rc)
+ return rc;
+
+ if (board_id == cnd001 && pdev->device == 0x9002)
+ bar_sizes = &cnd001_bar_sizes0[0];
+ else if (board_id == cnd001 && pdev->device == 0x9003)
+ bar_sizes = &cnd001_bar_sizes1[0];
+ else
+ bar_sizes = &szx_bar_sizes[0];
+
+ for (i = 0; i < ARRAY_SIZE(szx_bar_sizes); i++)
+ if ((pci_resource_start(pdev, i) == 0) ||
+ (pci_resource_len(pdev, i) < bar_sizes[i])) {
+ if (bar_sizes[i] == 0)
+ continue;
+ dev_err(&pdev->dev,
+ "invalid PCI BAR %u (sz 0x%llx, val 0x%llx)\n",
+ i,
+ (unsigned long long)pci_resource_start(pdev, i),
+ (unsigned long long)pci_resource_len(pdev, i));
+ return -ENODEV;
+ }
+
+ switch (board_id) {
+ case cnd001:
+ rc = cnd001_prepare_host(pdev, &host);
+ break;
+ default:
+ rc = -EINVAL;
+ }
+ if (rc)
+ return rc;
+
+ szx_configure(pdev, board_id);
+
+ pci_set_master(pdev);
+ return ata_host_activate(host, pdev->irq, ata_bmdma_interrupt,
+ IRQF_SHARED, &szx_sht);
+}
+
+module_pci_driver(szx_pci_driver);
--
2.20.1
2
1
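A note on the CND001 design: the SATA SCR registers live in PCI config space rather than MMIO. cnd001_scr_read()/cnd001_scr_write() address them per link as follows (condensed from the driver above):

	/* one slot per link: two channels, master/slave each */
	int slot = 2 * link->ap->port_no + link->pmp;	/* 0..3 on 9002 */

	pci_read_config_byte(pdev, 0xA0 + slot, &raw);	/* SStatus bits */
	pci_read_config_byte(pdev, 0xA4 + slot, &raw);	/* SControl bits */

	pci_write_config_byte(pdev, 0x42, slot);	/* select SError bank */
	pci_read_config_dword(pdev, 0xA8, &v);		/* shared SError dword */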
New Zhaoxin family 7 CPUs are not affected by SPECTRE_V2 or SWAPGS.
Extend the cpu_vuln_whitelist flags with a NO_SPECTRE_V2 bit and add
these CPUs to the CPU vulnerability whitelist.
LeoLiu-oc (2):
x86/speculation/spectre_v2: Exclude Zhaoxin CPUs from SPECTRE_V2
x86/speculation/swapgs: Exclude Zhaoxin CPUs from SWAPGS vulnerability
arch/x86/kernel/cpu/common.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
--
2.20.1
2
1
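For reference, the shape of the change in arch/x86/kernel/cpu/common.c after both patches, sketched from the upstream cpu_vuln_whitelist table (bit value and entry macros as upstream; treat this as illustrative, not the literal backport):

	#define NO_SPECTRE_V2	BIT(8)	/* new whitelist flag */

	static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
		/* ... existing entries ... */

		/* Zhaoxin family 7: not affected by SPECTRE_V2 / SWAPGS */
		VULNWL_CENTAUR(7, NO_SPECTRE_V2 | NO_SWAPGS),
		VULNWL_ZHAOXIN(7, NO_SPECTRE_V2 | NO_SWAPGS),
		{}
	};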
[PATCH kernel-4.19 0/2] Disable WBINVD operation when entering C3 or above for Zhaoxin CPUs
by LeoLiu-oc 26 Mar '21
by LeoLiu-oc 26 Mar '21
26 Mar '21
All Zhaoxin CPUs that support C3 share cache, and caches should not be
flushed by software when entering a C3-type state. On all recent
Zhaoxin platforms, ARB_DISABLE is a nop, so set bm_control to zero to
indicate that ARB_DISABLE is not required when entering a C3-type
state.
LeoLiu-oc (2):
x86/power: Optimize C3 entry on Centaur CPUs
x86/acpi/cstate: Add Zhaoxin processors support for cache flush policy
in C3
arch/x86/kernel/acpi/cstate.c | 27 +++++++++++++++++++++++++++
1 file changed, 27 insertions(+)
--
2.20.1
2
1
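Conceptually, both patches reduce to a vendor check in the C-state flag fixup (a sketch, assuming the bm_check/bm_control fixup path in arch/x86/kernel/acpi/cstate.c):

	if (c->x86_vendor == X86_VENDOR_CENTAUR ||
	    c->x86_vendor == X86_VENDOR_ZHAOXIN) {
		/* Caches are shared and stay coherent across C3:
		 * no WBINVD (cache flush) needed on entry. */
		flags->bm_check = 1;
		/* ARB_DISABLE is a nop on recent platforms. */
		flags->bm_control = 0;
	}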
[PATCH kernel-4.19] ACPI, x86: Add Zhaoxin processors support for NONSTOP TSC
by LeoLiu-oc 26 Mar '21
by LeoLiu-oc 26 Mar '21
26 Mar '21
mainline inclusion
from mainline-5.3
commit 773b2f30a3fc026f3ed121a8b945b0ae19b64ec5
category: ACPI
--------------------------------
Zhaoxin CPUs have the NONSTOP TSC feature, so enable the ACPI
driver support for it.
Signed-off-by: Tony W Wang-oc <TonyWWang-oc(a)zhaoxin.com>
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Cc: "hpa(a)zytor.com" <hpa(a)zytor.com>
Cc: "gregkh(a)linuxfoundation.org" <gregkh(a)linuxfoundation.org>
Cc: "rjw(a)rjwysocki.net" <rjw(a)rjwysocki.net>
Cc: "lenb(a)kernel.org" <lenb(a)kernel.org>
Cc: David Wang <DavidWang(a)zhaoxin.com>
Cc: "Cooper Yan(BJ-RD)" <CooperYan(a)zhaoxin.com>
Cc: "Qiyuan Wang(BJ-RD)" <QiyuanWang(a)zhaoxin.com>
Cc: "Herry Yang(BJ-RD)" <HerryYang(a)zhaoxin.com>
Link:
https://lkml.kernel.org/r/d1cfd937dabc44518d42038b55522c53@zhaoxin.com
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
drivers/acpi/acpi_pad.c | 1 +
drivers/acpi/processor_idle.c | 1 +
2 files changed, 2 insertions(+)
diff --git a/drivers/acpi/acpi_pad.c b/drivers/acpi/acpi_pad.c
index a47676a55b84..c06306e6ac92 100644
--- a/drivers/acpi/acpi_pad.c
+++ b/drivers/acpi/acpi_pad.c
@@ -73,6 +73,7 @@ static void power_saving_mwait_init(void)
case X86_VENDOR_HYGON:
case X86_VENDOR_AMD:
case X86_VENDOR_INTEL:
+ case X86_VENDOR_ZHAOXIN:
/*
* AMD Fam10h TSC will tick in all
* C/P/S0/S1 states when this bit is set.
diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c
index b2131c4ea124..6336f956a144 100644
--- a/drivers/acpi/processor_idle.c
+++ b/drivers/acpi/processor_idle.c
@@ -209,6 +209,7 @@ static void tsc_check_state(int state)
case X86_VENDOR_AMD:
case X86_VENDOR_INTEL:
case X86_VENDOR_CENTAUR:
+ case X86_VENDOR_ZHAOXIN:
/*
* AMD Fam10h TSC will tick in all
* C/P/S0/S1 states when this bit is set.
--
2.20.1
2
1
This set of patches adds support for Zhaoxin family 7 CPUs. With these
patches, the kernel can identify Zhaoxin CPU features and Zhaoxin CPU
topology information.
LeoLiu-oc (6):
x86/cpu: Create Zhaoxin processors architecture support file
x86/cpu: Remove redundant cpu_detect_cache_sizes() call
x86/cpu/centaur: Replace two-condition switch-case with an if
statement
x86/cpu/centaur: Add Centaur family >=7 CPUs initialization support
x86/cpufeatures: Add Zhaoxin feature bits
x86/cpu: Add detect extended topology for Zhaoxin CPUs
MAINTAINERS | 6 +
arch/x86/Kconfig.cpu | 13 +++
arch/x86/include/asm/cpufeatures.h | 21 ++++
arch/x86/include/asm/processor.h | 3 +-
arch/x86/kernel/cpu/Makefile | 1 +
arch/x86/kernel/cpu/centaur.c | 47 +++++---
arch/x86/kernel/cpu/zhaoxin.c | 170 +++++++++++++++++++++++++++++
7 files changed, 243 insertions(+), 18 deletions(-)
create mode 100644 arch/x86/kernel/cpu/zhaoxin.c
--
2.20.1
2
1
25 Mar '21
openEuler inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I3D58V
CVE: NA
----------------------------------
No unlock operation is performed on the mpam_devices_lock before the return statement, which may lead to a deadlock.
Signed-off-by: Zhang Ming <154842638(a)qq.com>
Reported-by: Jian Cheng <cj.chengjian(a)huawei.com>
Suggested-by: Jian Cheng <cj.chengjian(a)huawei.com>
---
arch/arm64/kernel/mpam/mpam_device.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/mpam/mpam_device.c b/arch/arm64/kernel/mpam/mpam_device.c
index 1aca24f570d3..d8511527970a 100644
--- a/arch/arm64/kernel/mpam/mpam_device.c
+++ b/arch/arm64/kernel/mpam/mpam_device.c
@@ -560,8 +560,10 @@ static void __init mpam_enable(struct work_struct *work)
mutex_lock(&mpam_devices_lock);
mpam_enable_squash_features();
err = mpam_allocate_config();
- if (err)
+ if (err) {
+ mutex_unlock(&mpam_devices_lock);
return;
+ }
mutex_unlock(&mpam_devices_lock);
mpam_enable_irqs();
--
2.25.1
3
2
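Since nothing else runs under the lock after the error check, an equivalent and slightly more robust shape is to unlock unconditionally before acting on the error, so future early returns cannot reintroduce the leak (a sketch of the same function, not the applied patch):

	mutex_lock(&mpam_devices_lock);
	mpam_enable_squash_features();
	err = mpam_allocate_config();
	mutex_unlock(&mpam_devices_lock);
	if (err)
		return;

	mpam_enable_irqs();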
25 Mar '21
x86/Kconfig: Rename UMIP config parameter
mainline inclusion
from mainline-5.7
commit bdb04a1abbf92c998f1afb5f00a037f2edaec1f7
category: x86/Kconfig
--------------------------------
Some Centaur family 7 CPUs and Zhaoxin family 7 CPUs support the UMIP
feature too. The text size growth which UMIP adds is ~1K and distro
kernels enable it anyway so remove the vendor dependency.
[ bp: Rewrite commit message. ]
Signed-off-by: Tony W Wang-oc <TonyWWang-oc(a)zhaoxin.com>
Signed-off-by: Borislav Petkov <bp(a)suse.de>
Link:
https://lkml.kernel.org/r/1583733990-2587-1-git-send-email-TonyWWang-oc@zha…
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
arch/x86/Kconfig | 1 -
1 file changed, 1 deletion(-)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 5e00e8900748..dd4dfb80ac6c 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1869,7 +1869,6 @@ config X86_SMAP
config X86_UMIP
def_bool y
- depends on CPU_SUP_INTEL || CPU_SUP_AMD
prompt "User Mode Instruction Prevention" if EXPERT
---help---
User Mode Instruction Prevention (UMIP) is a security feature in
--
2.20.1
1
0
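What UMIP gates, seen from user space: the legacy SGDT, SLDT, SIDT, SMSW and STR instructions. A hypothetical test program (not part of the patch; on a UMIP-capable CPU with this option enabled, the access is either emulated by the kernel or the process takes a #GP and dies with SIGSEGV):

#include <stdio.h>

int main(void)
{
	unsigned char gdtr[10];	/* 2-byte limit + 8-byte base on x86-64 */

	/* Leaks descriptor-table addresses when UMIP is off; UMIP makes
	 * this fault in user mode (CPL 3). */
	__asm__ volatile("sgdt %0" : "=m"(gdtr));

	printf("sgdt succeeded (UMIP off or kernel-emulated)\n");
	return 0;
}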
25 Mar '21
mainline inclusion
from mainline-5.5
commit b971880fe79f4042aaaf426744a5b19521bf77b3
category: x86/Kconfig
--------------------------------
AMD 2nd generation EPYC processors support the UMIP (User-Mode
Instruction Prevention) feature. So, rename X86_INTEL_UMIP to
generic X86_UMIP and modify the text to cover both Intel and AMD.
[ bp: take of the disabled-features.h copy in tools/ too. ]
Signed-off-by: Babu Moger <babu.moger(a)amd.com>
Signed-off-by: Borislav Petkov <bp(a)suse.de>
Cc: Andy Lutomirski <luto(a)kernel.org>
Cc: "H. Peter Anvin" <hpa(a)zytor.com>
Cc: Ingo Molnar <mingo(a)redhat.com>
Cc: Ricardo Neri <ricardo.neri-calderon(a)linux.intel.com>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: "x86(a)kernel.org" <x86(a)kernel.org>
Link:
https://lkml.kernel.org/r/157298912544.17462.2018334793891409521.stgit@napl…
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
arch/x86/Kconfig | 16 ++++++++--------
arch/x86/include/asm/disabled-features.h | 2 +-
arch/x86/include/asm/umip.h | 4 ++--
arch/x86/kernel/Makefile | 2 +-
tools/arch/x86/include/asm/disabled-features.h | 2 +-
5 files changed, 13 insertions(+), 13 deletions(-)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 2b0695630031..5e00e8900748 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1867,16 +1867,16 @@ config X86_SMAP
If unsure, say Y.
-config X86_INTEL_UMIP
+config X86_UMIP
def_bool y
- depends on CPU_SUP_INTEL
- prompt "Intel User Mode Instruction Prevention" if EXPERT
+ depends on CPU_SUP_INTEL || CPU_SUP_AMD
+ prompt "User Mode Instruction Prevention" if EXPERT
---help---
- The User Mode Instruction Prevention (UMIP) is a security
- feature in newer Intel processors. If enabled, a general
- protection fault is issued if the SGDT, SLDT, SIDT, SMSW
- or STR instructions are executed in user mode. These instructions
- unnecessarily expose information about the hardware state.
+ User Mode Instruction Prevention (UMIP) is a security feature in
+ some x86 processors. If enabled, a general protection fault is
+ issued if the SGDT, SLDT, SIDT, SMSW or STR instructions are
+ executed in user mode. These instructions unnecessarily expose
+ information about the hardware state.
The vast majority of applications do not use these instructions.
For the very few that do, software emulation is provided in
diff --git a/arch/x86/include/asm/disabled-features.h b/arch/x86/include/asm/disabled-features.h
index 33833d1909af..9d9da3487425 100644
--- a/arch/x86/include/asm/disabled-features.h
+++ b/arch/x86/include/asm/disabled-features.h
@@ -16,7 +16,7 @@
# define DISABLE_MPX (1<<(X86_FEATURE_MPX & 31))
#endif
-#ifdef CONFIG_X86_INTEL_UMIP
+#ifdef CONFIG_X86_UMIP
# define DISABLE_UMIP 0
#else
# define DISABLE_UMIP (1<<(X86_FEATURE_UMIP & 31))
diff --git a/arch/x86/include/asm/umip.h b/arch/x86/include/asm/umip.h
index db43f2a0d92c..aeed98c3c9e1 100644
--- a/arch/x86/include/asm/umip.h
+++ b/arch/x86/include/asm/umip.h
@@ -4,9 +4,9 @@
#include <linux/types.h>
#include <asm/ptrace.h>
-#ifdef CONFIG_X86_INTEL_UMIP
+#ifdef CONFIG_X86_UMIP
bool fixup_umip_exception(struct pt_regs *regs);
#else
static inline bool fixup_umip_exception(struct pt_regs *regs) { return
false; }
-#endif /* CONFIG_X86_INTEL_UMIP */
+#endif /* CONFIG_X86_UMIP */
#endif /* _ASM_X86_UMIP_H */
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index da0b6bc090f3..66835d9a6f72 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -134,7 +134,7 @@ obj-$(CONFIG_EFI) += sysfb_efi.o
obj-$(CONFIG_PERF_EVENTS) += perf_regs.o
obj-$(CONFIG_TRACING) += tracepoint.o
obj-$(CONFIG_SCHED_MC_PRIO) += itmt.o
-obj-$(CONFIG_X86_INTEL_UMIP) += umip.o
+obj-$(CONFIG_X86_UMIP) += umip.o
obj-$(CONFIG_UNWINDER_ORC) += unwind_orc.o
obj-$(CONFIG_UNWINDER_FRAME_POINTER) += unwind_frame.o
diff --git a/tools/arch/x86/include/asm/disabled-features.h b/tools/arch/x86/include/asm/disabled-features.h
index 33833d1909af..9d9da3487425 100644
--- a/tools/arch/x86/include/asm/disabled-features.h
+++ b/tools/arch/x86/include/asm/disabled-features.h
@@ -16,7 +16,7 @@
# define DISABLE_MPX (1<<(X86_FEATURE_MPX & 31))
#endif
-#ifdef CONFIG_X86_INTEL_UMIP
+#ifdef CONFIG_X86_UMIP
# define DISABLE_UMIP 0
#else
# define DISABLE_UMIP (1<<(X86_FEATURE_UMIP & 31))
--
2.20.1
1
0
[PATCH kernel-4.19 3/3] PCI: Add ACS quirk for Zhaoxin Root/Downstream Ports
by LeoLiu-oc 25 Mar '21
by LeoLiu-oc 25 Mar '21
25 Mar '21
mainline inclusion
from mainline-5.6.9
commit 299bd044a6f332b4a6c8f708575c27cad70a35c1
category: PCI
Adapt to current kernel code
--------------------------------
Many Zhaoxin Root Ports and Switch Downstream Ports do provide an
ACS-like capability but have no ACS Capability Structure. Peer-to-peer
transactions can be blocked between these ports, so add a quirk so that
devices behind them can be assigned to different IOMMU groups.
Link:
https://lore.kernel.org/r/20200327091148.5190-4-RaymondPang-oc@zhaoxin.com
Signed-off-by: Raymond Pang <RaymondPang-oc(a)zhaoxin.com>
Signed-off-by: Bjorn Helgaas <bhelgaas(a)google.com>
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
drivers/pci/quirks.c | 27 +++++++++++++++++++++++++++
1 file changed, 27 insertions(+)
diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
index 8f072a511fdd..5bfe2457aea9 100644
--- a/drivers/pci/quirks.c
+++ b/drivers/pci/quirks.c
@@ -4281,6 +4281,31 @@ DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, 0x2f0d, PCI_CLASS_NOT_DEFINED
 DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, 0x2f0e, PCI_CLASS_NOT_DEFINED, 8,
 quirk_relaxedordering_disable);
+/*
+ * Many Zhaoxin Root Ports and Switch Downstream Ports have no ACS capability.
+ * But the implementation could block peer-to-peer transactions between them
+ * and provide ACS-like functionality.
+ */
+static int pci_quirk_zhaoxin_pcie_ports_acs(struct pci_dev *dev, u16 acs_flags)
+{
+ u16 flags = (PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF | PCI_ACS_SV);
+ int ret = acs_flags & ~flags ? 0 : 1;
+
+ if (!pci_is_pcie(dev) ||
+ ((pci_pcie_type(dev) != PCI_EXP_TYPE_ROOT_PORT) &&
+ (pci_pcie_type(dev) != PCI_EXP_TYPE_DOWNSTREAM)))
+ return -ENOTTY;
+
+ switch (dev->device) {
+ case 0x0710 ... 0x071e:
+ case 0x0721:
+ case 0x0723 ... 0x0732:
+ return ret;
+ }
+
+ return false;
+}
+
/*
 * The AMD ARM A1100 (aka "SEATTLE") SoC has a bug in its PCIe Root Complex
 * where Upstream Transaction Layer Packets with the Relaxed Ordering
@@ -4771,6 +4796,8 @@ static const struct pci_dev_acs_enabled {
{ PCI_VENDOR_ID_ZHAOXIN, 0x3038, pci_quirk_mf_endpoint_acs },
{ PCI_VENDOR_ID_ZHAOXIN, 0x3104, pci_quirk_mf_endpoint_acs },
{ PCI_VENDOR_ID_ZHAOXIN, 0x9083, pci_quirk_mf_endpoint_acs },
+ /* Zhaoxin Root/Downstream Ports */
+ { PCI_VENDOR_ID_ZHAOXIN, PCI_ANY_ID, pci_quirk_zhaoxin_pcie_ports_acs },
{ 0 }
};
--
2.20.1
1
0
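How these table entries take effect, paraphrased from drivers/pci/quirks.c: pci_dev_specific_acs_enabled() walks pci_dev_acs_enabled, and the first matching callback that returns >= 0 decides; 1 means the requested ACS flags are effectively provided, so the PCI core can put the device in its own IOMMU group:

int pci_dev_specific_acs_enabled(struct pci_dev *dev, u16 acs_flags)
{
	const struct pci_dev_acs_enabled *i;
	int ret;

	for (i = pci_dev_acs_enabled; i->acs_enabled; i++) {
		if ((i->vendor == dev->vendor ||
		     i->vendor == (u16)PCI_ANY_ID) &&
		    (i->device == dev->device ||
		     i->device == (u16)PCI_ANY_ID)) {
			ret = i->acs_enabled(dev, acs_flags);
			if (ret >= 0)
				return ret;
		}
	}
	return -ENOTTY;
}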
[PATCH kernel-4.19 2/3] PCI: Add ACS quirk for Zhaoxin multi-function devices
by LeoLiu-oc 25 Mar '21
by LeoLiu-oc 25 Mar '21
25 Mar '21
mainline inclusion
from mainline-5.6.9
commit 0325837c51cb7c9a5bd3e354ac0c0cda0667d50e
category: PCI
--------------------------------
Some Zhaoxin endpoints are implemented as multi-function devices without an
ACS capability, but they actually don't support peer-to-peer transactions.
Add ACS quirks to declare DMA isolation.
Link:
https://lore.kernel.org/r/20200327091148.5190-3-RaymondPang-oc@zhaoxin.com
Signed-off-by: Raymond Pang <RaymondPang-oc(a)zhaoxin.com>
Signed-off-by: Bjorn Helgaas <bhelgaas(a)google.com>
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
drivers/pci/quirks.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
index edb6138a9e7f..8f072a511fdd 100644
--- a/drivers/pci/quirks.c
+++ b/drivers/pci/quirks.c
@@ -4767,6 +4767,10 @@ static const struct pci_dev_acs_enabled {
{ PCI_VENDOR_ID_AMPERE, 0xE00B, pci_quirk_xgene_acs },
{ PCI_VENDOR_ID_AMPERE, 0xE00C, pci_quirk_xgene_acs },
{ PCI_VENDOR_ID_BROADCOM, 0xD714, pci_quirk_brcm_acs },
+ /* Zhaoxin multi-function devices */
+ { PCI_VENDOR_ID_ZHAOXIN, 0x3038, pci_quirk_mf_endpoint_acs },
+ { PCI_VENDOR_ID_ZHAOXIN, 0x3104, pci_quirk_mf_endpoint_acs },
+ { PCI_VENDOR_ID_ZHAOXIN, 0x9083, pci_quirk_mf_endpoint_acs },
{ 0 }
};
--
2.20.1
1
0
mainline inclusion
from mainline-5.6.9
commit 3375590623e4a132b19a8740512f4deb95728933
category: PCI
--------------------------------
Add Zhaoxin Vendor ID to pci_ids.h
Link:
https://lore.kernel.org/r/20200327091148.5190-2-RaymondPang-oc@zhaoxin.com
Signed-off-by: Raymond Pang <RaymondPang-oc(a)zhaoxin.com>
Signed-off-by: Bjorn Helgaas <bhelgaas(a)google.com>
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
include/linux/pci_ids.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
index 277d8f87d551..31e226dcd923 100644
--- a/include/linux/pci_ids.h
+++ b/include/linux/pci_ids.h
@@ -2596,6 +2596,8 @@
#define PCI_VENDOR_ID_AMAZON 0x1d0f
+#define PCI_VENDOR_ID_ZHAOXIN 0x1d17
+
#define PCI_VENDOR_ID_HYGON 0x1d94
#define PCI_VENDOR_ID_TEKRAM 0x1de1
--
2.20.1
1
0
25 Mar '21
Add Zhaoxin NB HDAC codec support.
The patch is scheduled to be submitted to the kernel mainline in 2021.
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
sound/pci/hda/patch_hdmi.c | 26 ++++++++++++++++++++++++++
1 file changed, 26 insertions(+)
diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
index d21a4eb1ca49..8a10e660c616 100644
--- a/sound/pci/hda/patch_hdmi.c
+++ b/sound/pci/hda/patch_hdmi.c
@@ -3843,6 +3843,20 @@ static int patch_via_hdmi(struct hda_codec *codec)
return patch_simple_hdmi(codec, VIAHDMI_CVT_NID, VIAHDMI_PIN_NID);
}
+/* ZHAOXIN HDMI Implementation */
+static int patch_zx_hdmi(struct hda_codec *codec)
+{
+ int err;
+
+ err = patch_generic_hdmi(codec);
+ if (err)
+ return err;
+
+ /* Disable sticky-stream handling on Zhaoxin NB HDMI codecs. */
+ codec->no_sticky_stream = 1;
+
+ return 0;
+}
+
/*
* patch entries
*/
@@ -3932,6 +3946,12 @@ HDA_CODEC_ENTRY(0x11069f80, "VX900 HDMI/DP", patch_via_hdmi),
HDA_CODEC_ENTRY(0x11069f81, "VX900 HDMI/DP", patch_via_hdmi),
HDA_CODEC_ENTRY(0x11069f84, "VX11 HDMI/DP", patch_generic_hdmi),
HDA_CODEC_ENTRY(0x11069f85, "VX11 HDMI/DP", patch_generic_hdmi),
+HDA_CODEC_ENTRY(0x11069f86, "CND001 HDMI/DP", patch_generic_hdmi),
+HDA_CODEC_ENTRY(0x11069f87, "CND001 HDMI/DP", patch_generic_hdmi),
+HDA_CODEC_ENTRY(0x11069f88, "CHX001 HDMI/DP", patch_zx_hdmi),
+HDA_CODEC_ENTRY(0x11069f89, "CHX001 HDMI/DP", patch_zx_hdmi),
+HDA_CODEC_ENTRY(0x11069f8a, "CHX002 HDMI/DP", patch_zx_hdmi),
+HDA_CODEC_ENTRY(0x11069f8b, "CHX002 HDMI/DP", patch_zx_hdmi),
HDA_CODEC_ENTRY(0x80860054, "IbexPeak HDMI", patch_i915_cpt_hdmi),
HDA_CODEC_ENTRY(0x80862801, "Bearlake HDMI", patch_generic_hdmi),
HDA_CODEC_ENTRY(0x80862802, "Cantiga HDMI", patch_generic_hdmi),
@@ -3951,6 +3971,12 @@ HDA_CODEC_ENTRY(0x80862880, "CedarTrail HDMI", patch_generic_hdmi),
HDA_CODEC_ENTRY(0x80862882, "Valleyview2 HDMI", patch_i915_byt_hdmi),
HDA_CODEC_ENTRY(0x80862883, "Braswell HDMI", patch_i915_byt_hdmi),
HDA_CODEC_ENTRY(0x808629fb, "Crestline HDMI", patch_generic_hdmi),
+HDA_CODEC_ENTRY(0x1d179f86, "CND001 HDMI/DP", patch_generic_hdmi),
+HDA_CODEC_ENTRY(0x1d179f87, "CND001 HDMI/DP", patch_generic_hdmi),
+HDA_CODEC_ENTRY(0x1d179f88, "CHX001 HDMI/DP", patch_zx_hdmi),
+HDA_CODEC_ENTRY(0x1d179f89, "CHX001 HDMI/DP", patch_zx_hdmi),
+HDA_CODEC_ENTRY(0x1d179f8a, "CHX002 HDMI/DP", patch_zx_hdmi),
+HDA_CODEC_ENTRY(0x1d179f8b, "CHX002 HDMI/DP", patch_zx_hdmi),
/* special ID for generic HDMI */
HDA_CODEC_ENTRY(HDA_CODEC_ID_GENERIC_HDMI, "Generic HDMI", patch_generic_hdmi),
{} /* terminator */
--
2.20.1
1
0
25 Mar '21
Add support for the new PCI ID 0x1d17:0x3288 Zhaoxin SB HDAC.
The patch is scheduled to be submitted to the kernel mainline in 2021.
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
sound/pci/hda/hda_intel.c | 19 ++++++++++++++++++-
1 file changed, 18 insertions(+), 1 deletion(-)
diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
index 2cd8bfd5293b..67791114471c 100644
--- a/sound/pci/hda/hda_intel.c
+++ b/sound/pci/hda/hda_intel.c
@@ -250,7 +250,8 @@ MODULE_SUPPORTED_DEVICE("{{Intel, ICH6},"
"{VIA, VT8251},"
"{VIA, VT8237A},"
"{SiS, SIS966},"
- "{ULI, M5461}}");
+ "{ULI, M5461},"
+ "{ZX, ZhaoxinHDA}}");
MODULE_DESCRIPTION("Intel HDA driver");
#if defined(CONFIG_PM) && defined(CONFIG_VGA_SWITCHEROO)
@@ -281,6 +282,7 @@ enum {
AZX_DRIVER_CTX,
AZX_DRIVER_CTHDA,
AZX_DRIVER_CMEDIA,
+ AZX_DRIVER_ZHAOXIN,
AZX_DRIVER_GENERIC,
AZX_NUM_DRIVERS, /* keep this as last entry */
};
@@ -401,6 +403,7 @@ static char *driver_short_names[] = {
[AZX_DRIVER_CTX] = "HDA Creative",
[AZX_DRIVER_CTHDA] = "HDA Creative",
[AZX_DRIVER_CMEDIA] = "HDA C-Media",
+ [AZX_DRIVER_ZHAOXIN] = "HDA Zhaoxin",
[AZX_DRIVER_GENERIC] = "HD-Audio Generic",
};
@@ -1599,6 +1602,9 @@ static int check_position_fix(struct azx *chip, int fix)
dev_dbg(chip->card->dev, "Using FIFO position fix\n");
return POS_FIX_FIFO;
}
+ if (chip->driver_type == AZX_DRIVER_ZHAOXIN) {
+ return POS_FIX_VIACOMBO;
+ }
if (chip->driver_caps & AZX_DCAPS_POSFIX_LPIB) {
dev_dbg(chip->card->dev, "Using LPIB position fix\n");
return POS_FIX_LPIB;
@@ -1755,6 +1761,15 @@ static void azx_check_snoop_available(struct azx *chip)
snoop = false;
}
+ if (azx_get_snoop_type(chip) == AZX_SNOOP_TYPE_NONE &&
+ chip->driver_type == AZX_DRIVER_ZHAOXIN) {
+ u8 val1;
+ pci_read_config_byte(chip->pci, 0x42, &val1);
+ if (!(val1 & 0x80) && chip->pci->revision == 0x20) {
+ snoop = false;
+ }
+ }
+
if (chip->driver_caps & AZX_DCAPS_SNOOP_OFF)
snoop = false;
@@ -2811,6 +2826,8 @@ static const struct pci_device_id azx_ids[] = {
.class = PCI_CLASS_MULTIMEDIA_HD_AUDIO << 8,
.class_mask = 0xffffff,
.driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_HDMI },
+ /* Zhaoxin */
+ { PCI_DEVICE(0x1d17, 0x3288), .driver_data = AZX_DRIVER_ZHAOXIN },
{ 0, }
};
MODULE_DEVICE_TABLE(pci, azx_ids);
--
2.20.1
1
0
[PATCH kernel-4.19] x86/perf: Add hardware performance events support for Zhaoxin CPU.
by LeoLiu-oc 25 Mar '21
by LeoLiu-oc 25 Mar '21
25 Mar '21
mainline inclusion
from mainline-5.8
commit 3a4ac121c2cacbf97d493fa3bc42ead88657abe4
category: x86/perf
Add the generic Zhaoxin uncore PMU support.
--------------------------------
Zhaoxin CPUs provide facilities for monitoring performance via a PMU
(Performance Monitor Unit), but the functionality is unused so far.
Therefore, add support for the Zhaoxin PMU to make performance-related
hardware events available.
The PMU is mostly an Intel Architectural PerfMon-v2 with a novel
errata for the ZXC line. It supports the following events:
---------------------------------------------------------------------------------------
Event                     | Event  | Umask | Description
                          | Select |       |
---------------------------------------------------------------------------------------
cpu-cycles                |  82h   |  00h  | unhalted core clock
instructions              |  00h   |  00h  | number of instructions at retirement
cache-references          |  15h   |  05h  | number of fillq pushes at the current cycle
cache-misses              |  1ah   |  05h  | number of l2 misses pushed by fillq
branch-instructions       |  28h   |  00h  | number of branch instructions retired
branch-misses             |  29h   |  00h  | mispredicted branch instructions at retirement
bus-cycles                |  83h   |  00h  | unhalted bus clock
stalled-cycles-frontend   |  01h   |  01h  | increments each cycle the # of uops issued by the RAT to RS
stalled-cycles-backend    |  0fh   |  04h  | RS0/1/2/3/45 empty
L1-dcache-loads           |  68h   |  05h  | number of retire/commit loads
L1-dcache-load-misses     |  4bh   |  05h  | retired load uops whose data source followed an L1 miss
L1-dcache-stores          |  69h   |  06h  | number of retire/commit stores, no LEA
L1-dcache-store-misses    |  62h   |  05h  | cache lines in M state evicted out of L1D due to snoop HitM or dirty line replacement
L1-icache-loads           |  00h   |  03h  | number of L1I cache accesses for valid normal fetch, including un-cacheable accesses
L1-icache-load-misses     |  01h   |  03h  | number of L1I cache misses for valid normal fetch, including un-cacheable misses
L1-icache-prefetches      |  0ah   |  03h  | number of prefetches
L1-icache-prefetch-misses |  0bh   |  03h  | number of prefetch misses
dTLB-loads                |  68h   |  05h  | number of retire/commit loads
dTLB-load-misses          |  2ch   |  05h  | number of load operations that miss all level TLBs and cause a tablewalk
dTLB-stores               |  69h   |  06h  | number of retire/commit stores, no LEA
dTLB-store-misses         |  30h   |  05h  | number of store operations that miss all level TLBs and cause a tablewalk
dTLB-prefetches           |  64h   |  05h  | number of hardware PTE prefetch requests dispatched out of the prefetch FIFO
dTLB-prefetch-misses      |  65h   |  05h  | number of hardware PTE prefetch requests that miss the L1D data cache
iTLB-load                 |  00h   |  00h  | actually counts instructions
iTLB-load-misses          |  34h   |  05h  | number of code operations that miss all level TLBs and cause a tablewalk
---------------------------------------------------------------------------------------
Reported-by: kbuild test robot <lkp(a)intel.com>
Signed-off-by: CodyYao-oc <CodyYao-oc(a)zhaoxin.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz(a)infradead.org>
Link:
https://lkml.kernel.org/r/1586747669-4827-1-git-send-email-CodyYao-oc@zhaox…
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
arch/x86/events/Makefile | 2 +
arch/x86/events/core.c | 4 +
arch/x86/events/perf_event.h | 14 +-
arch/x86/events/zhaoxin/Makefile | 3 +
arch/x86/events/zhaoxin/core.c | 612 +++++++++++++
arch/x86/events/zhaoxin/uncore.c | 1101 ++++++++++++++++++++++++
arch/x86/events/zhaoxin/uncore.h | 308 +++++++
arch/x86/kernel/cpu/perfctr-watchdog.c | 8 +
8 files changed, 2051 insertions(+), 1 deletion(-)
create mode 100644 arch/x86/events/zhaoxin/Makefile
create mode 100644 arch/x86/events/zhaoxin/core.c
create mode 100644 arch/x86/events/zhaoxin/uncore.c
create mode 100644 arch/x86/events/zhaoxin/uncore.h
diff --git a/arch/x86/events/Makefile b/arch/x86/events/Makefile
index b8ccdb5c9244..ad4a7c789637 100644
--- a/arch/x86/events/Makefile
+++ b/arch/x86/events/Makefile
@@ -2,3 +2,5 @@ obj-y += core.o
obj-y += amd/
obj-$(CONFIG_X86_LOCAL_APIC) += msr.o
obj-$(CONFIG_CPU_SUP_INTEL) += intel/
+obj-$(CONFIG_CPU_SUP_CENTAUR) += zhaoxin/
+obj-$(CONFIG_CPU_SUP_ZHAOXIN) += zhaoxin/
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 8e8970dd1af1..640f85da2b34 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -1758,6 +1758,10 @@ static int __init init_hw_perf_events(void)
err = amd_pmu_init();
x86_pmu.name = "HYGON";
break;
+ case X86_VENDOR_ZHAOXIN:
+ case X86_VENDOR_CENTAUR:
+ err = zhaoxin_pmu_init();
+ break;
default:
err = -ENOTSUPP;
}
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index 05659c7b43d4..dd24cac3d5e5 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -565,9 +565,12 @@ struct x86_pmu {
struct event_constraint *event_constraints;
struct x86_pmu_quirk *quirks;
int perfctr_second_write;
- bool late_ack;
u64 (*limit_period)(struct perf_event *event, u64 l);
+ /* PMI handler bits */
+ unsigned int late_ack :1,
+ enabled_ack :1,
+ counter_freezing :1;
/*
* sysfs attrs
*/
@@ -1044,3 +1047,12 @@ static inline int is_ht_workaround_enabled(void)
return 0;
}
#endif /* CONFIG_CPU_SUP_INTEL */
+
+#if ((defined CONFIG_CPU_SUP_CENTAUR) || (defined CONFIG_CPU_SUP_ZHAOXIN))
+int zhaoxin_pmu_init(void);
+#else
+static inline int zhaoxin_pmu_init(void)
+{
+ return 0;
+}
+#endif /*CONFIG_CPU_SUP_CENTAUR or CONFIG_CPU_SUP_ZHAOXIN*/
diff --git a/arch/x86/events/zhaoxin/Makefile b/arch/x86/events/zhaoxin/Makefile
new file mode 100644
index 000000000000..767d6212bac1
--- /dev/null
+++ b/arch/x86/events/zhaoxin/Makefile
@@ -0,0 +1,3 @@
+# SPDX-License-Identifier: GPL-2.0
+obj-y += core.o
+obj-y += uncore.o
diff --git a/arch/x86/events/zhaoxin/core.c b/arch/x86/events/zhaoxin/core.c
new file mode 100644
index 000000000000..c2e5bdf3893d
--- /dev/null
+++ b/arch/x86/events/zhaoxin/core.c
@@ -0,0 +1,612 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Zhaoxin PMU;
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/stddef.h>
+#include <linux/types.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/export.h>
+#include <linux/nmi.h>
+
+#include <asm/cpufeature.h>
+#include <asm/hardirq.h>
+#include <asm/apic.h>
+
+#include "../perf_event.h"
+
+/*
+ * Zhaoxin PerfMon, used on zxc and later.
+ */
+static u64 zx_pmon_event_map[PERF_COUNT_HW_MAX] __read_mostly = {
+
+ [PERF_COUNT_HW_CPU_CYCLES] = 0x0082,
+ [PERF_COUNT_HW_INSTRUCTIONS] = 0x00c0,
+ [PERF_COUNT_HW_CACHE_REFERENCES] = 0x0515,
+ [PERF_COUNT_HW_CACHE_MISSES] = 0x051a,
+ [PERF_COUNT_HW_BUS_CYCLES] = 0x0083,
+};
+
+static struct event_constraint zxc_event_constraints[] __read_mostly = {
+
+ FIXED_EVENT_CONSTRAINT(0x0082, 1), /* unhalted core clock cycles */
+ EVENT_CONSTRAINT_END
+};
+
+static struct event_constraint zxd_event_constraints[] __read_mostly = {
+
+ FIXED_EVENT_CONSTRAINT(0x00c0, 0), /* retired instructions */
+ FIXED_EVENT_CONSTRAINT(0x0082, 1), /* unhalted core clock cycles */
+ FIXED_EVENT_CONSTRAINT(0x0083, 2), /* unhalted bus clock cycles */
+ EVENT_CONSTRAINT_END
+};
+
+static __initconst const u64 zxd_hw_cache_event_ids
+ [PERF_COUNT_HW_CACHE_MAX]
+ [PERF_COUNT_HW_CACHE_OP_MAX]
+ [PERF_COUNT_HW_CACHE_RESULT_MAX] = {
+[C(L1D)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = 0x0042,
+ [C(RESULT_MISS)] = 0x0538,
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = 0x0043,
+ [C(RESULT_MISS)] = 0x0562,
+ },
+ [C(OP_PREFETCH)] = {
+ [C(RESULT_ACCESS)] = -1,
+ [C(RESULT_MISS)] = -1,
+ },
+},
+[C(L1I)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = 0x0300,
+ [C(RESULT_MISS)] = 0x0301,
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = -1,
+ [C(RESULT_MISS)] = -1,
+ },
+ [C(OP_PREFETCH)] = {
+ [C(RESULT_ACCESS)] = 0x030a,
+ [C(RESULT_MISS)] = 0x030b,
+ },
+},
+[C(LL)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = -1,
+ [C(RESULT_MISS)] = -1,
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = -1,
+ [C(RESULT_MISS)] = -1,
+ },
+ [C(OP_PREFETCH)] = {
+ [C(RESULT_ACCESS)] = -1,
+ [C(RESULT_MISS)] = -1,
+ },
+},
+[C(DTLB)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = 0x0042,
+ [C(RESULT_MISS)] = 0x052c,
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = 0x0043,
+ [C(RESULT_MISS)] = 0x0530,
+ },
+ [C(OP_PREFETCH)] = {
+ [C(RESULT_ACCESS)] = 0x0564,
+ [C(RESULT_MISS)] = 0x0565,
+ },
+},
+[C(ITLB)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = 0x00c0,
+ [C(RESULT_MISS)] = 0x0534,
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = -1,
+ [C(RESULT_MISS)] = -1,
+ },
+ [C(OP_PREFETCH)] = {
+ [C(RESULT_ACCESS)] = -1,
+ [C(RESULT_MISS)] = -1,
+ },
+},
+[C(BPU)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = 0x0700,
+ [C(RESULT_MISS)] = 0x0709,
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = -1,
+ [C(RESULT_MISS)] = -1,
+ },
+ [C(OP_PREFETCH)] = {
+ [C(RESULT_ACCESS)] = -1,
+ [C(RESULT_MISS)] = -1,
+ },
+},
+[C(NODE)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = -1,
+ [C(RESULT_MISS)] = -1,
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = -1,
+ [C(RESULT_MISS)] = -1,
+ },
+ [C(OP_PREFETCH)] = {
+ [C(RESULT_ACCESS)] = -1,
+ [C(RESULT_MISS)] = -1,
+ },
+},
+};
+
+static __initconst const u64 zxe_hw_cache_event_ids
+ [PERF_COUNT_HW_CACHE_MAX]
+ [PERF_COUNT_HW_CACHE_OP_MAX]
+ [PERF_COUNT_HW_CACHE_RESULT_MAX] = {
+[C(L1D)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = 0x0568,
+ [C(RESULT_MISS)] = 0x054b,
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = 0x0669,
+ [C(RESULT_MISS)] = 0x0562,
+ },
+ [C(OP_PREFETCH)] = {
+ [C(RESULT_ACCESS)] = -1,
+ [C(RESULT_MISS)] = -1,
+ },
+},
+[C(L1I)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = 0x0300,
+ [C(RESULT_MISS)] = 0x0301,
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = -1,
+ [C(RESULT_MISS)] = -1,
+ },
+ [C(OP_PREFETCH)] = {
+ [C(RESULT_ACCESS)] = 0x030a,
+ [C(RESULT_MISS)] = 0x030b,
+ },
+},
+[C(LL)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = 0x0,
+ [C(RESULT_MISS)] = 0x0,
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = 0x0,
+ [C(RESULT_MISS)] = 0x0,
+ },
+ [C(OP_PREFETCH)] = {
+ [C(RESULT_ACCESS)] = 0x0,
+ [C(RESULT_MISS)] = 0x0,
+ },
+},
+[C(DTLB)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = 0x0568,
+ [C(RESULT_MISS)] = 0x052c,
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = 0x0669,
+ [C(RESULT_MISS)] = 0x0530,
+ },
+ [C(OP_PREFETCH)] = {
+ [C(RESULT_ACCESS)] = 0x0564,
+ [C(RESULT_MISS)] = 0x0565,
+ },
+},
+[C(ITLB)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = 0x00c0,
+ [C(RESULT_MISS)] = 0x0534,
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = -1,
+ [C(RESULT_MISS)] = -1,
+ },
+ [C(OP_PREFETCH)] = {
+ [C(RESULT_ACCESS)] = -1,
+ [C(RESULT_MISS)] = -1,
+ },
+},
+[C(BPU)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = 0x0028,
+ [C(RESULT_MISS)] = 0x0029,
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = -1,
+ [C(RESULT_MISS)] = -1,
+ },
+ [C(OP_PREFETCH)] = {
+ [C(RESULT_ACCESS)] = -1,
+ [C(RESULT_MISS)] = -1,
+ },
+},
+[C(NODE)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = -1,
+ [C(RESULT_MISS)] = -1,
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = -1,
+ [C(RESULT_MISS)] = -1,
+ },
+ [C(OP_PREFETCH)] = {
+ [C(RESULT_ACCESS)] = -1,
+ [C(RESULT_MISS)] = -1,
+ },
+},
+};
+
+static void zhaoxin_pmu_disable_all(void)
+{
+ wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+}
+
+static void zhaoxin_pmu_enable_all(int added)
+{
+ wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, x86_pmu.intel_ctrl);
+}
+
+static inline u64 zhaoxin_pmu_get_status(void)
+{
+ u64 status;
+
+ rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, status);
+
+ return status;
+}
+
+static inline void zhaoxin_pmu_ack_status(u64 ack)
+{
+ wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, ack);
+}
+
+static inline void zxc_pmu_ack_status(u64 ack)
+{
+ /*
+ * ZXC needs global control enabled in order to clear status bits.
+ */
+ zhaoxin_pmu_enable_all(0);
+ zhaoxin_pmu_ack_status(ack);
+ zhaoxin_pmu_disable_all();
+}
+
+static void zhaoxin_pmu_disable_fixed(struct hw_perf_event *hwc)
+{
+ int idx = hwc->idx - INTEL_PMC_IDX_FIXED;
+ u64 ctrl_val, mask;
+
+ mask = 0xfULL << (idx * 4);
+
+ rdmsrl(hwc->config_base, ctrl_val);
+ ctrl_val &= ~mask;
+ wrmsrl(hwc->config_base, ctrl_val);
+}
+
+static void zhaoxin_pmu_disable_event(struct perf_event *event)
+{
+ struct hw_perf_event *hwc = &event->hw;
+
+ if (unlikely(hwc->config_base == MSR_ARCH_PERFMON_FIXED_CTR_CTRL)) {
+ zhaoxin_pmu_disable_fixed(hwc);
+ return;
+ }
+
+ x86_pmu_disable_event(event);
+}
+
+static void zhaoxin_pmu_enable_fixed(struct hw_perf_event *hwc)
+{
+ int idx = hwc->idx - INTEL_PMC_IDX_FIXED;
+ u64 ctrl_val, bits, mask;
+
+ /*
+ * Enable IRQ generation (0x8),
+ * and enable ring-3 counting (0x2) and ring-0 counting (0x1)
+ * if requested:
+ */
+ bits = 0x8ULL;
+ if (hwc->config & ARCH_PERFMON_EVENTSEL_USR)
+ bits |= 0x2;
+ if (hwc->config & ARCH_PERFMON_EVENTSEL_OS)
+ bits |= 0x1;
+
+ bits <<= (idx * 4);
+ mask = 0xfULL << (idx * 4);
+
+ rdmsrl(hwc->config_base, ctrl_val);
+ ctrl_val &= ~mask;
+ ctrl_val |= bits;
+ wrmsrl(hwc->config_base, ctrl_val);
+}
+
+static void zhaoxin_pmu_enable_event(struct perf_event *event)
+{
+ struct hw_perf_event *hwc = &event->hw;
+
+ if (unlikely(hwc->config_base == MSR_ARCH_PERFMON_FIXED_CTR_CTRL)) {
+ zhaoxin_pmu_enable_fixed(hwc);
+ return;
+ }
+
+ __x86_pmu_enable_event(hwc, ARCH_PERFMON_EVENTSEL_ENABLE);
+}
+
+/*
+ * This handler is triggered by the local APIC, so the APIC IRQ handling
+ * rules apply:
+ */
+static int zhaoxin_pmu_handle_irq(struct pt_regs *regs)
+{
+ struct perf_sample_data data;
+ struct cpu_hw_events *cpuc;
+ int handled = 0;
+ u64 status;
+ int bit;
+
+ cpuc = this_cpu_ptr(&cpu_hw_events);
+ apic_write(APIC_LVTPC, APIC_DM_NMI);
+ zhaoxin_pmu_disable_all();
+ status = zhaoxin_pmu_get_status();
+ if (!status)
+ goto done;
+
+again:
+ if (x86_pmu.enabled_ack)
+ zxc_pmu_ack_status(status);
+ else
+ zhaoxin_pmu_ack_status(status);
+
+ inc_irq_stat(apic_perf_irqs);
+
+ /*
+ * CondChgd bit 63 doesn't mean any overflow status. Ignore
+ * and clear the bit.
+ */
+ if (__test_and_clear_bit(63, (unsigned long *)&status)) {
+ if (!status)
+ goto done;
+ }
+
+ for_each_set_bit(bit, (unsigned long *)&status, X86_PMC_IDX_MAX) {
+ struct perf_event *event = cpuc->events[bit];
+
+ handled++;
+
+ if (!test_bit(bit, cpuc->active_mask))
+ continue;
+
+ x86_perf_event_update(event);
+ perf_sample_data_init(&data, 0, event->hw.last_period);
+
+ if (!x86_perf_event_set_period(event))
+ continue;
+
+ if (perf_event_overflow(event, &data, regs))
+ x86_pmu_stop(event, 0);
+ }
+
+ /*
+ * Repeat if there is more work to be done:
+ */
+ status = zhaoxin_pmu_get_status();
+ if (status)
+ goto again;
+
+done:
+ zhaoxin_pmu_enable_all(0);
+ return handled;
+}
+
+static u64 zhaoxin_pmu_event_map(int hw_event)
+{
+ return zx_pmon_event_map[hw_event];
+}
+
+static struct event_constraint *
+zhaoxin_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
+ struct perf_event *event)
+{
+ struct event_constraint *c;
+
+ if (x86_pmu.event_constraints) {
+ for_each_event_constraint(c, x86_pmu.event_constraints) {
+ if ((event->hw.config & c->cmask) == c->code)
+ return c;
+ }
+ }
+
+ return &unconstrained;
+}
+
+PMU_FORMAT_ATTR(event, "config:0-7");
+PMU_FORMAT_ATTR(umask, "config:8-15");
+PMU_FORMAT_ATTR(edge, "config:18");
+PMU_FORMAT_ATTR(inv, "config:23");
+PMU_FORMAT_ATTR(cmask, "config:24-31");
+
+static struct attribute *zx_arch_formats_attr[] = {
+ &format_attr_event.attr,
+ &format_attr_umask.attr,
+ &format_attr_edge.attr,
+ &format_attr_inv.attr,
+ &format_attr_cmask.attr,
+ NULL,
+};
+
+static ssize_t zhaoxin_event_sysfs_show(char *page, u64 config)
+{
+ u64 event = (config & ARCH_PERFMON_EVENTSEL_EVENT);
+
+ return x86_event_sysfs_show(page, config, event);
+}
+
+static const struct x86_pmu zhaoxin_pmu __initconst = {
+ .name = "zhaoxin_pmu",
+ .handle_irq = zhaoxin_pmu_handle_irq,
+ .disable_all = zhaoxin_pmu_disable_all,
+ .enable_all = zhaoxin_pmu_enable_all,
+ .enable = zhaoxin_pmu_enable_event,
+ .disable = zhaoxin_pmu_disable_event,
+ .hw_config = x86_pmu_hw_config,
+ .schedule_events = x86_schedule_events,
+ .eventsel = MSR_ARCH_PERFMON_EVENTSEL0,
+ .perfctr = MSR_ARCH_PERFMON_PERFCTR0,
+ .event_map = zhaoxin_pmu_event_map,
+ .max_events = ARRAY_SIZE(zx_pmon_event_map),
+ .apic = 1,
+ /*
+ * For zxd/zxe, read/write operation for PMCx MSR is 48 bits.
+ */
+ .max_period = (1ULL << 47) - 1,
+ .get_event_constraints = zhaoxin_get_event_constraints,
+
+ .format_attrs = zx_arch_formats_attr,
+ .events_sysfs_show = zhaoxin_event_sysfs_show,
+};
+
+static const struct { int id; char *name; } zx_arch_events_map[] __initconst = {
+ { PERF_COUNT_HW_CPU_CYCLES, "cpu cycles" },
+ { PERF_COUNT_HW_INSTRUCTIONS, "instructions" },
+ { PERF_COUNT_HW_BUS_CYCLES, "bus cycles" },
+ { PERF_COUNT_HW_CACHE_REFERENCES, "cache references" },
+ { PERF_COUNT_HW_CACHE_MISSES, "cache misses" },
+ { PERF_COUNT_HW_BRANCH_INSTRUCTIONS, "branch instructions" },
+ { PERF_COUNT_HW_BRANCH_MISSES, "branch misses" },
+};
+
+static __init void zhaoxin_arch_events_quirk(void)
+{
+ int bit;
+
+ /* disable events reported as not present by CPUID */
+ for_each_set_bit(bit, x86_pmu.events_mask, ARRAY_SIZE(zx_arch_events_map)) {
+ zx_pmon_event_map[zx_arch_events_map[bit].id] = 0;
+ pr_warn("CPUID marked event: \'%s\' unavailable\n",
+ zx_arch_events_map[bit].name);
+ }
+}
+
+__init int zhaoxin_pmu_init(void)
+{
+ union cpuid10_edx edx;
+ union cpuid10_eax eax;
+ union cpuid10_ebx ebx;
+ struct event_constraint *c;
+ unsigned int unused;
+ int version;
+
+ pr_info("Welcome to pmu!\n");
+
+ /*
+ * Check whether the Architectural PerfMon supports
+ * hw_event or not.
+ */
+ cpuid(10, &eax.full, &ebx.full, &unused, &edx.full);
+
+ if (eax.split.mask_length < ARCH_PERFMON_EVENTS_COUNT - 1)
+ return -ENODEV;
+
+ version = eax.split.version_id;
+ if (version != 2)
+ return -ENODEV;
+
+ x86_pmu = zhaoxin_pmu;
+ pr_info("Version check pass!\n");
+
+ x86_pmu.version = version;
+ x86_pmu.num_counters = eax.split.num_counters;
+ x86_pmu.cntval_bits = eax.split.bit_width;
+ x86_pmu.cntval_mask = (1ULL << eax.split.bit_width) - 1;
+ x86_pmu.events_maskl = ebx.full;
+ x86_pmu.events_mask_len = eax.split.mask_length;
+
+ x86_pmu.num_counters_fixed = edx.split.num_counters_fixed;
+ x86_add_quirk(zhaoxin_arch_events_quirk);
+
+ switch (boot_cpu_data.x86) {
+ case 0x06:
+ if (boot_cpu_data.x86_model == 0x0f || boot_cpu_data.x86_model == 0x19) {
+
+ x86_pmu.max_period = x86_pmu.cntval_mask >> 1;
+
+ /* Clearing status works only if the global control is enabled on zxc. */
+ x86_pmu.enabled_ack = 1;
+
+ x86_pmu.event_constraints = zxc_event_constraints;
+ zx_pmon_event_map[PERF_COUNT_HW_INSTRUCTIONS] = 0;
+ zx_pmon_event_map[PERF_COUNT_HW_CACHE_REFERENCES] = 0;
+ zx_pmon_event_map[PERF_COUNT_HW_CACHE_MISSES] = 0;
+ zx_pmon_event_map[PERF_COUNT_HW_BUS_CYCLES] = 0;
+
+ pr_cont("C events, ");
+ break;
+ }
+ return -ENODEV;
+
+ case 0x07:
+ zx_pmon_event_map[PERF_COUNT_HW_STALLED_CYCLES_FRONTEND] =
+ X86_CONFIG(.event = 0x01, .umask = 0x01, .inv = 0x01, .cmask = 0x01);
+
+ zx_pmon_event_map[PERF_COUNT_HW_STALLED_CYCLES_BACKEND] =
+ X86_CONFIG(.event = 0x0f, .umask = 0x04, .inv = 0, .cmask = 0);
+
+ switch (boot_cpu_data.x86_model) {
+ case 0x1b:
+ memcpy(hw_cache_event_ids, zxd_hw_cache_event_ids,
+ sizeof(hw_cache_event_ids));
+
+ x86_pmu.event_constraints = zxd_event_constraints;
+
+ zx_pmon_event_map[PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = 0x0700;
+ zx_pmon_event_map[PERF_COUNT_HW_BRANCH_MISSES] = 0x0709;
+
+ pr_cont("D events, ");
+ break;
+ case 0x3b:
+ memcpy(hw_cache_event_ids, zxe_hw_cache_event_ids,
+ sizeof(hw_cache_event_ids));
+
+ x86_pmu.event_constraints = zxd_event_constraints;
+
+ zx_pmon_event_map[PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = 0x0028;
+ zx_pmon_event_map[PERF_COUNT_HW_BRANCH_MISSES] = 0x0029;
+
+ pr_cont("E events, ");
+ break;
+ default:
+ return -ENODEV;
+ }
+ break;
+
+ default:
+ return -ENODEV;
+ }
+
+ x86_pmu.intel_ctrl = (1 << (x86_pmu.num_counters)) - 1;
+ x86_pmu.intel_ctrl |= ((1LL << x86_pmu.num_counters_fixed)-1) << INTEL_PMC_IDX_FIXED;
+
+ if (x86_pmu.event_constraints) {
+ for_each_event_constraint(c, x86_pmu.event_constraints) {
+ c->idxmsk64 |= (1ULL << x86_pmu.num_counters) - 1;
+ c->weight += x86_pmu.num_counters;
+ }
+ }
+
+ return 0;
+}
diff --git a/arch/x86/events/zhaoxin/uncore.c b/arch/x86/events/zhaoxin/uncore.c
new file mode 100644
index 000000000000..4c4ea01d23c8
--- /dev/null
+++ b/arch/x86/events/zhaoxin/uncore.c
@@ -0,0 +1,1101 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#include <linux/module.h>
+
+#include <asm/cpu_device_id.h>
+#include "uncore.h"
+
+static struct zhaoxin_uncore_type *empty_uncore[] = { NULL, };
+static struct zhaoxin_uncore_type **uncore_msr_uncores = empty_uncore;
+
+/* mask of cpus that collect uncore events */
+static cpumask_t uncore_cpu_mask;
+
+/* constraint for the fixed counter */
+static struct event_constraint uncore_constraint_fixed =
+ EVENT_CONSTRAINT(~0ULL, 1 << UNCORE_PMC_IDX_FIXED, ~0ULL);
+
+static int max_packages;
+
+/* CHX event control */
+#define CHX_UNC_CTL_EV_SEL_MASK 0x000000ff
+#define CHX_UNC_CTL_UMASK_MASK 0x0000ff00
+#define CHX_UNC_CTL_EDGE_DET (1 << 18)
+#define CHX_UNC_CTL_EN (1 << 22)
+#define CHX_UNC_CTL_INVERT (1 << 23)
+#define CHX_UNC_CTL_CMASK_MASK 0xff000000
+#define CHX_UNC_FIXED_CTR_CTL_EN (1 << 0)
+
+#define CHX_UNC_RAW_EVENT_MASK (CHX_UNC_CTL_EV_SEL_MASK | \
+ CHX_UNC_CTL_UMASK_MASK | \
+ CHX_UNC_CTL_EDGE_DET | \
+ CHX_UNC_CTL_INVERT | \
+ CHX_UNC_CTL_CMASK_MASK)
+
+/* CHX global control register */
+#define CHX_UNC_PERF_GLOBAL_CTL 0x391
+#define CHX_UNC_FIXED_CTR 0x394
+#define CHX_UNC_FIXED_CTR_CTRL 0x395
+
+/* CHX uncore global control */
+#define CHX_UNC_GLOBAL_CTL_EN_PC_ALL ((1ULL << 4) - 1)
+#define CHX_UNC_GLOBAL_CTL_EN_FC (1ULL << 32)
+
+/* CHX uncore register */
+#define CHX_UNC_PERFEVTSEL0 0x3c0
+#define CHX_UNC_UNCORE_PMC0 0x3b0
+
+DEFINE_UNCORE_FORMAT_ATTR(event, event, "config:0-7");
+DEFINE_UNCORE_FORMAT_ATTR(umask, umask, "config:8-15");
+DEFINE_UNCORE_FORMAT_ATTR(edge, edge, "config:18");
+DEFINE_UNCORE_FORMAT_ATTR(inv, inv, "config:23");
+DEFINE_UNCORE_FORMAT_ATTR(cmask8, cmask, "config:24-31");
+
+ssize_t zx_uncore_event_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
+{
+ struct uncore_event_desc *event =
+ container_of(attr, struct uncore_event_desc, attr);
+ return sprintf(buf, "%s", event->config);
+}
+
+/* chx uncore support */
+static void chx_uncore_msr_disable_event(struct zhaoxin_uncore_box *box, struct perf_event *event)
+{
+ wrmsrl(event->hw.config_base, 0);
+}
+
+static u64 uncore_msr_read_counter(struct zhaoxin_uncore_box *box, struct perf_event *event)
+{
+ u64 count;
+
+ rdmsrl(event->hw.event_base, count);
+
+ return count;
+}
+
+static void chx_uncore_msr_disable_box(struct zhaoxin_uncore_box *box)
+{
+ wrmsrl(CHX_UNC_PERF_GLOBAL_CTL, 0);
+}
+
+static void chx_uncore_msr_enable_box(struct zhaoxin_uncore_box *box)
+{
+ wrmsrl(CHX_UNC_PERF_GLOBAL_CTL, CHX_UNC_GLOBAL_CTL_EN_PC_ALL | CHX_UNC_GLOBAL_CTL_EN_FC);
+}
+
+static void chx_uncore_msr_enable_event(struct zhaoxin_uncore_box *box, struct perf_event *event)
+{
+ struct hw_perf_event *hwc = &event->hw;
+
+ if (hwc->idx < UNCORE_PMC_IDX_FIXED)
+ wrmsrl(hwc->config_base, hwc->config | CHX_UNC_CTL_EN);
+ else
+ wrmsrl(hwc->config_base, CHX_UNC_FIXED_CTR_CTL_EN);
+}
+
+static struct attribute *chx_uncore_formats_attr[] = {
+ &format_attr_event.attr,
+ &format_attr_umask.attr,
+ &format_attr_edge.attr,
+ &format_attr_inv.attr,
+ &format_attr_cmask8.attr,
+ NULL,
+};
+
+static struct attribute_group chx_uncore_format_group = {
+ .name = "format",
+ .attrs = chx_uncore_formats_attr,
+};
+
+static struct uncore_event_desc chx_uncore_events[] = {
+ { /* end: all zeroes */ },
+};
+
+static struct zhaoxin_uncore_ops chx_uncore_msr_ops = {
+ .disable_box = chx_uncore_msr_disable_box,
+ .enable_box = chx_uncore_msr_enable_box,
+ .disable_event = chx_uncore_msr_disable_event,
+ .enable_event = chx_uncore_msr_enable_event,
+ .read_counter = uncore_msr_read_counter,
+};
+
+static struct zhaoxin_uncore_type chx_uncore_box = {
+ .name = "",
+ .num_counters = 4,
+ .num_boxes = 1,
+ .perf_ctr_bits = 48,
+ .fixed_ctr_bits = 48,
+ .event_ctl = CHX_UNC_PERFEVTSEL0,
+ .perf_ctr = CHX_UNC_UNCORE_PMC0,
+ .fixed_ctr = CHX_UNC_FIXED_CTR,
+ .fixed_ctl = CHX_UNC_FIXED_CTR_CTRL,
+ .event_mask = CHX_UNC_RAW_EVENT_MASK,
+ .event_descs = chx_uncore_events,
+ .ops = &chx_uncore_msr_ops,
+ .format_group = &chx_uncore_format_group,
+};
+
+static struct zhaoxin_uncore_type *chx_msr_uncores[] = {
+ &chx_uncore_box,
+ NULL,
+};
+
+static struct zhaoxin_uncore_box *uncore_pmu_to_box(struct zhaoxin_uncore_pmu *pmu, int cpu)
+{
+ unsigned int package_id = topology_logical_package_id(cpu);
+
+ /*
+ * The unsigned check also catches the '-1' return value for non
+ * existent mappings in the topology map.
+ */
+ return package_id < max_packages ? pmu->boxes[package_id] : NULL;
+}
+
+static void uncore_assign_hw_event(struct zhaoxin_uncore_box *box,
+ struct perf_event *event, int idx)
+{
+ struct hw_perf_event *hwc = &event->hw;
+
+ hwc->idx = idx;
+ hwc->last_tag = ++box->tags[idx];
+
+ if (uncore_pmc_fixed(hwc->idx)) {
+ hwc->event_base = uncore_fixed_ctr(box);
+ hwc->config_base = uncore_fixed_ctl(box);
+ return;
+ }
+
+ hwc->config_base = uncore_event_ctl(box, hwc->idx);
+ hwc->event_base = uncore_perf_ctr(box, hwc->idx);
+}
+
+void uncore_perf_event_update(struct zhaoxin_uncore_box *box, struct perf_event *event)
+{
+ u64 prev_count, new_count, delta;
+ int shift;
+
+ if (uncore_pmc_fixed(event->hw.idx))
+ shift = 64 - uncore_fixed_ctr_bits(box);
+ else
+ shift = 64 - uncore_perf_ctr_bits(box);
+
+ /* the hrtimer might modify the previous event value */
+again:
+ prev_count = local64_read(&event->hw.prev_count);
+ new_count = uncore_read_counter(box, event);
+ if (local64_xchg(&event->hw.prev_count, new_count) != prev_count)
+ goto again;
+
+ delta = (new_count << shift) - (prev_count << shift);
+ delta >>= shift;
+
+ local64_add(delta, &event->count);
+}
+
+static enum hrtimer_restart uncore_pmu_hrtimer(struct hrtimer *hrtimer)
+{
+ struct zhaoxin_uncore_box *box;
+ struct perf_event *event;
+ unsigned long flags;
+ int bit;
+
+ box = container_of(hrtimer, struct zhaoxin_uncore_box, hrtimer);
+ if (!box->n_active || box->cpu != smp_processor_id())
+ return HRTIMER_NORESTART;
+ /*
+ * disable local interrupts to prevent uncore_pmu_event_start/stop
+ * from interrupting the update process
+ */
+ local_irq_save(flags);
+
+ /*
+ * handle boxes with an active event list as opposed to active
+ * counters
+ */
+ list_for_each_entry(event, &box->active_list, active_entry) {
+ uncore_perf_event_update(box, event);
+ }
+
+ for_each_set_bit(bit, box->active_mask, UNCORE_PMC_IDX_MAX)
+ uncore_perf_event_update(box, box->events[bit]);
+
+ local_irq_restore(flags);
+
+ hrtimer_forward_now(hrtimer, ns_to_ktime(box->hrtimer_duration));
+ return HRTIMER_RESTART;
+}
+
+static void uncore_pmu_start_hrtimer(struct zhaoxin_uncore_box *box)
+{
+ hrtimer_start(&box->hrtimer, ns_to_ktime(box->hrtimer_duration),
+ HRTIMER_MODE_REL_PINNED);
+}
+
+static void uncore_pmu_cancel_hrtimer(struct zhaoxin_uncore_box *box)
+{
+ hrtimer_cancel(&box->hrtimer);
+}
+
+static void uncore_pmu_init_hrtimer(struct zhaoxin_uncore_box *box)
+{
+ hrtimer_init(&box->hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+ box->hrtimer.function = uncore_pmu_hrtimer;
+}
+
+static struct zhaoxin_uncore_box *uncore_alloc_box(struct zhaoxin_uncore_type *type,
+ int node)
+{
+ int i, size, numshared = type->num_shared_regs;
+ struct zhaoxin_uncore_box *box;
+
+ size = sizeof(*box) + numshared * sizeof(struct zhaoxin_uncore_extra_reg);
+
+ box = kzalloc_node(size, GFP_KERNEL, node);
+ if (!box)
+ return NULL;
+
+ for (i = 0; i < numshared; i++)
+ raw_spin_lock_init(&box->shared_regs[i].lock);
+
+ uncore_pmu_init_hrtimer(box);
+ box->cpu = -1;
+ box->package_id = -1;
+
+ /* set default hrtimer timeout */
+ box->hrtimer_duration = UNCORE_PMU_HRTIMER_INTERVAL;
+
+ INIT_LIST_HEAD(&box->active_list);
+
+ return box;
+}
+
+static bool is_box_event(struct zhaoxin_uncore_box *box, struct perf_event *event)
+{
+ return &box->pmu->pmu == event->pmu;
+}
+
+static struct event_constraint *
+uncore_get_event_constraint(struct zhaoxin_uncore_box *box, struct perf_event *event)
+{
+ struct zhaoxin_uncore_type *type = box->pmu->type;
+ struct event_constraint *c;
+
+ if (type->ops->get_constraint) {
+ c = type->ops->get_constraint(box, event);
+ if (c)
+ return c;
+ }
+
+ if (event->attr.config == UNCORE_FIXED_EVENT)
+ return &uncore_constraint_fixed;
+
+ if (type->constraints) {
+ for_each_event_constraint(c, type->constraints) {
+ if ((event->hw.config & c->cmask) == c->code)
+ return c;
+ }
+ }
+
+ return &type->unconstrainted;
+}
+
+static void uncore_put_event_constraint(struct zhaoxin_uncore_box *box,
+ struct perf_event *event)
+{
+ if (box->pmu->type->ops->put_constraint)
+ box->pmu->type->ops->put_constraint(box, event);
+}
+
+static int uncore_assign_events(struct zhaoxin_uncore_box *box, int assign[], int n)
+{
+ unsigned long used_mask[BITS_TO_LONGS(UNCORE_PMC_IDX_MAX)];
+ struct event_constraint *c;
+ int i, wmin, wmax, ret = 0;
+ struct hw_perf_event *hwc;
+
+ bitmap_zero(used_mask, UNCORE_PMC_IDX_MAX);
+
+ for (i = 0, wmin = UNCORE_PMC_IDX_MAX, wmax = 0; i < n; i++) {
+ c = uncore_get_event_constraint(box, box->event_list[i]);
+ box->event_constraint[i] = c;
+ wmin = min(wmin, c->weight);
+ wmax = max(wmax, c->weight);
+ }
+
+ /* fastpath, try to reuse previous register */
+ for (i = 0; i < n; i++) {
+ hwc = &box->event_list[i]->hw;
+ c = box->event_constraint[i];
+
+ /* never assigned */
+ if (hwc->idx == -1)
+ break;
+
+ /* constraint still honored */
+ if (!test_bit(hwc->idx, c->idxmsk))
+ break;
+
+ /* not already used */
+ if (test_bit(hwc->idx, used_mask))
+ break;
+
+ __set_bit(hwc->idx, used_mask);
+ if (assign)
+ assign[i] = hwc->idx;
+ }
+ /* slow path */
+ if (i != n)
+ ret = perf_assign_events(box->event_constraint, n,
+ wmin, wmax, n, assign);
+
+ if (!assign || ret) {
+ for (i = 0; i < n; i++)
+ uncore_put_event_constraint(box, box->event_list[i]);
+ }
+ return ret ? -EINVAL : 0;
+}
+
+static void uncore_pmu_event_start(struct perf_event *event, int flags)
+{
+ struct zhaoxin_uncore_box *box = uncore_event_to_box(event);
+ int idx = event->hw.idx;
+
+
+ if (WARN_ON_ONCE(idx == -1 || idx >= UNCORE_PMC_IDX_MAX))
+ return;
+
+ if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
+ return;
+
+ event->hw.state = 0;
+ box->events[idx] = event;
+ box->n_active++;
+ __set_bit(idx, box->active_mask);
+
+ local64_set(&event->hw.prev_count, uncore_read_counter(box, event));
+ uncore_enable_event(box, event);
+
+ if (box->n_active == 1) {
+ uncore_enable_box(box);
+ uncore_pmu_start_hrtimer(box);
+ }
+}
+
+static void uncore_pmu_event_stop(struct perf_event *event, int flags)
+{
+ struct zhaoxin_uncore_box *box = uncore_event_to_box(event);
+ struct hw_perf_event *hwc = &event->hw;
+
+ if (__test_and_clear_bit(hwc->idx, box->active_mask)) {
+ uncore_disable_event(box, event);
+ box->n_active--;
+ box->events[hwc->idx] = NULL;
+ WARN_ON_ONCE(hwc->state & PERF_HES_STOPPED);
+ hwc->state |= PERF_HES_STOPPED;
+
+ if (box->n_active == 0) {
+ uncore_disable_box(box);
+ uncore_pmu_cancel_hrtimer(box);
+ }
+ }
+
+ if ((flags & PERF_EF_UPDATE) && !(hwc->state & PERF_HES_UPTODATE)) {
+ /*
+ * Drain the remaining delta count out of an event
+ * that we are disabling:
+ */
+ uncore_perf_event_update(box, event);
+ hwc->state |= PERF_HES_UPTODATE;
+ }
+}
+
+static int
+uncore_collect_events(struct zhaoxin_uncore_box *box, struct perf_event *leader,
+ bool dogrp)
+{
+ struct perf_event *event;
+ int n, max_count;
+
+ max_count = box->pmu->type->num_counters;
+ if (box->pmu->type->fixed_ctl)
+ max_count++;
+
+ if (box->n_events >= max_count)
+ return -EINVAL;
+
+ n = box->n_events;
+
+ if (is_box_event(box, leader)) {
+ box->event_list[n] = leader;
+ n++;
+ }
+
+ if (!dogrp)
+ return n;
+
+ for_each_sibling_event(event, leader) {
+ if (!is_box_event(box, event) ||
+ event->state <= PERF_EVENT_STATE_OFF)
+ continue;
+
+ if (n >= max_count)
+ return -EINVAL;
+
+ box->event_list[n] = event;
+ n++;
+ }
+ return n;
+}
+
+static int uncore_pmu_event_add(struct perf_event *event, int flags)
+{
+ struct zhaoxin_uncore_box *box = uncore_event_to_box(event);
+ struct hw_perf_event *hwc = &event->hw;
+ int assign[UNCORE_PMC_IDX_MAX];
+ int i, n, ret;
+
+ if (!box)
+ return -ENODEV;
+
+ ret = n = uncore_collect_events(box, event, false);
+ if (ret < 0)
+ return ret;
+
+ hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
+ if (!(flags & PERF_EF_START))
+ hwc->state |= PERF_HES_ARCH;
+
+ ret = uncore_assign_events(box, assign, n);
+ if (ret)
+ return ret;
+
+ /* save events moving to new counters */
+ for (i = 0; i < box->n_events; i++) {
+ event = box->event_list[i];
+ hwc = &event->hw;
+
+ if (hwc->idx == assign[i] &&
+ hwc->last_tag == box->tags[assign[i]])
+ continue;
+ /*
+ * Ensure we don't accidentally enable a stopped
+ * counter simply because we rescheduled.
+ */
+ if (hwc->state & PERF_HES_STOPPED)
+ hwc->state |= PERF_HES_ARCH;
+
+ uncore_pmu_event_stop(event, PERF_EF_UPDATE);
+ }
+
+ /* reprogram moved events into new counters */
+ for (i = 0; i < n; i++) {
+ event = box->event_list[i];
+ hwc = &event->hw;
+
+ if (hwc->idx != assign[i] ||
+ hwc->last_tag != box->tags[assign[i]])
+ uncore_assign_hw_event(box, event, assign[i]);
+ else if (i < box->n_events)
+ continue;
+
+ if (hwc->state & PERF_HES_ARCH)
+ continue;
+
+ uncore_pmu_event_start(event, 0);
+ }
+ box->n_events = n;
+
+ return 0;
+}
+
+static int uncore_validate_group(struct zhaoxin_uncore_pmu *pmu,
+ struct perf_event *event)
+{
+ struct perf_event *leader = event->group_leader;
+ struct zhaoxin_uncore_box *fake_box;
+ int ret = -EINVAL, n;
+
+ fake_box = uncore_alloc_box(pmu->type, NUMA_NO_NODE);
+ if (!fake_box)
+ return -ENOMEM;
+
+ fake_box->pmu = pmu;
+ /*
+ * the event is not yet connected with its
+ * siblings therefore we must first collect
+ * existing siblings, then add the new event
+ * before we can simulate the scheduling
+ */
+ n = uncore_collect_events(fake_box, leader, true);
+ if (n < 0)
+ goto out;
+
+ fake_box->n_events = n;
+ n = uncore_collect_events(fake_box, event, false);
+ if (n < 0)
+ goto out;
+
+ fake_box->n_events = n;
+
+ ret = uncore_assign_events(fake_box, NULL, n);
+out:
+ kfree(fake_box);
+ return ret;
+}
+
+static void uncore_pmu_event_del(struct perf_event *event, int flags)
+{
+ struct zhaoxin_uncore_box *box = uncore_event_to_box(event);
+ int i;
+
+ uncore_pmu_event_stop(event, PERF_EF_UPDATE);
+
+ for (i = 0; i < box->n_events; i++) {
+ if (event == box->event_list[i]) {
+ uncore_put_event_constraint(box, event);
+
+ for (++i; i < box->n_events; i++)
+ box->event_list[i - 1] = box->event_list[i];
+
+ --box->n_events;
+ break;
+ }
+ }
+
+ event->hw.idx = -1;
+ event->hw.last_tag = ~0ULL;
+}
+
+static void uncore_pmu_event_read(struct perf_event *event)
+{
+ struct zhaoxin_uncore_box *box = uncore_event_to_box(event);
+
+ uncore_perf_event_update(box, event);
+}
+
+static int uncore_pmu_event_init(struct perf_event *event)
+{
+ struct zhaoxin_uncore_pmu *pmu;
+ struct zhaoxin_uncore_box *box;
+ struct hw_perf_event *hwc = &event->hw;
+ int ret;
+
+ if (event->attr.type != event->pmu->type)
+ return -ENOENT;
+
+ pmu = uncore_event_to_pmu(event);
+ /* no device found for this pmu */
+ if (pmu->func_id < 0)
+ return -ENOENT;
+
+ /* Sampling not supported yet */
+ if (hwc->sample_period)
+ return -EINVAL;
+
+ /*
+ * Place all uncore events for a particular physical package
+ * onto a single cpu
+ */
+ if (event->cpu < 0)
+ return -EINVAL;
+ box = uncore_pmu_to_box(pmu, event->cpu);
+ if (!box || box->cpu < 0)
+ return -EINVAL;
+ event->cpu = box->cpu;
+ event->pmu_private = box;
+
+ event->event_caps |= PERF_EV_CAP_READ_ACTIVE_PKG;
+
+ event->hw.idx = -1;
+ event->hw.last_tag = ~0ULL;
+ event->hw.extra_reg.idx = EXTRA_REG_NONE;
+ event->hw.branch_reg.idx = EXTRA_REG_NONE;
+
+ if (event->attr.config == UNCORE_FIXED_EVENT) {
+ /* no fixed counter */
+ if (!pmu->type->fixed_ctl)
+ return -EINVAL;
+ /*
+ * if there is only one fixed counter, only the first pmu
+ * can access the fixed counter
+ */
+ if (pmu->type->single_fixed && pmu->pmu_idx > 0)
+ return -EINVAL;
+
+ /* fixed counters have event field hardcoded to zero */
+ hwc->config = 0ULL;
+ } else {
+ hwc->config = event->attr.config &
+ (pmu->type->event_mask | ((u64)pmu->type->event_mask_ext << 32));
+ if (pmu->type->ops->hw_config) {
+ ret = pmu->type->ops->hw_config(box, event);
+ if (ret)
+ return ret;
+ }
+ }
+
+ if (event->group_leader != event)
+ ret = uncore_validate_group(pmu, event);
+ else
+ ret = 0;
+
+ return ret;
+}
+
+static ssize_t uncore_get_attr_cpumask(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ return cpumap_print_to_pagebuf(true, buf, &uncore_cpu_mask);
+}
+
+static DEVICE_ATTR(cpumask, S_IRUGO, uncore_get_attr_cpumask, NULL);
+
+static struct attribute *uncore_pmu_attrs[] = {
+ &dev_attr_cpumask.attr,
+ NULL,
+};
+
+static const struct attribute_group uncore_pmu_attr_group = {
+ .attrs = uncore_pmu_attrs,
+};
+
+static void uncore_pmu_unregister(struct zhaoxin_uncore_pmu *pmu)
+{
+ if (!pmu->registered)
+ return;
+ perf_pmu_unregister(&pmu->pmu);
+ pmu->registered = false;
+}
+
+static void uncore_free_boxes(struct zhaoxin_uncore_pmu *pmu)
+{
+ int package;
+
+ for (package = 0; package < max_packages; package++)
+ kfree(pmu->boxes[package]);
+ kfree(pmu->boxes);
+}
+
+static void uncore_type_exit(struct zhaoxin_uncore_type *type)
+{
+ struct zhaoxin_uncore_pmu *pmu = type->pmus;
+ int i;
+
+ if (pmu) {
+ for (i = 0; i < type->num_boxes; i++, pmu++) {
+ uncore_pmu_unregister(pmu);
+ uncore_free_boxes(pmu);
+ }
+ kfree(type->pmus);
+ type->pmus = NULL;
+ }
+ kfree(type->events_group);
+ type->events_group = NULL;
+}
+
+static void uncore_types_exit(struct zhaoxin_uncore_type **types)
+{
+ for (; *types; types++)
+ uncore_type_exit(*types);
+}
+
+static int __init uncore_type_init(struct zhaoxin_uncore_type *type, bool setid)
+{
+ struct zhaoxin_uncore_pmu *pmus;
+ size_t size;
+ int i, j;
+
+ pmus = kcalloc(type->num_boxes, sizeof(*pmus), GFP_KERNEL);
+ if (!pmus)
+ return -ENOMEM;
+
+ size = max_packages*sizeof(struct zhaoxin_uncore_box *);
+
+ for (i = 0; i < type->num_boxes; i++) {
+ pmus[i].func_id = setid ? i : -1;
+ pmus[i].pmu_idx = i;
+ pmus[i].type = type;
+ pmus[i].boxes = kzalloc(size, GFP_KERNEL);
+ if (!pmus[i].boxes)
+ goto err;
+ }
+
+ type->pmus = pmus;
+ type->unconstrainted = (struct event_constraint)
+ __EVENT_CONSTRAINT(0, (1ULL << type->num_counters) - 1,
+ 0, type->num_counters, 0, 0);
+
+ if (type->event_descs) {
+ struct {
+ struct attribute_group group;
+ struct attribute *attrs[];
+ } *attr_group;
+ for (i = 0; type->event_descs[i].attr.attr.name; i++)
+ ;
+
+ attr_group = kzalloc(struct_size(attr_group, attrs, i + 1), GFP_KERNEL);
+ if (!attr_group)
+ goto err;
+
+ attr_group->group.name = "events";
+ attr_group->group.attrs = attr_group->attrs;
+
+ for (j = 0; j < i; j++)
+ attr_group->attrs[j] = &type->event_descs[j].attr.attr;
+
+ type->events_group = &attr_group->group;
+ }
+
+ type->pmu_group = &uncore_pmu_attr_group;
+
+ return 0;
+
+err:
+ for (i = 0; i < type->num_boxes; i++)
+ kfree(pmus[i].boxes);
+ kfree(pmus);
+
+ return -ENOMEM;
+}
+
+static int __init
+uncore_types_init(struct zhaoxin_uncore_type **types, bool setid)
+{
+ int ret;
+
+ for (; *types; types++) {
+ ret = uncore_type_init(*types, setid);
+ if (ret)
+ return ret;
+ }
+ return 0;
+}
+
+static void uncore_change_type_ctx(struct zhaoxin_uncore_type *type, int old_cpu,
+ int new_cpu)
+{
+ struct zhaoxin_uncore_pmu *pmu = type->pmus;
+ struct zhaoxin_uncore_box *box;
+ int i, package;
+
+ package = topology_logical_package_id(old_cpu < 0 ? new_cpu : old_cpu);
+ for (i = 0; i < type->num_boxes; i++, pmu++) {
+ box = pmu->boxes[package];
+ if (!box)
+ continue;
+
+ if (old_cpu < 0) {
+ WARN_ON_ONCE(box->cpu != -1);
+ box->cpu = new_cpu;
+ continue;
+ }
+
+ WARN_ON_ONCE(box->cpu != old_cpu);
+ box->cpu = -1;
+ if (new_cpu < 0)
+ continue;
+
+ uncore_pmu_cancel_hrtimer(box);
+ perf_pmu_migrate_context(&pmu->pmu, old_cpu, new_cpu);
+ box->cpu = new_cpu;
+ }
+}
+
+static void uncore_change_context(struct zhaoxin_uncore_type **uncores,
+ int old_cpu, int new_cpu)
+{
+ for (; *uncores; uncores++)
+ uncore_change_type_ctx(*uncores, old_cpu, new_cpu);
+}
+
+static void uncore_box_unref(struct zhaoxin_uncore_type **types, int id)
+{
+ struct zhaoxin_uncore_type *type;
+ struct zhaoxin_uncore_pmu *pmu;
+ struct zhaoxin_uncore_box *box;
+ int i;
+
+ for (; *types; types++) {
+ type = *types;
+ pmu = type->pmus;
+ for (i = 0; i < type->num_boxes; i++, pmu++) {
+ box = pmu->boxes[id];
+ if (box && atomic_dec_return(&box->refcnt) == 0)
+ uncore_box_exit(box);
+ }
+ }
+}
+
+static int uncore_event_cpu_offline(unsigned int cpu)
+{
+ int package, target;
+
+ /* Check if exiting cpu is used for collecting uncore events */
+ if (!cpumask_test_and_clear_cpu(cpu, &uncore_cpu_mask))
+ goto unref;
+ /* Find a new cpu to collect uncore events */
+ target = cpumask_any_but(topology_core_cpumask(cpu), cpu);
+
+ /* Migrate uncore events to the new target */
+ if (target < nr_cpu_ids)
+ cpumask_set_cpu(target, &uncore_cpu_mask);
+ else
+ target = -1;
+
+ uncore_change_context(uncore_msr_uncores, cpu, target);
+
+unref:
+ /* Clear the references */
+ package = topology_logical_package_id(cpu);
+ uncore_box_unref(uncore_msr_uncores, package);
+ return 0;
+}
+
+static int allocate_boxes(struct zhaoxin_uncore_type **types,
+ unsigned int package, unsigned int cpu)
+{
+ struct zhaoxin_uncore_box *box, *tmp;
+ struct zhaoxin_uncore_type *type;
+ struct zhaoxin_uncore_pmu *pmu;
+ LIST_HEAD(allocated);
+ int i;
+
+ /* Try to allocate all required boxes */
+ for (; *types; types++) {
+ type = *types;
+ pmu = type->pmus;
+ for (i = 0; i < type->num_boxes; i++, pmu++) {
+ if (pmu->boxes[package])
+ continue;
+ box = uncore_alloc_box(type, cpu_to_node(cpu));
+ if (!box)
+ goto cleanup;
+ box->pmu = pmu;
+ box->package_id = package;
+ list_add(&box->active_list, &allocated);
+ }
+ }
+ /* Install them in the pmus */
+ list_for_each_entry_safe(box, tmp, &allocated, active_list) {
+ list_del_init(&box->active_list);
+ box->pmu->boxes[package] = box;
+ }
+ return 0;
+
+cleanup:
+ list_for_each_entry_safe(box, tmp, &allocated, active_list) {
+ list_del_init(&box->active_list);
+ kfree(box);
+ }
+ return -ENOMEM;
+}
+
+static int uncore_box_ref(struct zhaoxin_uncore_type **types,
+ int id, unsigned int cpu)
+{
+ struct zhaoxin_uncore_type *type;
+ struct zhaoxin_uncore_pmu *pmu;
+ struct zhaoxin_uncore_box *box;
+ int i, ret;
+
+ ret = allocate_boxes(types, id, cpu);
+ if (ret)
+ return ret;
+
+ for (; *types; types++) {
+ type = *types;
+ pmu = type->pmus;
+ for (i = 0; i < type->num_boxes; i++, pmu++) {
+ box = pmu->boxes[id];
+ if (box && atomic_inc_return(&box->refcnt) == 1)
+ uncore_box_init(box);
+ }
+ }
+ return 0;
+}
+
+static int uncore_event_cpu_online(unsigned int cpu)
+{
+ int package, target, msr_ret;
+
+ package = topology_logical_package_id(cpu);
+ msr_ret = uncore_box_ref(uncore_msr_uncores, package, cpu);
+
+ if (msr_ret)
+ return -ENOMEM;
+
+ /*
+ * Check if there is an online cpu in the package
+ * which collects uncore events already.
+ */
+ target = cpumask_any_and(&uncore_cpu_mask, topology_core_cpumask(cpu));
+ if (target < nr_cpu_ids)
+ return 0;
+
+ cpumask_set_cpu(cpu, &uncore_cpu_mask);
+
+ if (!msr_ret)
+ uncore_change_context(uncore_msr_uncores, -1, cpu);
+
+ return 0;
+}
+
+static int uncore_pmu_register(struct zhaoxin_uncore_pmu *pmu)
+{
+ int ret;
+
+ if (!pmu->type->pmu) {
+ pmu->pmu = (struct pmu) {
+ .attr_groups = pmu->type->attr_groups,
+ .task_ctx_nr = perf_invalid_context,
+ .event_init = uncore_pmu_event_init,
+ .add = uncore_pmu_event_add,
+ .del = uncore_pmu_event_del,
+ .start = uncore_pmu_event_start,
+ .stop = uncore_pmu_event_stop,
+ .read = uncore_pmu_event_read,
+ .module = THIS_MODULE,
+ };
+ } else {
+ pmu->pmu = *pmu->type->pmu;
+ pmu->pmu.attr_groups = pmu->type->attr_groups;
+ }
+
+ if (pmu->type->num_boxes == 1) {
+ if (strlen(pmu->type->name) > 0)
+ sprintf(pmu->name, "uncore_%s", pmu->type->name);
+ else
+ sprintf(pmu->name, "uncore");
+ } else {
+ sprintf(pmu->name, "uncore_%s_%d", pmu->type->name,
+ pmu->pmu_idx);
+ }
+
+ ret = perf_pmu_register(&pmu->pmu, pmu->name, -1);
+ if (!ret)
+ pmu->registered = true;
+ return ret;
+}
+
+static int __init type_pmu_register(struct zhaoxin_uncore_type *type)
+{
+ int i, ret;
+
+ for (i = 0; i < type->num_boxes; i++) {
+ ret = uncore_pmu_register(&type->pmus[i]);
+ if (ret)
+ return ret;
+ }
+ return 0;
+}
+
+static int __init uncore_msr_pmus_register(void)
+{
+ struct zhaoxin_uncore_type **types = uncore_msr_uncores;
+ int ret;
+
+ for (; *types; types++) {
+ ret = type_pmu_register(*types);
+ if (ret)
+ return ret;
+ }
+ return 0;
+}
+
+static int __init uncore_cpu_init(void)
+{
+ int ret;
+
+ ret = uncore_types_init(uncore_msr_uncores, true);
+ if (ret)
+ goto err;
+
+ ret = uncore_msr_pmus_register();
+ if (ret)
+ goto err;
+ return 0;
+err:
+ uncore_types_exit(uncore_msr_uncores);
+ uncore_msr_uncores = empty_uncore;
+ return ret;
+}
+
+
+#define CENTAUR_UNCORE_MODEL_MATCH(model, init) \
+ { X86_VENDOR_CENTAUR, 7, model, X86_FEATURE_ANY, (unsigned long)&init }
+
+#define ZHAOXIN_UNCORE_MODEL_MATCH(model, init) \
+ { X86_VENDOR_ZHAOXIN, 7, model, X86_FEATURE_ANY, (unsigned long)&init }
+
+struct zhaoxin_uncore_init_fun {
+ void (*cpu_init)(void);
+};
+
+void chx_uncore_cpu_init(void)
+{
+ uncore_msr_uncores = chx_msr_uncores;
+}
+
+static const struct zhaoxin_uncore_init_fun chx_uncore_init __initconst = {
+ .cpu_init = chx_uncore_cpu_init,
+};
+
+static const struct x86_cpu_id zhaoxin_uncore_match[] __initconst = {
+ CENTAUR_UNCORE_MODEL_MATCH(ZHAOXIN_FAM7_CHX001, chx_uncore_init),
+ CENTAUR_UNCORE_MODEL_MATCH(ZHAOXIN_FAM7_CHX002, chx_uncore_init),
+ ZHAOXIN_UNCORE_MODEL_MATCH(ZHAOXIN_FAM7_CHX001, chx_uncore_init),
+ ZHAOXIN_UNCORE_MODEL_MATCH(ZHAOXIN_FAM7_CHX002, chx_uncore_init),
+ {},
+};
+
+MODULE_DEVICE_TABLE(x86cpu, zhaoxin_uncore_match);
+
+static int __init zhaoxin_uncore_init(void)
+{
+ const struct x86_cpu_id *id;
+ struct zhaoxin_uncore_init_fun *uncore_init;
+ int cret = 0, ret;
+
+ id = x86_match_cpu(zhaoxin_uncore_match);
+
+ if (!id)
+ return -ENODEV;
+
+ if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
+ return -ENODEV;
+
+ max_packages = topology_max_packages();
+
+ pr_info("welcome to uncore!\n");
+
+ uncore_init = (struct zhaoxin_uncore_init_fun *)id->driver_data;
+
+ if (uncore_init->cpu_init) {
+ uncore_init->cpu_init();
+ cret = uncore_cpu_init();
+ }
+
+ if (cret)
+ return -ENODEV;
+
+ ret = cpuhp_setup_state(CPUHP_AP_PERF_X86_UNCORE_ONLINE,
+ "perf/x86/zhaoxin/uncore:online",
+ uncore_event_cpu_online,
+ uncore_event_cpu_offline);
+ if (ret)
+ goto err;
+
+ pr_info("uncore init success!\n");
+ return 0;
+err:
+ uncore_types_exit(uncore_msr_uncores);
+ return ret;
+}
+module_init(zhaoxin_uncore_init);
+
+static void __exit zhaoxin_uncore_exit(void)
+{
+ cpuhp_remove_state(CPUHP_AP_PERF_X86_UNCORE_ONLINE);
+ uncore_types_exit(uncore_msr_uncores);
+}
+module_exit(zhaoxin_uncore_exit);
+
+MODULE_LICENSE("GPL");
diff --git a/arch/x86/events/zhaoxin/uncore.h b/arch/x86/events/zhaoxin/uncore.h
new file mode 100644
index 000000000000..3521123dc95d
--- /dev/null
+++ b/arch/x86/events/zhaoxin/uncore.h
@@ -0,0 +1,308 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#include <linux/slab.h>
+#include <linux/pci.h>
+#include <asm/apicdef.h>
+#include <linux/io-64-nonatomic-lo-hi.h>
+
+#include <linux/perf_event.h>
+#include "../perf_event.h"
+
+#define ZHAOXIN_FAM7_CHX001 0x1b
+#define ZHAOXIN_FAM7_CHX002 0x3b
+
+#define UNCORE_PMU_NAME_LEN 32
+#define UNCORE_PMU_HRTIMER_INTERVAL (60LL * NSEC_PER_SEC)
+#define UNCORE_CHX_IMC_HRTIMER_INTERVAL (5ULL * NSEC_PER_SEC)
+
+
+#define UNCORE_FIXED_EVENT 0xff
+#define UNCORE_PMC_IDX_MAX_GENERIC 4
+#define UNCORE_PMC_IDX_MAX_FIXED 1
+#define UNCORE_PMC_IDX_FIXED UNCORE_PMC_IDX_MAX_GENERIC
+
+#define UNCORE_PMC_IDX_MAX (UNCORE_PMC_IDX_FIXED + 1)
+
+struct zhaoxin_uncore_ops;
+struct zhaoxin_uncore_pmu;
+struct zhaoxin_uncore_box;
+struct uncore_event_desc;
+
+struct zhaoxin_uncore_type {
+ const char *name;
+ int num_counters;
+ int num_boxes;
+ int perf_ctr_bits;
+ int fixed_ctr_bits;
+ unsigned perf_ctr;
+ unsigned event_ctl;
+ unsigned event_mask;
+ unsigned event_mask_ext;
+ unsigned fixed_ctr;
+ unsigned fixed_ctl;
+ unsigned box_ctl;
+ unsigned msr_offset;
+ unsigned num_shared_regs:8;
+ unsigned single_fixed:1;
+ unsigned pair_ctr_ctl:1;
+ unsigned *msr_offsets;
+ struct event_constraint unconstrainted;
+ struct event_constraint *constraints;
+ struct zhaoxin_uncore_pmu *pmus;
+ struct zhaoxin_uncore_ops *ops;
+ struct uncore_event_desc *event_descs;
+ const struct attribute_group *attr_groups[4];
+ struct pmu *pmu; /* for custom pmu ops */
+};
+
+#define pmu_group attr_groups[0]
+#define format_group attr_groups[1]
+#define events_group attr_groups[2]
+
+struct zhaoxin_uncore_ops {
+ void (*init_box)(struct zhaoxin_uncore_box *);
+ void (*exit_box)(struct zhaoxin_uncore_box *);
+ void (*disable_box)(struct zhaoxin_uncore_box *);
+ void (*enable_box)(struct zhaoxin_uncore_box *);
+ void (*disable_event)(struct zhaoxin_uncore_box *, struct perf_event *);
+ void (*enable_event)(struct zhaoxin_uncore_box *, struct perf_event *);
+ u64 (*read_counter)(struct zhaoxin_uncore_box *, struct perf_event *);
+ int (*hw_config)(struct zhaoxin_uncore_box *, struct perf_event *);
+ struct event_constraint *(*get_constraint)(struct zhaoxin_uncore_box *,
+ struct perf_event *);
+ void (*put_constraint)(struct zhaoxin_uncore_box *, struct perf_event *);
+};
+
+struct zhaoxin_uncore_pmu {
+ struct pmu pmu;
+ char name[UNCORE_PMU_NAME_LEN];
+ int pmu_idx;
+ int func_id;
+ bool registered;
+ atomic_t activeboxes;
+ struct zhaoxin_uncore_type *type;
+ struct zhaoxin_uncore_box **boxes;
+};
+
+struct zhaoxin_uncore_extra_reg {
+ raw_spinlock_t lock;
+ u64 config, config1, config2;
+ atomic_t ref;
+};
+
+struct zhaoxin_uncore_box {
+ int pci_phys_id;
+ int package_id; /*Package ID */
+ int n_active; /* number of active events */
+ int n_events;
+ int cpu; /* cpu to collect events */
+ unsigned long flags;
+ atomic_t refcnt;
+ struct perf_event *events[UNCORE_PMC_IDX_MAX];
+ struct perf_event *event_list[UNCORE_PMC_IDX_MAX];
+ struct event_constraint *event_constraint[UNCORE_PMC_IDX_MAX];
+ unsigned long active_mask[BITS_TO_LONGS(UNCORE_PMC_IDX_MAX)];
+ u64 tags[UNCORE_PMC_IDX_MAX];
+ struct pci_dev *pci_dev;
+ struct zhaoxin_uncore_pmu *pmu;
+ u64 hrtimer_duration; /* hrtimer timeout for this box */
+ struct hrtimer hrtimer;
+ struct list_head list;
+ struct list_head active_list;
+ void __iomem *io_addr;
+ struct zhaoxin_uncore_extra_reg shared_regs[0];
+};
+
+#define UNCORE_BOX_FLAG_INITIATED 0
+
+struct uncore_event_desc {
+ struct kobj_attribute attr;
+ const char *config;
+};
+
+ssize_t zx_uncore_event_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf);
+
+#define ZHAOXIN_UNCORE_EVENT_DESC(_name, _config) \
+{ \
+ .attr = __ATTR(_name, 0444, zx_uncore_event_show, NULL), \
+ .config = _config, \
+}
+
+#define DEFINE_UNCORE_FORMAT_ATTR(_var, _name, _format) \
+static ssize_t __uncore_##_var##_show(struct kobject *kobj, \
+ struct kobj_attribute *attr, \
+ char *page) \
+{ \
+ BUILD_BUG_ON(sizeof(_format) >= PAGE_SIZE); \
+ return sprintf(page, _format "\n"); \
+} \
+static struct kobj_attribute format_attr_##_var = \
+ __ATTR(_name, 0444, __uncore_##_var##_show, NULL)
+
+static inline bool uncore_pmc_fixed(int idx)
+{
+ return idx == UNCORE_PMC_IDX_FIXED;
+}
+
+static inline unsigned uncore_msr_box_offset(struct zhaoxin_uncore_box *box)
+{
+ struct zhaoxin_uncore_pmu *pmu = box->pmu;
+
+ return pmu->type->msr_offsets ?
+ pmu->type->msr_offsets[pmu->pmu_idx] :
+ pmu->type->msr_offset * pmu->pmu_idx;
+}
+
+static inline unsigned uncore_msr_box_ctl(struct zhaoxin_uncore_box *box)
+{
+ if (!box->pmu->type->box_ctl)
+ return 0;
+ return box->pmu->type->box_ctl + uncore_msr_box_offset(box);
+}
+
+static inline unsigned uncore_msr_fixed_ctl(struct zhaoxin_uncore_box *box)
+{
+ if (!box->pmu->type->fixed_ctl)
+ return 0;
+ return box->pmu->type->fixed_ctl + uncore_msr_box_offset(box);
+}
+
+static inline unsigned uncore_msr_fixed_ctr(struct zhaoxin_uncore_box *box)
+{
+ return box->pmu->type->fixed_ctr + uncore_msr_box_offset(box);
+}
+
+static inline
+unsigned uncore_msr_event_ctl(struct zhaoxin_uncore_box *box, int idx)
+{
+ return box->pmu->type->event_ctl +
+ (box->pmu->type->pair_ctr_ctl ? 2 * idx : idx) +
+ uncore_msr_box_offset(box);
+}
+
+static inline
+unsigned uncore_msr_perf_ctr(struct zhaoxin_uncore_box *box, int idx)
+{
+ return box->pmu->type->perf_ctr +
+ (box->pmu->type->pair_ctr_ctl ? 2 * idx : idx) +
+ uncore_msr_box_offset(box);
+}
+
+static inline
+unsigned uncore_fixed_ctl(struct zhaoxin_uncore_box *box)
+{
+ return uncore_msr_fixed_ctl(box);
+}
+
+static inline
+unsigned uncore_fixed_ctr(struct zhaoxin_uncore_box *box)
+{
+ return uncore_msr_fixed_ctr(box);
+}
+
+static inline
+unsigned uncore_event_ctl(struct zhaoxin_uncore_box *box, int idx)
+{
+ return uncore_msr_event_ctl(box, idx);
+}
+
+static inline
+unsigned uncore_perf_ctr(struct zhaoxin_uncore_box *box, int idx)
+{
+ return uncore_msr_perf_ctr(box, idx);
+}
+
+static inline int uncore_perf_ctr_bits(struct zhaoxin_uncore_box *box)
+{
+ return box->pmu->type->perf_ctr_bits;
+}
+
+static inline int uncore_fixed_ctr_bits(struct zhaoxin_uncore_box *box)
+{
+ return box->pmu->type->fixed_ctr_bits;
+}
+
+static inline int uncore_num_counters(struct zhaoxin_uncore_box *box)
+{
+ return box->pmu->type->num_counters;
+}
+
+static inline void uncore_disable_box(struct zhaoxin_uncore_box *box)
+{
+ if (box->pmu->type->ops->disable_box)
+ box->pmu->type->ops->disable_box(box);
+}
+
+static inline void uncore_enable_box(struct zhaoxin_uncore_box *box)
+{
+ if (box->pmu->type->ops->enable_box)
+ box->pmu->type->ops->enable_box(box);
+}
+
+static inline void uncore_disable_event(struct zhaoxin_uncore_box *box,
+ struct perf_event *event)
+{
+ box->pmu->type->ops->disable_event(box, event);
+}
+
+static inline void uncore_enable_event(struct zhaoxin_uncore_box *box,
+ struct perf_event *event)
+{
+ box->pmu->type->ops->enable_event(box, event);
+}
+
+static inline u64 uncore_read_counter(struct zhaoxin_uncore_box *box,
+ struct perf_event *event)
+{
+ return box->pmu->type->ops->read_counter(box, event);
+}
+
+static inline void uncore_box_init(struct zhaoxin_uncore_box *box)
+{
+ if (!test_and_set_bit(UNCORE_BOX_FLAG_INITIATED, &box->flags)) {
+ if (box->pmu->type->ops->init_box)
+ box->pmu->type->ops->init_box(box);
+ }
+}
+
+static inline void uncore_box_exit(struct zhaoxin_uncore_box *box)
+{
+ if (test_and_clear_bit(UNCORE_BOX_FLAG_INITIATED, &box->flags)) {
+ if (box->pmu->type->ops->exit_box)
+ box->pmu->type->ops->exit_box(box);
+ }
+}
+
+static inline bool uncore_box_is_fake(struct zhaoxin_uncore_box *box)
+{
+ return (box->package_id < 0);
+}
+
+static inline struct zhaoxin_uncore_pmu *uncore_event_to_pmu(struct perf_event *event)
+{
+ return container_of(event->pmu, struct zhaoxin_uncore_pmu, pmu);
+}
+
+static inline struct zhaoxin_uncore_box *uncore_event_to_box(struct perf_event *event)
+{
+ return event->pmu_private;
+}
+
+
+static struct zhaoxin_uncore_box *uncore_pmu_to_box(struct zhaoxin_uncore_pmu *pmu, int cpu);
+static u64 uncore_msr_read_counter(struct zhaoxin_uncore_box *box, struct perf_event *event);
+
+static void uncore_pmu_start_hrtimer(struct zhaoxin_uncore_box *box);
+static void uncore_pmu_cancel_hrtimer(struct zhaoxin_uncore_box *box);
+static void uncore_pmu_event_start(struct perf_event *event, int flags);
+static void uncore_pmu_event_stop(struct perf_event *event, int flags);
+static int uncore_pmu_event_add(struct perf_event *event, int flags);
+static void uncore_pmu_event_del(struct perf_event *event, int flags);
+static void uncore_pmu_event_read(struct perf_event *event);
+static void uncore_perf_event_update(struct zhaoxin_uncore_box *box, struct perf_event *event);
+struct event_constraint *
+uncore_get_constraint(struct zhaoxin_uncore_box *box, struct perf_event *event);
+void uncore_put_constraint(struct zhaoxin_uncore_box *box, struct perf_event *event);
+u64 uncore_shared_reg_config(struct zhaoxin_uncore_box *box, int idx);
+
+void chx_uncore_cpu_init(void);
diff --git a/arch/x86/kernel/cpu/perfctr-watchdog.c b/arch/x86/kernel/cpu/perfctr-watchdog.c
index 9556930cd8c1..a548d9104604 100644
--- a/arch/x86/kernel/cpu/perfctr-watchdog.c
+++ b/arch/x86/kernel/cpu/perfctr-watchdog.c
@@ -63,6 +63,10 @@ static inline unsigned int nmi_perfctr_msr_to_bit(unsigned int msr)
case 15:
return msr - MSR_P4_BPU_PERFCTR0;
}
+ break;
+ case X86_VENDOR_ZHAOXIN:
+ case X86_VENDOR_CENTAUR:
+ return msr - MSR_ARCH_PERFMON_PERFCTR0;
}
return 0;
}
@@ -92,6 +96,10 @@ static inline unsigned int nmi_evntsel_msr_to_bit(unsigned int msr)
case 15:
return msr - MSR_P4_BSU_ESCR0;
}
+ break;
+ case X86_VENDOR_ZHAOXIN:
+ case X86_VENDOR_CENTAUR:
+ return msr - MSR_ARCH_PERFMON_EVENTSEL0;
}
return 0;
--
2.20.1
[PATCH kernel-4.19 2/2] x86/speculation/swapgs: Exclude Zhaoxin CPUs from SWAPGS vulnerability
by LeoLiu-oc 25 Mar '21
mainline inclusion
from mainline-5.6
commit a84de2fa962c1b0551653fe245d6cb5f6129179c
category: x86/speculation/swapgs
--------------------------------
New Zhaoxin family 7 CPUs are not affected by the SWAPGS vulnerability.
So mark these CPUs in the cpu vulnerability whitelist accordingly.
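For reference, this whitelist bit is consumed in cpu_set_bug_bits(); a
minimal sketch of the mainline gating (not part of this hunk):

	if (!cpu_matches(cpu_vuln_whitelist, NO_SWAPGS))
		setup_force_cpu_bug(X86_BUG_SWAPGS);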
Signed-off-by: Tony W Wang-oc <TonyWWang-oc(a)zhaoxin.com>
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Link: https://lore.kernel.org/r/1579227872-26972-3-git-send-email-TonyWWang-oc@zh…
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
arch/x86/kernel/cpu/common.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 246f98153240..1d83e5f7c5a8 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1017,8 +1017,8 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
VULNWL_HYGON(X86_FAMILY_ANY, NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT),
/* Zhaoxin Family 7 */
- VULNWL(CENTAUR, 7, X86_MODEL_ANY, NO_SPECTRE_V2),
- VULNWL(ZHAOXIN, 7, X86_MODEL_ANY, NO_SPECTRE_V2),
+ VULNWL(CENTAUR, 7, X86_MODEL_ANY, NO_SPECTRE_V2 | NO_SWAPGS),
+ VULNWL(ZHAOXIN, 7, X86_MODEL_ANY, NO_SPECTRE_V2 | NO_SWAPGS),
{}
};
--
2.20.1
[PATCH kernel-4.19 1/2] x86/speculation/spectre_v2: Exclude Zhaoxin CPUs from SPECTRE_V2
by LeoLiu-oc 25 Mar '21
mainline inclusion
from mainline-5.6
commit 1e41a766c98b481400ab8c5a7aa8ea63a1bb03de
category: x86/speculation/spectre_v2
--------------------------------
New Zhaoxin family 7 CPUs are not affected by SPECTRE_V2. So define a
separate cpu_vuln_whitelist bit NO_SPECTRE_V2 and add these CPUs to the
cpu vulnerability whitelist.
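For reference, each whitelist entry is built by the VULNWL() macro shown
in the hunk below; a sketch of one expansion:

	/* VULNWL(CENTAUR, 7, X86_MODEL_ANY, NO_SPECTRE_V2) expands to: */
	{ X86_VENDOR_CENTAUR, 7, X86_MODEL_ANY, X86_FEATURE_ANY, NO_SPECTRE_V2 }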
Signed-off-by: Tony W Wang-oc <TonyWWang-oc(a)zhaoxin.com>
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Link: https://lore.kernel.org/r/1579227872-26972-2-git-send-email-TonyWWang-oc@zh…
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
arch/x86/kernel/cpu/common.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index a5954a2f8591..246f98153240 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -954,6 +954,7 @@ static void identify_cpu_without_cpuid(struct cpuinfo_x86 *c)
#define MSBDS_ONLY BIT(5)
#define NO_SWAPGS BIT(6)
#define NO_ITLB_MULTIHIT BIT(7)
+#define NO_SPECTRE_V2 BIT(8)
#define VULNWL(_vendor, _family, _model, _whitelist) \
{ X86_VENDOR_##_vendor, _family, _model, X86_FEATURE_ANY, _whitelist }
@@ -1014,6 +1015,10 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
/* FAMILY_ANY must be last, otherwise 0x0f - 0x12 matches won't work */
VULNWL_AMD(X86_FAMILY_ANY, NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT),
VULNWL_HYGON(X86_FAMILY_ANY, NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT),
+
+ /* Zhaoxin Family 7 */
+ VULNWL(CENTAUR, 7, X86_MODEL_ANY, NO_SPECTRE_V2),
+ VULNWL(ZHAOXIN, 7, X86_MODEL_ANY, NO_SPECTRE_V2),
{}
};
@@ -1068,7 +1073,9 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
return;
setup_force_cpu_bug(X86_BUG_SPECTRE_V1);
- setup_force_cpu_bug(X86_BUG_SPECTRE_V2);
+
+ if (!cpu_matches(cpu_vuln_whitelist, NO_SPECTRE_V2))
+ setup_force_cpu_bug(X86_BUG_SPECTRE_V2);
if (!cpu_matches(cpu_vuln_whitelist, NO_SSB) &&
!(ia32_cap & ARCH_CAP_SSB_NO) &&
--
2.20.1
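To make the mechanism in these two speculation patches concrete, here is a self-contained user-space mock-up of how a vendor/family whitelist match suppresses a bug bit. It is a simplified sketch, not the kernel's real x86_match_cpu()/cpu_matches() code; the struct name and string-based vendor field are assumptions made for illustration.

/* Simplified whitelist lookup; not kernel code. */
#include <stdio.h>
#include <string.h>

#define NO_SWAPGS     (1u << 6)
#define NO_SPECTRE_V2 (1u << 8)

struct vuln_entry {
        const char  *vendor;    /* stand-in for X86_VENDOR_* */
        unsigned int family;
        unsigned int whitelist; /* NO_* flags that apply */
};

static const struct vuln_entry cpu_vuln_whitelist[] = {
        /* Zhaoxin family 7, as added by the two patches above */
        { "centaur", 7, NO_SPECTRE_V2 | NO_SWAPGS },
        { "zhaoxin", 7, NO_SPECTRE_V2 | NO_SWAPGS },
        { 0 }
};

/* simplified analogue of cpu_matches(cpu_vuln_whitelist, which) */
static int cpu_matches(const char *vendor, unsigned int family,
                       unsigned int which)
{
        const struct vuln_entry *e;

        for (e = cpu_vuln_whitelist; e->vendor; e++)
                if (!strcmp(e->vendor, vendor) && e->family == family)
                        return !!(e->whitelist & which);
        return 0;
}

int main(void)
{
        if (!cpu_matches("zhaoxin", 7, NO_SPECTRE_V2))
                printf("would force X86_BUG_SPECTRE_V2\n");
        else
                printf("whitelisted: X86_BUG_SPECTRE_V2 not set\n");
        return 0;
}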
mainline inclusion
from mainline-5.5
commit 70f0c230031dfef3c9b3e37b2a8c18d3f7186fb2
category: x86/mce
Add support for more Zhaoxin CPUs.
--------------------------------
Newer Zhaoxin CPUs support LMCE compatible with Intel. Add support for that.
[ bp: Export functions and massage. ]
Signed-off-by: Tony W Wang-oc <TonyWWang-oc(a)zhaoxin.com>
Signed-off-by: Borislav Petkov <bp(a)suse.de>
Cc: CooperYan(a)zhaoxin.com
Cc: DavidWang(a)zhaoxin.com
Cc: HerryYang(a)zhaoxin.com
Cc: "H. Peter Anvin" <hpa(a)zytor.com>
Cc: Ingo Molnar <mingo(a)redhat.com>
Cc: linux-edac <linux-edac(a)vger.kernel.org>
Cc: QiyuanWang(a)zhaoxin.com
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Tony Luck <tony.luck(a)intel.com>
Cc: x86-ml <x86(a)kernel.org>
Link:
https://lkml.kernel.org/r/1568787573-1297-5-git-send-email-TonyWWang-oc@zha…
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
arch/x86/kernel/cpu/mce/core.c | 25 +++++++++++++++++++++++--
arch/x86/kernel/cpu/mce/intel.c | 4 ++--
arch/x86/kernel/cpu/mce/internal.h | 4 ++++
3 files changed, 29 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 71ec7afbabdf..8534b952af76 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -1125,6 +1125,13 @@ static bool __mc_check_crashing_cpu(int cpu)
u64 mcgstatus;
mcgstatus = mce_rdmsrl(MSR_IA32_MCG_STATUS);
+
+ if (boot_cpu_data.x86_vendor == X86_VENDOR_ZHAOXIN ||
+ boot_cpu_data.x86_vendor == X86_VENDOR_CENTAUR) {
+ if (mcgstatus & MCG_STATUS_LMCES)
+ return false;
+ }
+
if (mcgstatus & MCG_STATUS_RIPV) {
mce_wrmsrl(MSR_IA32_MCG_STATUS, 0);
return true;
@@ -1274,9 +1281,11 @@ void do_machine_check(struct pt_regs *regs, long error_code)
/*
* Check if this MCE is signaled to only this logical processor,
- * on Intel only.
+ * on Intel, Zhaoxin only.
*/
- if (m.cpuvendor == X86_VENDOR_INTEL)
+ if (m.cpuvendor == X86_VENDOR_INTEL ||
+ m.cpuvendor == X86_VENDOR_ZHAOXIN ||
+ m.cpuvendor == X86_VENDOR_CENTAUR)
lmce = m.mcgstatus & MCG_STATUS_LMCES;
/*
@@ -1745,9 +1754,15 @@ static void mce_zhaoxin_feature_init(struct cpuinfo_x86 *c)
}
intel_init_cmci();
+ intel_init_lmce();
mce_adjust_timer = cmci_intel_adjust_timer;
}
+static void mce_zhaoxin_feature_clear(struct cpuinfo_x86 *c)
+{
+ intel_clear_lmce();
+}
+
static void __mcheck_cpu_init_vendor(struct cpuinfo_x86 *c)
{
switch (c->x86_vendor) {
@@ -1781,6 +1796,12 @@ static void __mcheck_cpu_clear_vendor(struct cpuinfo_x86 *c)
case X86_VENDOR_INTEL:
mce_intel_feature_clear(c);
break;
+
+ case X86_VENDOR_ZHAOXIN:
+ case X86_VENDOR_CENTAUR:
+ mce_zhaoxin_feature_clear(c);
+ break;
+
default:
break;
}
diff --git a/arch/x86/kernel/cpu/mce/intel.c b/arch/x86/kernel/cpu/mce/intel.c
index 6a220c999a01..f6f3b2675164 100644
--- a/arch/x86/kernel/cpu/mce/intel.c
+++ b/arch/x86/kernel/cpu/mce/intel.c
@@ -445,7 +445,7 @@ void intel_init_cmci(void)
cmci_recheck();
}
-static void intel_init_lmce(void)
+void intel_init_lmce(void)
{
u64 val;
@@ -458,7 +458,7 @@ static void intel_init_lmce(void)
wrmsrl(MSR_IA32_MCG_EXT_CTL, val | MCG_EXT_CTL_LMCE_EN);
}
-static void intel_clear_lmce(void)
+void intel_clear_lmce(void)
{
u64 val;
diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h
index 99d73d18f2c4..22e8aa8c8fe7 100644
--- a/arch/x86/kernel/cpu/mce/internal.h
+++ b/arch/x86/kernel/cpu/mce/internal.h
@@ -53,12 +53,16 @@ bool mce_intel_cmci_poll(void);
void mce_intel_hcpu_update(unsigned long cpu);
void cmci_disable_bank(int bank);
void intel_init_cmci(void);
+void intel_init_lmce(void);
+void intel_clear_lmce(void);
#else
# define cmci_intel_adjust_timer mce_adjust_timer_default
static inline bool mce_intel_cmci_poll(void) { return false; }
static inline void mce_intel_hcpu_update(unsigned long cpu) { }
static inline void cmci_disable_bank(int bank) { }
static inline void intel_init_cmci(void) { }
+static inline void intel_init_lmce(void) { }
+static inline void intel_clear_lmce(void) { }
#endif
void mce_timer_kick(unsigned long interval);
--
2.20.1
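A brief illustration of the MCG_STATUS check this patch adds: if MCG_STATUS.LMCES is set, the machine check was delivered only to this logical CPU, so the cross-CPU rendezvous in do_machine_check() can be skipped. The sketch below is a user-space mock, not kernel code; bit positions follow the Intel SDM, which Zhaoxin implements compatibly.

/* Mock of the LMCE check; not kernel code. */
#include <stdio.h>
#include <stdint.h>

#define MCG_STATUS_RIPV  (1ULL << 0) /* restart IP valid          */
#define MCG_STATUS_MCIP  (1ULL << 2) /* machine check in progress */
#define MCG_STATUS_LMCES (1ULL << 3) /* local machine check       */

static int mce_is_local(uint64_t mcgstatus)
{
        return !!(mcgstatus & MCG_STATUS_LMCES);
}

int main(void)
{
        uint64_t mcgstatus = MCG_STATUS_MCIP | MCG_STATUS_LMCES;

        printf("local MCE: %s\n", mce_is_local(mcgstatus) ? "yes" : "no");
        return 0;
}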
mainline inclusion
from mainline-5.5
commit 5a3d56a034be9e8e87a6cb9ed3f2928184db1417
category: x86/mce
Add support for more Zhaoxin CPUs.
--------------------------------
All newer Zhaoxin CPUs support CMCI and are compatible with Intel's Machine-Check Architecture. Add that support for Zhaoxin CPUs.
[ bp: Massage comments and export intel_init_cmci(). ]
Signed-off-by: Tony W Wang-oc <TonyWWang-oc(a)zhaoxin.com>
Signed-off-by: Borislav Petkov <bp(a)suse.de>
Cc: CooperYan(a)zhaoxin.com
Cc: DavidWang(a)zhaoxin.com
Cc: HerryYang(a)zhaoxin.com
Cc: "H. Peter Anvin" <hpa(a)zytor.com>
Cc: Ingo Molnar <mingo(a)redhat.com>
Cc: linux-edac <linux-edac(a)vger.kernel.org>
Cc: QiyuanWang(a)zhaoxin.com
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Tony Luck <tony.luck(a)intel.com>
Cc: x86-ml <x86(a)kernel.org>
Link:
https://lkml.kernel.org/r/1568787573-1297-4-git-send-email-TonyWWang-oc@zha…
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
arch/x86/kernel/cpu/mce/core.c | 30 +++++++++++++++++++-----------
arch/x86/kernel/cpu/mce/intel.c | 7 +++++--
arch/x86/kernel/cpu/mce/internal.h | 2 ++
3 files changed, 26 insertions(+), 13 deletions(-)
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index dce0fbd4cb0f..71ec7afbabdf 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -1726,19 +1726,26 @@ static void __mcheck_cpu_init_early(struct cpuinfo_x86 *c)
}
}
-static void mce_centaur_feature_init(struct cpuinfo_x86 *c)
+static void mce_zhaoxin_feature_init(struct cpuinfo_x86 *c)
{
struct mca_config *cfg = &mca_cfg;
-
- /*
- * All newer Centaur CPUs support MCE broadcasting. Enable
- * synchronization with a one second timeout.
- */
- if ((c->x86 == 6 && c->x86_model == 0xf && c->x86_stepping >= 0xe) ||
- c->x86 > 6) {
- if (cfg->monarch_timeout < 0)
- cfg->monarch_timeout = USEC_PER_SEC;
+ /*
+ * These CPUs have MCA bank 8 which reports only one error type called
+ * SVAD (System View Address Decoder). The reporting of that error is
+ * controlled by IA32_MC8.CTL.0.
+ *
+ * If enabled, prefetching on these CPUs will cause SVAD MCE when
+ * virtual machines start and result in a system panic. Always disable
+ * bank 8 SVAD error by default.
+ */
+ if ((c->x86 == 7 && c->x86_model == 0x1b) ||
+ (c->x86_model == 0x19 || c->x86_model == 0x1f)) {
+ if (cfg->banks > 8)
+ mce_banks[8].ctl = 0;
+ }
+
+ intel_init_cmci();
+ mce_adjust_timer = cmci_intel_adjust_timer;
}
static void __mcheck_cpu_init_vendor(struct cpuinfo_x86 *c)
@@ -1759,7 +1766,8 @@ static void __mcheck_cpu_init_vendor(struct cpuinfo_x86 *c)
break;
case X86_VENDOR_CENTAUR:
- mce_centaur_feature_init(c);
+ case X86_VENDOR_ZHAOXIN:
+ mce_zhaoxin_feature_init(c);
break;
default:
diff --git a/arch/x86/kernel/cpu/mce/intel.c b/arch/x86/kernel/cpu/mce/intel.c
index 693c8cfac75d..6a220c999a01 100644
--- a/arch/x86/kernel/cpu/mce/intel.c
+++ b/arch/x86/kernel/cpu/mce/intel.c
@@ -85,8 +85,11 @@ static int cmci_supported(int *banks)
* initialization is vendor keyed and this
* makes sure none of the backdoors are entered otherwise.
*/
- if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
+ if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL &&
+ boot_cpu_data.x86_vendor != X86_VENDOR_ZHAOXIN &&
+ boot_cpu_data.x86_vendor != X86_VENDOR_CENTAUR)
return 0;
+
if (!boot_cpu_has(X86_FEATURE_APIC) || lapic_get_maxlvt() < 6)
return 0;
rdmsrl(MSR_IA32_MCG_CAP, cap);
@@ -423,7 +426,7 @@ void cmci_disable_bank(int bank)
raw_spin_unlock_irqrestore(&cmci_discover_lock, flags);
}
-static void intel_init_cmci(void)
+void intel_init_cmci(void)
{
int banks;
diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h
index ceb67cd5918f..99d73d18f2c4 100644
--- a/arch/x86/kernel/cpu/mce/internal.h
+++ b/arch/x86/kernel/cpu/mce/internal.h
@@ -52,11 +52,13 @@ unsigned long cmci_intel_adjust_timer(unsigned long interval);
bool mce_intel_cmci_poll(void);
void mce_intel_hcpu_update(unsigned long cpu);
void cmci_disable_bank(int bank);
+void intel_init_cmci(void);
#else
# define cmci_intel_adjust_timer mce_adjust_timer_default
static inline bool mce_intel_cmci_poll(void) { return false; }
static inline void mce_intel_hcpu_update(unsigned long cpu) { }
static inline void cmci_disable_bank(int bank) { }
+static inline void intel_init_cmci(void) { }
#endif
void mce_timer_kick(unsigned long interval);
--
2.20.1
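For the bank-8 quirk above, a small sketch of why clearing mce_banks[8].ctl masks SVAD reporting. It assumes the standard Intel MCA MSR layout, which Zhaoxin follows: bank x's control MSR sits at 0x400 + 4*x, and each set CTL bit enables one error type for that bank. User-space mock only, not kernel code.

/* Mock of the bank-8 CTL computation; not kernel code. */
#include <stdio.h>
#include <stdint.h>

#define MSR_IA32_MC0_CTL    0x400
#define MSR_IA32_MCx_CTL(x) (MSR_IA32_MC0_CTL + 4 * (x))

int main(void)
{
        uint64_t bank8_ctl = 0; /* as set by the patch above */

        printf("bank 8 CTL = MSR 0x%x, value %#llx -> SVAD %s\n",
               MSR_IA32_MCx_CTL(8), (unsigned long long)bank8_ctl,
               (bank8_ctl & 1) ? "enabled" : "disabled");
        return 0;
}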
[PATCH kernel-4.19 2/2] x86/acpi/cstate: Add Zhaoxin processors support for cache flush policy in C3
by LeoLiu-oc 25 Mar '21
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
arch/x86/kernel/acpi/cstate.c | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/arch/x86/kernel/acpi/cstate.c b/arch/x86/kernel/acpi/cstate.c
index 45745ecaa624..5eebe05b00fb 100644
--- a/arch/x86/kernel/acpi/cstate.c
+++ b/arch/x86/kernel/acpi/cstate.c
@@ -63,6 +63,21 @@ void acpi_processor_power_init_bm_check(struct acpi_processor_flags *flags,
c->x86_stepping >= 0x0e))
flags->bm_check = 1;
}
+
+ if (c->x86_vendor == X86_VENDOR_ZHAOXIN) {
+ /*
+ * All Zhaoxin CPUs that support C3 share cache.
+ * And caches should not be flushed by software while
+ * entering C3 type state.
+ */
+ flags->bm_check = 1;
+ /*
+ * On all recent Zhaoxin platforms, ARB_DISABLE is a nop.
+ * So, set bm_control to zero to indicate that ARB_DISABLE
+ * is not required while entering C3 type state.
+ */
+ flags->bm_control = 0;
+ }
}
EXPORT_SYMBOL(acpi_processor_power_init_bm_check);
--
2.20.1
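The two flags set here are consumed by the ACPI idle path when entering C3: bm_check=1 skips the software cache flush, and bm_control=0 skips ARB_DISABLE. The sketch below is a heavily simplified user-space rendering of that logic (the real code lives in drivers/acpi/processor_idle.c and is more involved); the struct and function names are stand-ins.

/* Simplified mock of the C3 entry decisions; not the ACPI driver. */
#include <stdio.h>

struct acpi_flags {
        int bm_check;   /* 1: HW keeps caches coherent, skip WBINVD */
        int bm_control; /* 0: ARB_DISABLE not needed/available      */
};

static void enter_c3(const struct acpi_flags *f)
{
        if (!f->bm_check)
                printf("WBINVD: flush caches before C3\n");
        if (f->bm_control)
                printf("ARB_DIS: disable bus-master arbitration\n");
        printf("enter C3\n");
}

int main(void)
{
        /* Values the patch sets for Zhaoxin CPUs */
        struct acpi_flags zhaoxin = { .bm_check = 1, .bm_control = 0 };

        enter_c3(&zhaoxin); /* neither WBINVD nor ARB_DIS is issued */
        return 0;
}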
25 Mar '21
mainline inclusion
from mainline-5.2
commit 987ddbe4870b53623d76ac64044c55a13e368113
category: x86/power
--------------------------------
For new Centaur CPUs the ucode will take care of the preservation of cache coherence between CPU cores in C-states regardless of how deep the C-states are. So, it is not necessary to flush the caches in software before entering C3. This useless operation will cause a performance drop for the cores which share some caches with the idling core.
Signed-off-by: David Wang <davidwang(a)zhaoxin.com>
Reviewed-by: Thomas Gleixner <tglx(a)linutronix.de>
Acked-by: Pavel Machek <pavel(a)ucw.cz>
Cc: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: brucechang(a)via-alliance.com
Cc: cooperyan(a)zhaoxin.com
Cc: len.brown(a)intel.com
Cc: linux-pm(a)kernel.org
Cc: qiyuanwang(a)zhaoxin.com
Cc: rjw(a)rjwysocki.net
Cc: timguo(a)zhaoxin.com
Link:
http://lkml.kernel.org/r/1545900110-2757-1-git-send-email-davidwang@zhaoxin…
[ Tidy up the comment. ]
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
arch/x86/kernel/acpi/cstate.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/arch/x86/kernel/acpi/cstate.c b/arch/x86/kernel/acpi/cstate.c
index 92539a1c3e31..45745ecaa624 100644
--- a/arch/x86/kernel/acpi/cstate.c
+++ b/arch/x86/kernel/acpi/cstate.c
@@ -51,6 +51,18 @@ void acpi_processor_power_init_bm_check(struct acpi_processor_flags *flags,
if (c->x86_vendor == X86_VENDOR_INTEL &&
(c->x86 > 0xf || (c->x86 == 6 && c->x86_model >= 0x0f)))
flags->bm_control = 0;
+ /*
+ * For all recent Centaur CPUs, the ucode will make sure that each
+ * core can keep cache coherence with each other while entering C3
+ * type state. So, set bm_check to 1 to indicate that the kernel
+ * doesn't need to execute a cache flush operation (WBINVD) when
+ * entering C3 type state.
+ */
+ if (c->x86_vendor == X86_VENDOR_CENTAUR) {
+ if (c->x86 > 6 || (c->x86 == 6 && c->x86_model == 0x0f &&
+ c->x86_stepping >= 0x0e))
+ flags->bm_check = 1;
+ }
}
EXPORT_SYMBOL(acpi_processor_power_init_bm_check);
--
2.20.1
[PATCH kernel-4.19 6/6] x86/cpu: Add detect extended topology for Zhaoxin CPUs
by LeoLiu-oc 25 Mar '21
Detect the extended topology information of Zhaoxin CPUs if available.
The patch is scheduled to be submitted to the kernel mainline in 2021.
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
arch/x86/kernel/cpu/centaur.c | 20 +++++++++++++++++++-
arch/x86/kernel/cpu/zhaoxin.c | 7 ++++++-
2 files changed, 25 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/cpu/centaur.c b/arch/x86/kernel/cpu/centaur.c
index 8735be464bc1..49b33cc78751 100644
--- a/arch/x86/kernel/cpu/centaur.c
+++ b/arch/x86/kernel/cpu/centaur.c
@@ -115,6 +115,21 @@ static void early_init_centaur(struct cpuinfo_x86 *c)
set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
set_cpu_cap(c, X86_FEATURE_NONSTOP_TSC);
}
+
+ if (c->cpuid_level >= 0x00000001) {
+ u32 eax, ebx, ecx, edx;
+
+ cpuid(0x00000001, &eax, &ebx, &ecx, &edx);
+ /*
+ * If HTT (EDX[28]) is set EBX[16:23] contain the number of
+ * apicids which are reserved per package. Store the resulting
+ * shift value for the package management code.
+ */
+ if (edx & (1U << 28))
+ c->x86_coreid_bits = get_count_order((ebx >> 16) & 0xff);
+ }
+ if (detect_extended_topology_early(c) < 0)
+ detect_ht_early(c);
}
static void centaur_detect_vmx_virtcap(struct cpuinfo_x86 *c)
@@ -158,8 +173,11 @@ static void init_centaur(struct cpuinfo_x86 *c)
clear_cpu_cap(c, 0*32+31);
#endif
early_init_centaur(c);
+ detect_extended_topology(c);
init_intel_cacheinfo(c);
- detect_num_cpu_cores(c);
+ if (!cpu_has(c, X86_FEATURE_XTOPOLOGY))
+ detect_num_cpu_cores(c);
+
#ifdef CONFIG_X86_32
detect_ht(c);
#endif
diff --git a/arch/x86/kernel/cpu/zhaoxin.c b/arch/x86/kernel/cpu/zhaoxin.c
index 452fd0a6bc61..b6fc969b3e74 100644
--- a/arch/x86/kernel/cpu/zhaoxin.c
+++ b/arch/x86/kernel/cpu/zhaoxin.c
@@ -85,6 +85,8 @@ static void early_init_zhaoxin(struct cpuinfo_x86 *c)
c->x86_coreid_bits = get_count_order((ebx >> 16) & 0xff);
}
+ if (detect_extended_topology_early(c) < 0)
+ detect_ht_early(c);
}
static void zhaoxin_detect_vmx_virtcap(struct cpuinfo_x86 *c)
@@ -115,8 +117,11 @@ static void zhaoxin_detect_vmx_virtcap(struct cpuinfo_x86 *c)
static void init_zhaoxin(struct cpuinfo_x86 *c)
{
early_init_zhaoxin(c);
+ detect_extended_topology(c);
init_intel_cacheinfo(c);
- detect_num_cpu_cores(c);
+ if (!cpu_has(c, X86_FEATURE_XTOPOLOGY))
+ detect_num_cpu_cores(c);
+
#ifdef CONFIG_X86_32
detect_ht(c);
#endif
--
2.20.1
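The CPUID(1) logic duplicated in both hunks can be tried directly from user space. The sketch below uses GCC's <cpuid.h>; count_order() is my own stand-in for the kernel's get_count_order() (ceil(log2(n))) and nothing here is kernel code.

/* User-space demo of the HTT/coreid-bits computation above. */
#include <stdio.h>
#include <cpuid.h>

static unsigned int count_order(unsigned int n)
{
        unsigned int order = 0;

        while ((1u << order) < n)
                order++;
        return order;
}

int main(void)
{
        unsigned int eax, ebx, ecx, edx;

        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
                return 1;

        if (edx & (1u << 28)) { /* HTT */
                unsigned int apicids = (ebx >> 16) & 0xff;

                printf("APIC IDs per package: %u -> coreid bits: %u\n",
                       apicids, count_order(apicids));
        } else {
                printf("HTT clear: one APIC ID per package\n");
        }
        return 0;
}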
25 Mar '21
Add Zhaoxin feature bits on Zhaoxin CPUs.
The patch is scheduled to be submitted to the kernel mainline in 2021.
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
arch/x86/include/asm/cpufeatures.h | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index f7f9604b10cc..48535113efa6 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -145,8 +145,12 @@
#define X86_FEATURE_HYPERVISOR ( 4*32+31) /* Running on a hypervisor */
/* VIA/Cyrix/Centaur-defined CPU features, CPUID level 0xC0000001, word 5 */
+#define X86_FEATURE_SM2 (5*32+0) /* sm2 present */
+#define X86_FEATURE_SM2_EN (5*32+1) /* sm2 enabled */
#define X86_FEATURE_XSTORE ( 5*32+ 2) /* "rng" RNG present (xstore) */
#define X86_FEATURE_XSTORE_EN ( 5*32+ 3) /* "rng_en" RNG enabled */
+#define X86_FEATURE_CCS (5*32+4) /* "sm3 sm4" present */
+#define X86_FEATURE_CCS_EN (5*32+5) /* "sm3_en sm4_en" enabled */
#define X86_FEATURE_XCRYPT ( 5*32+ 6) /* "ace" on-CPU crypto (xcrypt) */
#define X86_FEATURE_XCRYPT_EN ( 5*32+ 7) /* "ace_en" on-CPU crypto enabled */
#define X86_FEATURE_ACE2 ( 5*32+ 8) /* Advanced Cryptography Engine v2 */
@@ -155,6 +159,23 @@
#define X86_FEATURE_PHE_EN ( 5*32+11) /* PHE enabled */
#define X86_FEATURE_PMM ( 5*32+12) /* PadLock Montgomery Multiplier */
#define X86_FEATURE_PMM_EN ( 5*32+13) /* PMM enabled */
+#define X86_FEATURE_ZX_FMA (5*32+15) /* FMA supported */
+#define X86_FEATURE_PARALLAX (5*32+16) /* Adaptive P-state control present */
+#define X86_FEATURE_PARALLAX_EN (5*32+17) /* Adaptive P-state control enabled */
+#define X86_FEATURE_OVERSTRESS (5*32+18) /* Overstress Feature for auto overclock present */
+#define X86_FEATURE_OVERSTRESS_EN (5*32+19) /* Overstress Feature for auto overclock enabled */
+#define X86_FEATURE_TM3 (5*32+20) /* Thermal Monitor 3 present */
+#define X86_FEATURE_TM3_EN (5*32+21) /* Thermal Monitor 3 enabled */
+#define X86_FEATURE_RNG2 (5*32+22) /* 2nd generation of RNG present */
+#define X86_FEATURE_RNG2_EN (5*32+23) /* 2nd generation of RNG enabled */
+#define X86_FEATURE_SEM (5*32+24) /* SME feature present */
+#define X86_FEATURE_PHE2 (5*32+25) /* SHA384 and SHA512 present */
+#define X86_FEATURE_PHE2_EN (5*32+26) /* SHA384 and SHA512 enabled */
+#define X86_FEATURE_XMODX (5*32+27) /* "rsa" XMODEXP and MONTMUL2 instructions are present */
+#define X86_FEATURE_XMODX_EN (5*32+28) /* "rsa_en" XMODEXP and MONTMUL2 instructions are enabled */
+#define X86_FEATURE_VEX (5*32+29) /* VEX instructions are present */
+#define X86_FEATURE_VEX_EN (5*32+30) /* VEX instructions are enabled */
+#define X86_FEATURE_STK (5*32+31) /* STK are present */
/* More extended AMD flags: CPUID level 0x80000001, ECX, word 6 */
#define X86_FEATURE_LAHF_LM ( 6*32+ 0) /* LAHF/SAHF in long mode */
--
2.20.1
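These word-5 bits mirror CPUID leaf 0xC0000001 EDX one-for-one, so they can be probed from user space. A hedged sketch follows; the bit numbers are taken from the patch above, and the leaf is only implemented on VIA/Centaur/Zhaoxin parts, so the max-leaf guard is expected to fall through elsewhere.

/* Probe of the Centaur/Zhaoxin extended feature leaf; illustrative. */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
        unsigned int eax, ebx, ecx, edx;

        __cpuid(0xC0000000, eax, ebx, ecx, edx);
        if (eax < 0xC0000001) {
                printf("no Centaur/Zhaoxin extended feature leaf\n");
                return 0;
        }

        __cpuid(0xC0000001, eax, ebx, ecx, edx);
        printf("SM2 present:%u enabled:%u, XSTORE present:%u\n",
               !!(edx & (1u << 0)), !!(edx & (1u << 1)),
               !!(edx & (1u << 2)));
        return 0;
}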
[PATCH kernel-4.19 4/6] x86/cpu/centaur: Add Centaur family >=7 CPUs initialization support
by LeoLiu-oc 25 Mar '21
mainline inclusion
from mainline-5.9
commit 33b4711df4c1b3aec7c267c60fc24abccfadd40c
category: x86/cpu
--------------------------------
Add Centaur family >=7 CPUs specific initialization support.
Signed-off-by: Tony W Wang-oc <TonyWWang-oc(a)zhaoxin.com>
Signed-off-by: Borislav Petkov <bp(a)suse.de>
Link:
https://lkml.kernel.org/r/1599562666-31351-3-git-send-email-TonyWWang-oc@zh…
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
arch/x86/kernel/cpu/centaur.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/cpu/centaur.c b/arch/x86/kernel/cpu/centaur.c
index b3be281334e4..8735be464bc1 100644
--- a/arch/x86/kernel/cpu/centaur.c
+++ b/arch/x86/kernel/cpu/centaur.c
@@ -71,6 +71,9 @@ static void init_c3(struct cpuinfo_x86 *c)
c->x86_cache_alignment = c->x86_clflush_size * 2;
set_cpu_cap(c, X86_FEATURE_REP_GOOD);
}
+
+ if (c->x86 >= 7)
+ set_cpu_cap(c, X86_FEATURE_REP_GOOD);
}
enum {
@@ -101,7 +104,8 @@ static void early_init_centaur(struct cpuinfo_x86 *c)
if (c->x86 == 5)
set_cpu_cap(c, X86_FEATURE_CENTAUR_MCR);
#endif
- if (c->x86 == 6 && c->x86_model >= 0xf)
+ if ((c->x86 == 6 && c->x86_model >= 0xf) || (c->x86 >= 7))
set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
#ifdef CONFIG_X86_64
@@ -235,7 +239,7 @@ static void init_centaur(struct cpuinfo_x86 *c)
sprintf(c->x86_model_id, "WinChip %s", name);
}
#endif
- if (c->x86 == 6)
+ if (c->x86 == 6 || c->x86 >= 7)
init_c3(c);
#ifdef CONFIG_X86_64
set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC);
--
2.20.1
[PATCH kernel-4.19 3/6] x86/cpu/centaur: Replace two-condition switch-case with an if statement
by LeoLiu-oc 25 Mar '21
mainline inclusion
from mainline-5.9
commit 8687bdc04128b2bd16faaae11db10128ad0da7b8
category: x86/cpu
--------------------------------
Use normal if statements instead of a two-condition switch-case.
[ bp: Massage commit message. ]
Signed-off-by: Tony W Wang-oc <TonyWWang-oc(a)zhaoxin.com>
Signed-off-by: Borislav Petkov <bp(a)suse.de>
Link:
https://lkml.kernel.org/r/1599562666-31351-2-git-send-email-TonyWWang-oc@zh…
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
arch/x86/kernel/cpu/centaur.c | 23 ++++++++---------------
1 file changed, 8 insertions(+), 15 deletions(-)
diff --git a/arch/x86/kernel/cpu/centaur.c b/arch/x86/kernel/cpu/centaur.c
index b98529e50d6f..b3be281334e4 100644
--- a/arch/x86/kernel/cpu/centaur.c
+++ b/arch/x86/kernel/cpu/centaur.c
@@ -96,18 +96,14 @@ enum {
static void early_init_centaur(struct cpuinfo_x86 *c)
{
- switch (c->x86) {
#ifdef CONFIG_X86_32
- case 5:
- /* Emulate MTRRs using Centaur's MCR. */
+ /* Emulate MTRRs using Centaur's MCR. */
+ if (c->x86 == 5)
set_cpu_cap(c, X86_FEATURE_CENTAUR_MCR);
- break;
#endif
- case 6:
- if (c->x86_model >= 0xf)
- set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
- break;
- }
+ if (c->x86 == 6 && c->x86_model >= 0xf)
+ set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
+
#ifdef CONFIG_X86_64
set_cpu_cap(c, X86_FEATURE_SYSENTER32);
#endif
@@ -176,9 +172,8 @@ static void init_centaur(struct cpuinfo_x86 *c)
set_cpu_cap(c, X86_FEATURE_ARCH_PERFMON);
}
- switch (c->x86) {
#ifdef CONFIG_X86_32
- case 5:
+ if (c->x86 == 5) {
switch (c->x86_model) {
case 4:
name = "C6";
@@ -238,12 +233,10 @@ static void init_centaur(struct cpuinfo_x86 *c)
c->x86_cache_size = (cc>>24)+(dd>>24);
}
sprintf(c->x86_model_id, "WinChip %s", name);
- break;
+ }
#endif
- case 6:
+ if (c->x86 == 6)
init_c3(c);
- break;
- }
#ifdef CONFIG_X86_64
set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC);
#endif
--
2.20.1
[PATCH kernel-4.19 2/6] x86/cpu: Remove redundant cpu_detect_cache_sizes() call
by LeoLiu-oc 25 Mar '21
mainline inclusion
from mainline-5.6
commit 283bab9809786cf41798512f5c1e97f4b679ba96
category: x86/cpu
--------------------------------
Both functions call init_intel_cacheinfo() which computes L2 and L3 cache
sizes from CPUID(4). But then they also call cpu_detect_cache_sizes() a bit
later which computes ->x86_tlbsize and L2 size from CPUID(80000006).
However, the latter call is not needed because
- on these CPUs, CPUID(80000006).EBX for ->x86_tlbsize is reserved
- CPUID(80000006).ECX for the L2 size has the same result as CPUID(4)
Therefore, remove the latter call to simplify the code.
[ bp: Rewrite commit message. ]
Signed-off-by: Tony W Wang-oc <TonyWWang-oc(a)zhaoxin.com>
Signed-off-by: Borislav Petkov <bp(a)suse.de>
Link:
https://lkml.kernel.org/r/1579075257-6985-1-git-send-email-TonyWWang-oc@zha….
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
arch/x86/kernel/cpu/centaur.c | 2 --
arch/x86/kernel/cpu/zhaoxin.c | 2 --
2 files changed, 4 deletions(-)
diff --git a/arch/x86/kernel/cpu/centaur.c b/arch/x86/kernel/cpu/centaur.c
index 14433ff5b828..b98529e50d6f 100644
--- a/arch/x86/kernel/cpu/centaur.c
+++ b/arch/x86/kernel/cpu/centaur.c
@@ -71,8 +71,6 @@ static void init_c3(struct cpuinfo_x86 *c)
c->x86_cache_alignment = c->x86_clflush_size * 2;
set_cpu_cap(c, X86_FEATURE_REP_GOOD);
}
-
- cpu_detect_cache_sizes(c);
}
enum {
diff --git a/arch/x86/kernel/cpu/zhaoxin.c b/arch/x86/kernel/cpu/zhaoxin.c
index 8e6f2f4b4afe..452fd0a6bc61 100644
--- a/arch/x86/kernel/cpu/zhaoxin.c
+++ b/arch/x86/kernel/cpu/zhaoxin.c
@@ -58,8 +58,6 @@ static void init_zhaoxin_cap(struct cpuinfo_x86 *c)
if (c->x86 >= 0x6)
set_cpu_cap(c, X86_FEATURE_REP_GOOD);
-
- cpu_detect_cache_sizes(c);
}
static void early_init_zhaoxin(struct cpuinfo_x86 *c)
--
2.20.1
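The commit message's claim about CPUID(80000006).ECX is easy to check from user space. A minimal sketch, assuming the documented layout where ECX[31:16] reports the L2 size in KB; nothing here is kernel code.

/* Read L2 size via the extended leaf the commit message mentions. */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
        unsigned int eax, ebx, ecx, edx;

        if (__get_cpuid(0x80000006, &eax, &ebx, &ecx, &edx))
                printf("L2 via CPUID(0x80000006): %u KB\n", ecx >> 16);
        else
                printf("leaf 0x80000006 not available\n");
        return 0;
}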
[PATCH kernel-4.19 1/6] x86/cpu: Create Zhaoxin processors architecture support file
by LeoLiu-oc 25 Mar '21
mainline inclusion
from mainline-5.2
commit 761fdd5e3327db6c646a09bab5ad48cd42680cd2
category: x86/cpu
--------------------------------
Add x86 architecture support for new Zhaoxin processors.
Carve out initialization code needed by Zhaoxin processors into
a separate compilation unit.
To identify Zhaoxin CPU, add a new vendor type X86_VENDOR_ZHAOXIN
for system recognition.
Signed-off-by: Tony W Wang-oc <TonyWWang-oc(a)zhaoxin.com>
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Cc: "hpa(a)zytor.com" <hpa(a)zytor.com>
Cc: "gregkh(a)linuxfoundation.org" <gregkh(a)linuxfoundation.org>
Cc: "rjw(a)rjwysocki.net" <rjw(a)rjwysocki.net>
Cc: "lenb(a)kernel.org" <lenb(a)kernel.org>
Cc: David Wang <DavidWang(a)zhaoxin.com>
Cc: "Cooper Yan(BJ-RD)" <CooperYan(a)zhaoxin.com>
Cc: "Qiyuan Wang(BJ-RD)" <QiyuanWang(a)zhaoxin.com>
Cc: "Herry Yang(BJ-RD)" <HerryYang(a)zhaoxin.com>
Link:
https://lkml.kernel.org/r/01042674b2f741b2aed1f797359bdffb@zhaoxin.com
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
MAINTAINERS | 6 ++
arch/x86/Kconfig.cpu | 13 +++
arch/x86/include/asm/processor.h | 3 +-
arch/x86/kernel/cpu/Makefile | 1 +
arch/x86/kernel/cpu/zhaoxin.c | 167 +++++++++++++++++++++++++++++++
5 files changed, 189 insertions(+), 1 deletion(-)
create mode 100644 arch/x86/kernel/cpu/zhaoxin.c
diff --git a/MAINTAINERS b/MAINTAINERS
index ada8fbdd1d71..210fdd54b496 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -16265,6 +16265,12 @@ Q: https://patchwork.linuxtv.org/project/linux-media/list/
S: Maintained
F: drivers/media/dvb-frontends/zd1301_demod*
+ZHAOXIN PROCESSOR SUPPORT
+M: Tony W Wang-oc <TonyWWang-oc(a)zhaoxin.com>
+L: linux-kernel(a)vger.kernel.org
+S: Maintained
+F: arch/x86/kernel/cpu/zhaoxin.c
+
ZPOOL COMPRESSED PAGE STORAGE API
M: Dan Streetman <ddstreet(a)ieee.org>
L: linux-mm(a)kvack.org
diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
index 76e274a0fd0a..d1a51794c587 100644
--- a/arch/x86/Kconfig.cpu
+++ b/arch/x86/Kconfig.cpu
@@ -480,3 +480,16 @@ config CPU_SUP_UMC_32
CPU might render the kernel unbootable.
If unsure, say N.
+
+config CPU_SUP_ZHAOXIN
+ default y
+ bool "Support Zhaoxin processors" if PROCESSOR_SELECT
+ help
+ This enables detection, tunings and quirks for Zhaoxin processors
+
+ You need this enabled if you want your kernel to run on a
+ Zhaoxin CPU. Disabling this option on other types of CPUs
+ makes the kernel a tiny bit smaller. Disabling it on a Zhaoxin
+ CPU might render the kernel unbootable.
+
+ If unsure, say N.
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index af99d4137db9..e5b9308c312f 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -156,7 +156,8 @@ enum cpuid_regs_idx {
#define X86_VENDOR_TRANSMETA 7
#define X86_VENDOR_NSC 8
#define X86_VENDOR_HYGON 9
-#define X86_VENDOR_NUM 10
+#define X86_VENDOR_ZHAOXIN 10
+#define X86_VENDOR_NUM 11
#define X86_VENDOR_UNKNOWN 0xff
diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile
index e46d718ba4cc..69bba2b1ef08 100644
--- a/arch/x86/kernel/cpu/Makefile
+++ b/arch/x86/kernel/cpu/Makefile
@@ -35,6 +35,7 @@ obj-$(CONFIG_CPU_SUP_CYRIX_32) += cyrix.o
obj-$(CONFIG_CPU_SUP_CENTAUR) += centaur.o
obj-$(CONFIG_CPU_SUP_TRANSMETA_32) += transmeta.o
obj-$(CONFIG_CPU_SUP_UMC_32) += umc.o
+obj-$(CONFIG_CPU_SUP_ZHAOXIN) += zhaoxin.o
obj-$(CONFIG_INTEL_RDT) += intel_rdt.o intel_rdt_rdtgroup.o intel_rdt_monitor.o
obj-$(CONFIG_INTEL_RDT) += intel_rdt_ctrlmondata.o intel_rdt_pseudo_lock.o
diff --git a/arch/x86/kernel/cpu/zhaoxin.c b/arch/x86/kernel/cpu/zhaoxin.c
new file mode 100644
index 000000000000..8e6f2f4b4afe
--- /dev/null
+++ b/arch/x86/kernel/cpu/zhaoxin.c
@@ -0,0 +1,167 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/sched.h>
+#include <linux/sched/clock.h>
+
+#include <asm/cpufeature.h>
+
+#include "cpu.h"
+
+#define MSR_ZHAOXIN_FCR57 0x00001257
+
+#define ACE_PRESENT (1 << 6)
+#define ACE_ENABLED (1 << 7)
+#define ACE_FCR (1 << 7) /* MSR_ZHAOXIN_FCR */
+
+#define RNG_PRESENT (1 << 2)
+#define RNG_ENABLED (1 << 3)
+#define RNG_ENABLE (1 << 8) /* MSR_ZHAOXIN_RNG */
+
+#define X86_VMX_FEATURE_PROC_CTLS_TPR_SHADOW 0x00200000
+#define X86_VMX_FEATURE_PROC_CTLS_VNMI 0x00400000
+#define X86_VMX_FEATURE_PROC_CTLS_2ND_CTLS 0x80000000
+#define X86_VMX_FEATURE_PROC_CTLS2_VIRT_APIC 0x00000001
+#define X86_VMX_FEATURE_PROC_CTLS2_EPT 0x00000002
+#define X86_VMX_FEATURE_PROC_CTLS2_VPID 0x00000020
+
+static void init_zhaoxin_cap(struct cpuinfo_x86 *c)
+{
+ u32 lo, hi;
+
+ /* Test for Extended Feature Flags presence */
+ if (cpuid_eax(0xC0000000) >= 0xC0000001) {
+ u32 tmp = cpuid_edx(0xC0000001);
+
+ /* Enable ACE unit, if present and disabled */
+ if ((tmp & (ACE_PRESENT | ACE_ENABLED)) == ACE_PRESENT) {
+ rdmsr(MSR_ZHAOXIN_FCR57, lo, hi);
+ /* Enable ACE unit */
+ lo |= ACE_FCR;
+ wrmsr(MSR_ZHAOXIN_FCR57, lo, hi);
+ pr_info("CPU: Enabled ACE h/w crypto\n");
+ }
+
+ /* Enable RNG unit, if present and disabled */
+ if ((tmp & (RNG_PRESENT | RNG_ENABLED)) == RNG_PRESENT) {
+ rdmsr(MSR_ZHAOXIN_FCR57, lo, hi);
+ /* Enable RNG unit */
+ lo |= RNG_ENABLE;
+ wrmsr(MSR_ZHAOXIN_FCR57, lo, hi);
+ pr_info("CPU: Enabled h/w RNG\n");
+ }
+
+ /*
+ * Store Extended Feature Flags as word 5 of the CPU
+ * capability bit array
+ */
+ c->x86_capability[CPUID_C000_0001_EDX] = cpuid_edx(0xC0000001);
+ }
+
+ if (c->x86 >= 0x6)
+ set_cpu_cap(c, X86_FEATURE_REP_GOOD);
+
+ cpu_detect_cache_sizes(c);
+}
+
+static void early_init_zhaoxin(struct cpuinfo_x86 *c)
+{
+ if (c->x86 >= 0x6)
+ set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
+#ifdef CONFIG_X86_64
+ set_cpu_cap(c, X86_FEATURE_SYSENTER32);
+#endif
+ if (c->x86_power & (1 << 8)) {
+ set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
+ set_cpu_cap(c, X86_FEATURE_NONSTOP_TSC);
+ }
+
+ if (c->cpuid_level >= 0x00000001) {
+ u32 eax, ebx, ecx, edx;
+
+ cpuid(0x00000001, &eax, &ebx, &ecx, &edx);
+ /*
+ * If HTT (EDX[28]) is set EBX[16:23] contain the number of
+ * apicids which are reserved per package. Store the resulting
+ * shift value for the package management code.
+ */
+ if (edx & (1U << 28))
+ c->x86_coreid_bits = get_count_order((ebx >> 16) & 0xff);
+ }
+
+}
+
+static void zhaoxin_detect_vmx_virtcap(struct cpuinfo_x86 *c)
+{
+ u32 vmx_msr_low, vmx_msr_high, msr_ctl, msr_ctl2;
+
+ rdmsr(MSR_IA32_VMX_PROCBASED_CTLS, vmx_msr_low, vmx_msr_high);
+ msr_ctl = vmx_msr_high | vmx_msr_low;
+
+ if (msr_ctl & X86_VMX_FEATURE_PROC_CTLS_TPR_SHADOW)
+ set_cpu_cap(c, X86_FEATURE_TPR_SHADOW);
+ if (msr_ctl & X86_VMX_FEATURE_PROC_CTLS_VNMI)
+ set_cpu_cap(c, X86_FEATURE_VNMI);
+ if (msr_ctl & X86_VMX_FEATURE_PROC_CTLS_2ND_CTLS) {
+ rdmsr(MSR_IA32_VMX_PROCBASED_CTLS2,
+ vmx_msr_low, vmx_msr_high);
+ msr_ctl2 = vmx_msr_high | vmx_msr_low;
+ if ((msr_ctl2 & X86_VMX_FEATURE_PROC_CTLS2_VIRT_APIC) &&
+ (msr_ctl & X86_VMX_FEATURE_PROC_CTLS_TPR_SHADOW))
+ set_cpu_cap(c, X86_FEATURE_FLEXPRIORITY);
+ if (msr_ctl2 & X86_VMX_FEATURE_PROC_CTLS2_EPT)
+ set_cpu_cap(c, X86_FEATURE_EPT);
+ if (msr_ctl2 & X86_VMX_FEATURE_PROC_CTLS2_VPID)
+ set_cpu_cap(c, X86_FEATURE_VPID);
+ }
+}
+
+static void init_zhaoxin(struct cpuinfo_x86 *c)
+{
+ early_init_zhaoxin(c);
+ init_intel_cacheinfo(c);
+ detect_num_cpu_cores(c);
+#ifdef CONFIG_X86_32
+ detect_ht(c);
+#endif
+
+ if (c->cpuid_level > 9) {
+ unsigned int eax = cpuid_eax(10);
+
+ /*
+ * Check for version and the number of counters
+ * Version(eax[7:0]) can't be 0;
+ * Counters(eax[15:8]) should be greater than 1;
+ */
+ if ((eax & 0xff) && (((eax >> 8) & 0xff) > 1))
+ set_cpu_cap(c, X86_FEATURE_ARCH_PERFMON);
+ }
+
+ if (c->x86 >= 0x6)
+ init_zhaoxin_cap(c);
+#ifdef CONFIG_X86_64
+ set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC);
+#endif
+
+ if (cpu_has(c, X86_FEATURE_VMX))
+ zhaoxin_detect_vmx_virtcap(c);
+}
+
+#ifdef CONFIG_X86_32
+static unsigned int
+zhaoxin_size_cache(struct cpuinfo_x86 *c, unsigned int size)
+{
+ return size;
+}
+#endif
+
+static const struct cpu_dev zhaoxin_cpu_dev = {
+ .c_vendor = "zhaoxin",
+ .c_ident = { " Shanghai " },
+ .c_early_init = early_init_zhaoxin,
+ .c_init = init_zhaoxin,
+#ifdef CONFIG_X86_32
+ .legacy_cache_size = zhaoxin_size_cache,
+#endif
+ .c_x86_vendor = X86_VENDOR_ZHAOXIN,
+};
+
+cpu_dev_register(zhaoxin_cpu_dev);
--
2.20.1
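The .c_ident entry above is matched against the 12-byte CPUID(0) vendor string, which is spread across EBX/EDX/ECX. A small user-space probe follows, under the assumption that Zhaoxin parts report a vendor string containing " Shanghai " while Centaur parts report "CentaurHauls"; this is not the kernel's matcher.

/* Assemble and inspect the CPUID(0) vendor string; illustrative. */
#include <stdio.h>
#include <string.h>
#include <cpuid.h>

int main(void)
{
        unsigned int eax, ebx, ecx, edx;
        char vendor[13];

        __get_cpuid(0, &eax, &ebx, &ecx, &edx);
        memcpy(vendor, &ebx, 4);
        memcpy(vendor + 4, &edx, 4);
        memcpy(vendor + 8, &ecx, 4);
        vendor[12] = '\0';

        printf("vendor \"%s\" -> %s\n", vendor,
               strstr(vendor, " Shanghai ") ? "Zhaoxin" : "not Zhaoxin");
        return 0;
}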
Alan Stern (1):
usb-storage: Add quirk to defeat Kindle's automatic unload
Alexander Shiyan (1):
ASoC: fsl_ssi: Fix TDM slot setup for I2S mode
Arnaldo Carvalho de Melo (3):
tools build feature: Check if get_current_dir_name() is available
tools build feature: Check if eventfd() is available
tools build: Check if gettid() is available before providing helper
Christophe Leroy (1):
powerpc: Force inlining of cpu_has_feature() to avoid build failure
Colin Ian King (1):
usbip: Fix incorrect double assignment to udc->ud.tcp_rx
Dan Carpenter (2):
scsi: lpfc: Fix some error codes in debugfs
iio: adis16400: Fix an error code in adis16400_initial_setup()
Daniel Kobras (1):
sunrpc: fix refcount leak for rpc auth modules
David Sterba (1):
btrfs: fix slab cache flags for free space tree bitmap
Dinghao Liu (1):
iio: gyro: mpu3050: Fix error handling in mpu3050_trigger_handler
Filipe Manana (1):
btrfs: fix race when cloning extent buffer during rewind of an old
root
Greg Kroah-Hartman (1):
Linux 4.19.183
Hui Wang (1):
ALSA: hda: generic: Fix the micmute led init state
Jim Lin (1):
usb: gadget: configfs: Fix KASAN use-after-free
Jiri Olsa (1):
perf tools: Use %define api.pure full instead of %pure-parser
Joe Korty (1):
NFSD: Repair misuse of sv_lock in 5.10.16-rt30.
Johan Hovold (1):
x86/apic/of: Fix CPU devicetree-node lookups
Jonathan Albrieux (1):
iio:adc:qcom-spmi-vadc: add default scale to LR_MUX2_BAT_ID channel
Jonathan Cameron (1):
iio:adc:stm32-adc: Add HAS_IOMEM dependency
Kan Liang (1):
perf/x86/intel: Fix a crash caused by zero PEBS status
Macpaul Lin (1):
USB: replace hardcode maximum usb string length by definition
Nicolas Boichat (2):
vmlinux.lds.h: Create section for protection against instrumentation
lkdtm: don't move ctors to .rodata
Oleg Nesterov (3):
kernel, fs: Introduce and use set_restart_fn() and
arch_set_restart_data()
x86: Move TS_COMPAT back to asm/thread_info.h
x86: Introduce TS_COMPAT_RESTART to fix get_nr_restart_syscall()
Pavel Skripkin (1):
net/qrtr: fix __netdev_alloc_skb call
Rafael J. Wysocki (1):
Revert "PM: runtime: Update device status before letting suppliers
suspend"
Sagi Grimberg (2):
nvmet: don't check iosqes,iocqes for discovery controllers
nvme-rdma: fix possible hang when failing to set io queues
Shengjiu Wang (2):
ASoC: ak4458: Add MODULE_DEVICE_TABLE
ASoC: ak5558: Add MODULE_DEVICE_TABLE
Shijie Luo (1):
ext4: fix potential error in ext4_do_update_inode
Thomas Gleixner (2):
x86/ioapic: Ignore IRQ2 again
genirq: Disable interrupts for force threaded handlers
Timo Rothenpieler (1):
svcrdma: disable timeouts on rdma backchannel
Tyrel Datwyler (1):
PCI: rpadlpar: Fix potential drc_name corruption in store functions
Vincent Whitchurch (1):
cifs: Fix preauth hash corruption
Ye Xiang (3):
iio: hid-sensor-humidity: Fix alignment issue of timestamp channel
iio: hid-sensor-prox: Fix scale not correct issue
iio: hid-sensor-temperature: Fix issues of timestamp channel
zhangyi (F) (1):
ext4: do not try to set xattr into ea_inode if value is empty
Makefile | 2 +-
arch/powerpc/include/asm/cpu_has_feature.h | 4 +-
arch/powerpc/kernel/vmlinux.lds.S | 1 +
arch/x86/events/intel/ds.c | 2 +-
arch/x86/include/asm/processor.h | 9 ---
arch/x86/include/asm/thread_info.h | 23 ++++++-
arch/x86/kernel/apic/apic.c | 5 ++
arch/x86/kernel/apic/io_apic.c | 10 +++
arch/x86/kernel/signal.c | 24 +------
drivers/base/power/runtime.c | 62 ++++++++-----------
drivers/iio/adc/Kconfig | 1 +
drivers/iio/adc/qcom-spmi-vadc.c | 2 +-
drivers/iio/gyro/mpu3050-core.c | 2 +
drivers/iio/humidity/hid-sensor-humidity.c | 12 ++--
drivers/iio/imu/adis16400_core.c | 3 +-
drivers/iio/light/hid-sensor-prox.c | 13 +++-
.../iio/temperature/hid-sensor-temperature.c | 14 +++--
drivers/misc/lkdtm/Makefile | 2 +-
drivers/misc/lkdtm/rodata.c | 2 +-
drivers/nvme/host/rdma.c | 7 ++-
drivers/nvme/target/core.c | 17 ++++-
drivers/pci/hotplug/rpadlpar_sysfs.c | 14 ++---
drivers/scsi/lpfc/lpfc_debugfs.c | 4 +-
drivers/usb/gadget/composite.c | 4 +-
drivers/usb/gadget/configfs.c | 16 +++--
drivers/usb/gadget/usbstring.c | 4 +-
drivers/usb/storage/transport.c | 7 +++
drivers/usb/storage/unusual_devs.h | 12 ++++
drivers/usb/usbip/vudc_sysfs.c | 2 +-
fs/btrfs/ctree.c | 2 +
fs/btrfs/inode.c | 2 +-
fs/cifs/transport.c | 7 ++-
fs/ext4/inode.c | 8 +--
fs/ext4/xattr.c | 2 +-
fs/select.c | 10 ++-
include/asm-generic/sections.h | 3 +
include/asm-generic/vmlinux.lds.h | 10 +++
include/linux/compiler.h | 54 ++++++++++++++++
include/linux/compiler_types.h | 6 ++
include/linux/thread_info.h | 13 ++++
include/linux/usb_usual.h | 2 +
include/uapi/linux/usb/ch9.h | 3 +
kernel/futex.c | 3 +-
kernel/irq/manage.c | 4 ++
kernel/time/alarmtimer.c | 2 +-
kernel/time/hrtimer.c | 2 +-
kernel/time/posix-cpu-timers.c | 2 +-
net/qrtr/qrtr.c | 2 +-
net/sunrpc/svc.c | 6 +-
net/sunrpc/svc_xprt.c | 4 +-
net/sunrpc/xprtrdma/svc_rdma_backchannel.c | 6 +-
scripts/mod/modpost.c | 2 +-
sound/pci/hda/hda_generic.c | 2 +-
sound/soc/codecs/ak4458.c | 1 +
sound/soc/codecs/ak5558.c | 1 +
sound/soc/fsl/fsl_ssi.c | 6 +-
tools/build/Makefile.feature | 3 +
tools/build/feature/Makefile | 12 ++++
tools/build/feature/test-all.c | 15 +++++
tools/build/feature/test-eventfd.c | 9 +++
.../build/feature/test-get_current_dir_name.c | 10 +++
tools/build/feature/test-gettid.c | 11 ++++
tools/perf/Makefile.config | 12 ++++
tools/perf/jvmti/jvmti_agent.c | 2 +
tools/perf/util/Build | 1 +
tools/perf/util/expr.y | 3 +-
tools/perf/util/get_current_dir_name.c | 18 ++++++
tools/perf/util/parse-events.y | 2 +-
tools/perf/util/util.h | 4 ++
69 files changed, 398 insertions(+), 149 deletions(-)
create mode 100644 tools/build/feature/test-eventfd.c
create mode 100644 tools/build/feature/test-get_current_dir_name.c
create mode 100644 tools/build/feature/test-gettid.c
create mode 100644 tools/perf/util/get_current_dir_name.c
--
2.25.1
DENG Qingfang (1):
net: dsa: tag_mtk: fix 802.1ad VLAN egress
Florian Fainelli (1):
net: dsa: b53: Support setting learning on port
Greg Kroah-Hartman (1):
Linux 4.19.182
Piotr Krysiuk (4):
bpf: Prohibit alu ops for pointer types not defining ptr_limit
bpf: Fix off-by-one for area size in creating mask to left
bpf: Simplify alu_limit masking for pointer arithmetic
bpf: Add sanity check for upper ptr_limit
Suzuki K Poulose (1):
KVM: arm64: nvhe: Save the SPE context early
Makefile | 2 +-
arch/arm64/include/asm/kvm_hyp.h | 3 +++
arch/arm64/kvm/hyp/debug-sr.c | 24 ++++++++++++++---------
arch/arm64/kvm/hyp/switch.c | 4 +++-
drivers/net/dsa/b53/b53_common.c | 19 ++++++++++++++++++
drivers/net/dsa/b53/b53_regs.h | 1 +
drivers/net/dsa/bcm_sf2.c | 5 -----
kernel/bpf/verifier.c | 33 ++++++++++++++++++++------------
net/dsa/tag_mtk.c | 19 ++++++++++++------
9 files changed, 76 insertions(+), 34 deletions(-)
--
2.25.1
Adrian Hunter (1):
mmc: core: Fix partition switch time for eMMC
Aleksandr Miloserdov (2):
scsi: target: core: Add cmd length set before cmd complete
scsi: target: core: Prevent underflow for service actions
Andreas Larsson (1):
sparc32: Limit memblock allocation to low memory
Anna-Maria Behnsen (1):
hrtimer: Update softirq_expires_next correctly after
__hrtimer_get_next_event()
Arnd Bergmann (1):
stop_machine: mark helpers __always_inline
Artem Lapkin (1):
drm: meson_drv add shutdown function
Athira Rajeev (1):
powerpc/perf: Record counter overflow always if SAMPLE_IP is unset
Balazs Nemeth (2):
net: check if protocol extracted by virtio_net_hdr_set_proto is
correct
net: avoid infinite loop in mpls_gso_segment when mpls_hlen == 0
Biju Das (2):
media: v4l: vsp1: Fix uif null pointer access
media: v4l: vsp1: Fix bru null pointer access
Boyang Yu (1):
hwmon: (lm90) Fix max6658 sporadic wrong temperature reading
Chaotian Jing (1):
mmc: mediatek: fix race condition between msdc_request_timeout and irq
Christophe JAILLET (1):
mmc: mxs-mmc: Fix a resource leak in an error handling path in
'mxs_mmc_probe()'
Daiyue Zhang (1):
configfs: fix a use-after-free in __configfs_open_file
Dan Carpenter (6):
USB: gadget: u_ether: Fix a configfs return code
staging: rtl8192u: fix ->ssid overflow in r8192_wx_set_scan()
staging: rtl8188eu: prevent ->ssid overflow in rtw_wx_set_scan()
staging: rtl8712: unterminated string leads to read overflow
staging: rtl8188eu: fix potential memory corruption in
rtw_check_beacon_data()
staging: ks7010: prevent buffer overflow in ks_wlan_set_scan()
Daniel Borkmann (1):
net: Fix gro aggregation for udp encaps with zero csum
Daniel Vetter (1):
drm/compat: Clear bounce structures
Daniele Palmas (1):
net: usb: qmi_wwan: allow qmimux add/del with master up
Danielle Ratson (1):
selftests: forwarding: Fix race condition in mirror installation
Dmitry V. Levin (1):
uapi: nfnetlink_cthelper.h: fix userspace compilation error
Eric Dumazet (3):
tcp: annotate tp->copied_seq lockless reads
tcp: annotate tp->write_seq lockless reads
tcp: add sanity tests to TCP_QUEUE_SEQ
Eric Farman (1):
s390/cio: return -EFAULT if copy_to_user() fails
Eric W. Biederman (1):
Revert 95ebabde382c ("capabilities: Don't allow writing ambiguous v3
file capabilities")
Felix Fietkau (1):
ath9k: fix transmitting to stations in dynamic SMPS mode
Forest Crossman (1):
usb: xhci: Fix ASMedia ASM1042A and ASM3242 DMA addressing
Frank Li (1):
mmc: cqhci: Fix random crash when remove mmc module/card
Geert Uytterhoeven (1):
PCI: Fix pci_register_io_range() memory leak
Greg Kroah-Hartman (1):
Linux 4.19.181
Guangbin Huang (1):
net: phy: fix save wrong speed and duplex problem if autoneg is on
Heiko Carstens (1):
s390/smp: __smp_rescan_cpus() - move cpumask away from stack
Ian Abbott (9):
staging: comedi: addi_apci_1032: Fix endian problem for COS sample
staging: comedi: addi_apci_1500: Fix endian problem for command sample
staging: comedi: adv_pci1710: Fix endian problem for AI command data
staging: comedi: das6402: Fix endian problem for AI command data
staging: comedi: das800: Fix endian problem for AI command data
staging: comedi: dmm32at: Fix endian problem for AI command data
staging: comedi: me4000: Fix endian problem for AI command data
staging: comedi: pcl711: Fix endian problem for AI command data
staging: comedi: pcl818: Fix endian problem for AI command data
Ian Rogers (1):
perf traceevent: Ensure read cmdlines are null terminated.
Jakub Kicinski (1):
ethernet: alx: fix order of calls on resume
Jia-Ju Bai (2):
net: qrtr: fix error return code of qrtr_sendmsg()
block: rsxx: fix error return code of rsxx_pci_probe()
Joakim Zhang (4):
can: flexcan: assert FRZ bit in flexcan_chip_freeze()
can: flexcan: enable RX FIFO after FRZ/HALT valid
net: stmmac: stop each tx channel independently
net: stmmac: fix watchdog timeout during suspend/resume stress test
Joe Lawrence (1):
scripts/recordmcount.{c,pl}: support -ffunction-sections .text.*
section names
John Ernberg (1):
ALSA: usb: Add Plantronics C320-M USB ctrl msg delay quirk
Josh Poimboeuf (1):
x86/unwind/orc: Disable KASAN checking in the ORC unwinder, part 2
Juergen Gross (3):
xen/events: reset affinity of 2-level event when tearing it down
xen/events: don't unmask an event channel when an eoi is pending
xen/events: avoid handling the same event on two cpus at the same time
Karan Singhal (1):
USB: serial: cp210x: add ID for Acuity Brands nLight Air Adapter
Keita Suzuki (1):
i40e: Fix memory leak in i40e_probe
Kevin(Yudong) Yang (1):
net/mlx4_en: update moderation when config reset
Khalid Aziz (1):
sparc64: Use arch_validate_flags() to validate ADI flag
Krzysztof Wilczyński (1):
PCI: mediatek: Add missing of_node_put() to fix reference leak
Lee Gibson (2):
staging: rtl8712: Fix possible buffer overflow in r8712_sitesurvey_cmd
staging: rtl8192e: Fix possible buffer overflow in _rtl92e_wx_set_scan
Linus Torvalds (1):
Revert "mm, slub: consider rest of partial list if acquire_slab()
fails"
Lior Ribak (1):
binfmt_misc: fix possible deadlock in bm_register_write
Lorenzo Bianconi (1):
mt76: dma: do not report truncated frames to mac80211
Marc Zyngier (1):
KVM: arm64: Fix exclusive limit for IPA size
Martin Kaiser (1):
PCI: xgene-msi: Fix race in installing chained irq handler
Mathias Nyman (1):
xhci: Improve detection of device initiated wake signal.
Matthew Wilcox (Oracle) (1):
include/linux/sched/mm.h: use rcu_dereference in in_vfork()
Matthias Kaehlcke (1):
usb: dwc3: qcom: Honor wakeup enabled/disabled state
Maxim Mikityanskiy (2):
net: Introduce parse_protocol header_ops callback
media: usbtv: Fix deadlock on suspend
Maximilian Heyne (1):
net: sched: avoid duplicates in classes dump
Mike Christie (1):
scsi: libiscsi: Fix iscsi_prep_scsi_cmd_pdu() error handling
Naveen N. Rao (1):
powerpc/64s: Fix instruction encoding for lis in ppc_function_entry()
Nicholas Piggin (1):
powerpc: improve handling of unrecoverable system reset
Niv Sardi (1):
USB: serial: ch341: add new Product ID
Oleksij Rempel (1):
can: skb: can_skb_set_owner(): fix ref counting if socket was closed
before setting skb ownership
Oliver O'Halloran (1):
powerpc/pci: Add ppc_md.discover_phbs()
Ondrej Mosnacek (1):
NFSv4.2: fix return value of _nfs4_get_security_label()
Ong Boon Leong (1):
net: stmmac: fix incorrect DMA channel intr enable setting of EQoS
v4.10
Paul Cercueil (2):
net: davicom: Fix regulator not turned off on failed probe
net: davicom: Fix regulator not turned off on driver removal
Paul Moore (1):
cipso,calipso: resolve a number of problems with the DOI refcounts
Paulo Alcantara (1):
cifs: return proper error code in statfs(2)
Pavel Skripkin (1):
USB: serial: io_edgeport: fix memory leak in edge_startup
Pete Zaitcev (1):
USB: usblp: fix a hang in poll() if disconnected
Ruslan Bilovol (2):
usb: gadget: f_uac2: always increase endpoint max_packet_size by one
audio slot
usb: gadget: f_uac1: stop playback on function disable
Sebastian Reichel (1):
USB: serial: cp210x: add some more GE USB IDs
Sergey Shtylyov (3):
sh_eth: fix TRSCER mask for SH771x
sh_eth: fix TRSCER mask for R7S9210
sh_eth: fix TRSCER mask for R7S72100
Shuah Khan (6):
usbip: fix stub_dev to check for stream socket
usbip: fix vhci_hcd to check for stream socket
usbip: fix vudc to check for stream socket
usbip: fix stub_dev usbip_sockfd_store() races leading to gpf
usbip: fix vhci_hcd attach_store() races leading to gpf
usbip: fix vudc usbip_sockfd_store races leading to gpf
Stefan Haberland (2):
s390/dasd: fix hanging DASD driver unbind
s390/dasd: fix hanging IO request during DASD driver unbind
Steven J. Magnani (1):
udf: fix silent AED tagLocation corruption
Takashi Iwai (5):
ALSA: hda/hdmi: Cancel pending works before suspend
ALSA: hda: Drop the BATCH workaround for AMD controllers
ALSA: hda: Avoid spurious unsol event handling during S3/S4
ALSA: usb-audio: Fix "cannot get freq eq" errors on Dell AE515 sound
bar
ALSA: usb-audio: Apply the control quirk to Plantronics headsets
Vasily Averin (1):
netfilter: x_tables: gpf inside xt_find_revision()
Wang Qing (1):
s390/cio: return -EFAULT if copy_to_user() fails again
Wolfram Sang (1):
i2c: rcar: optimize cacheline to minimize HW race condition
Xie He (1):
net: lapbether: Remove netif_start_queue / netif_stop_queue
Yorick de Wid (1):
Goodix Fingerprint device is not a modem
Yoshihiro Shimoda (1):
usb: renesas_usbhs: Clear PIPECFG for re-enabling pipe with other
EPNUM
Makefile | 2 +-
arch/powerpc/include/asm/code-patching.h | 2 +-
arch/powerpc/include/asm/machdep.h | 3 +
arch/powerpc/kernel/pci-common.c | 10 ++
arch/powerpc/kernel/traps.c | 5 +-
arch/powerpc/perf/core-book3s.c | 19 ++-
arch/s390/kernel/smp.c | 2 +-
arch/sparc/include/asm/mman.h | 54 +++----
arch/sparc/mm/init_32.c | 3 +
arch/x86/kernel/unwind_orc.c | 12 +-
drivers/block/rsxx/core.c | 1 +
drivers/gpu/drm/drm_ioc32.c | 11 ++
drivers/gpu/drm/meson/meson_drv.c | 11 ++
drivers/hwmon/lm90.c | 42 +++++-
drivers/i2c/busses/i2c-rcar.c | 2 +-
drivers/media/platform/vsp1/vsp1_drm.c | 6 +-
drivers/media/usb/usbtv/usbtv-audio.c | 2 +-
drivers/mmc/core/bus.c | 11 +-
drivers/mmc/core/mmc.c | 15 +-
drivers/mmc/host/mtk-sd.c | 18 +--
drivers/mmc/host/mxs-mmc.c | 2 +-
drivers/net/can/flexcan.c | 12 +-
drivers/net/ethernet/atheros/alx/main.c | 7 +-
drivers/net/ethernet/davicom/dm9000.c | 21 ++-
drivers/net/ethernet/intel/i40e/i40e_main.c | 2 +
.../net/ethernet/mellanox/mlx4/en_ethtool.c | 2 +-
.../net/ethernet/mellanox/mlx4/en_netdev.c | 2 +
drivers/net/ethernet/mellanox/mlx4/mlx4_en.h | 1 +
drivers/net/ethernet/renesas/sh_eth.c | 7 +
.../net/ethernet/stmicro/stmmac/dwmac4_dma.c | 19 ++-
.../net/ethernet/stmicro/stmmac/dwmac4_lib.c | 4 -
.../net/ethernet/stmicro/stmmac/stmmac_main.c | 2 +
drivers/net/phy/phy.c | 7 +-
drivers/net/usb/qmi_wwan.c | 14 --
drivers/net/wan/lapbether.c | 3 -
drivers/net/wireless/ath/ath9k/ath9k.h | 3 +-
drivers/net/wireless/ath/ath9k/xmit.c | 6 +
drivers/net/wireless/mediatek/mt76/dma.c | 11 +-
drivers/pci/controller/pci-xgene-msi.c | 10 +-
drivers/pci/controller/pcie-mediatek.c | 7 +-
drivers/pci/pci.c | 4 +
drivers/s390/block/dasd.c | 6 +-
drivers/s390/cio/vfio_ccw_ops.c | 6 +-
drivers/scsi/libiscsi.c | 11 +-
.../staging/comedi/drivers/addi_apci_1032.c | 4 +-
.../staging/comedi/drivers/addi_apci_1500.c | 18 +--
drivers/staging/comedi/drivers/adv_pci1710.c | 10 +-
drivers/staging/comedi/drivers/das6402.c | 2 +-
drivers/staging/comedi/drivers/das800.c | 2 +-
drivers/staging/comedi/drivers/dmm32at.c | 2 +-
drivers/staging/comedi/drivers/me4000.c | 2 +-
drivers/staging/comedi/drivers/pcl711.c | 2 +-
drivers/staging/comedi/drivers/pcl818.c | 2 +-
drivers/staging/ks7010/ks_wlan_net.c | 6 +-
drivers/staging/rtl8188eu/core/rtw_ap.c | 5 +
.../staging/rtl8188eu/os_dep/ioctl_linux.c | 6 +-
drivers/staging/rtl8192e/rtl8192e/rtl_wx.c | 7 +-
drivers/staging/rtl8192u/r8192U_wx.c | 6 +-
drivers/staging/rtl8712/rtl871x_cmd.c | 6 +-
drivers/staging/rtl8712/rtl871x_ioctl_linux.c | 2 +-
drivers/target/target_core_pr.c | 15 +-
drivers/target/target_core_transport.c | 15 +-
drivers/usb/class/cdc-acm.c | 5 +
drivers/usb/class/usblp.c | 16 ++-
drivers/usb/dwc3/dwc3-qcom.c | 7 +-
drivers/usb/gadget/function/f_uac1.c | 1 +
drivers/usb/gadget/function/f_uac2.c | 2 +-
.../usb/gadget/function/u_ether_configfs.h | 5 +-
drivers/usb/host/xhci-pci.c | 8 +-
drivers/usb/host/xhci.c | 16 ++-
drivers/usb/renesas_usbhs/pipe.c | 2 +
drivers/usb/serial/ch341.c | 1 +
drivers/usb/serial/cp210x.c | 3 +
drivers/usb/serial/io_edgeport.c | 26 ++--
drivers/usb/usbip/stub_dev.c | 42 +++++-
drivers/usb/usbip/vhci_sysfs.c | 39 +++++-
drivers/usb/usbip/vudc_sysfs.c | 50 ++++++-
drivers/xen/events/events_2l.c | 22 ++-
drivers/xen/events/events_base.c | 132 +++++++++++++-----
drivers/xen/events/events_fifo.c | 7 -
drivers/xen/events/events_internal.h | 22 ++-
fs/binfmt_misc.c | 29 ++--
fs/cifs/cifsfs.c | 2 +-
fs/configfs/file.c | 6 +-
fs/nfs/nfs4proc.c | 2 +-
fs/udf/inode.c | 9 +-
include/linux/can/skb.h | 8 +-
include/linux/netdevice.h | 10 ++
include/linux/sched/mm.h | 3 +-
include/linux/stop_machine.h | 11 +-
include/linux/virtio_net.h | 7 +-
include/net/tcp.h | 2 +-
include/target/target_core_backend.h | 1 +
.../uapi/linux/netfilter/nfnetlink_cthelper.h | 2 +-
kernel/time/hrtimer.c | 60 +++++---
lib/logic_pio.c | 3 +
mm/slub.c | 2 +-
net/ipv4/cipso_ipv4.c | 11 +-
net/ipv4/tcp.c | 59 ++++----
net/ipv4/tcp_diag.c | 5 +-
net/ipv4/tcp_input.c | 6 +-
net/ipv4/tcp_ipv4.c | 23 +--
net/ipv4/tcp_minisocks.c | 4 +-
net/ipv4/tcp_output.c | 6 +-
net/ipv4/udp_offload.c | 2 +-
net/ipv6/calipso.c | 14 +-
net/ipv6/tcp_ipv6.c | 15 +-
net/mpls/mpls_gso.c | 3 +
net/netfilter/x_tables.c | 6 +-
net/netlabel/netlabel_cipso_v4.c | 3 +
net/qrtr/qrtr.c | 4 +-
net/sched/sch_api.c | 8 +-
scripts/recordmcount.c | 2 +-
scripts/recordmcount.pl | 13 ++
security/commoncap.c | 12 +-
sound/pci/hda/hda_bind.c | 4 +
sound/pci/hda/hda_controller.c | 7 -
sound/pci/hda/patch_hdmi.c | 13 ++
sound/usb/quirks.c | 9 ++
tools/perf/util/trace-event-read.c | 1 +
.../forwarding/mirror_gre_bridge_1d_vlan.sh | 9 ++
virt/kvm/arm/mmu.c | 2 +-
122 files changed, 890 insertions(+), 426 deletions(-)
--
2.25.1
For this series,
Acked-by: Xie XiuQi <xiexiuqi(a)huawei.com>
@Leizhen,
Please help apply these two patches to the kernel-4.19 & openEuler-1.0-LTS branches.
Thanks.
On 2021/3/9 19:13, zhenpengzheng(a)net-swift.com wrote:
> To Mr. Xie,
> All comments containing "huawei", "intel", "HiNIC" or other wording unrelated to this submission have been cleaned from the submitted files.
>
> To Mr. Lei,
> The blank lines at the end of all submitted files have been removed.
>
> Thanks for everyone's feedback; the attachment contains the latest patches with these changes.
>
> Zhenpeng
>
> ----------------------------------------------------------------
>
> ****************************************************************
> Zheng Zhenpeng
> Beijing WangXun Technology Co., Ltd., Hangzhou Branch. Software Engineer.
> Room A507, HuaXing Times Square, No.478 West Wensan Road.
> West Lake District, Hangzhou City, 310013 ZHEJIANG, P.R.CHINA.
> Office: +86(0571)89807901-8014
> Mobile: +86-13656681762
> E-Mail: zhenpengzheng(a)net-swift.com
> ****************************************************************
>
>
> *From:* Leizhen (ThunderTown) <mailto:thunder.leizhen@huawei.com>
> *Sent:* 2021-03-09 17:59
> *To:* zhenpengzheng(a)net-swift.com <mailto:zhenpengzheng@net-swift.com>; Xie XiuQi <mailto:xiexiuqi@huawei.com>
> *Cc:* liuyuan36 <mailto:liuyuan36@huawei.com>; Cheng Jian <mailto:cj.chengjian@huawei.com>; Libin (Huawei) <mailto:huawei.libin@huawei.com>; Yang Yingliang <mailto:yangyingliang@huawei.com>; Dukaitian (Dukaitian, Intelligent Computing R&D) <mailto:dukaitian@huawei.com>; neil.yao(a)huawei.com <mailto:neil.yao@huawei.com>
> *Subject:* Re: Fwd: Request to merge the WangXun 10GbE NIC driver into the openeuler-4.19 kernel
>
>
> On 2021/3/9 16:38, zhenpengzheng(a)net-swift.com wrote:
> > Mr. Xie, Mr. Lei,
> > Hello. Attached are the patches reworked according to the community's requirements. The first, larger patch is the main driver code without the openeuler_config (x86) change; the second patch contains only the openeuler_config (x86) change. The arm config has not been modified yet; I will finish testing and send a patch enabling the arm config as soon as possible.
> >
> > Regarding issue 1 in the community feedback: I verified that one of the reported violations does exist in the patch (the one pointing at line 90 of the patch); I checked the other reported spots against the code and found nothing abnormal, and the code itself should not end with blank lines. I am not sure how to further confirm that the patch is compliant; please advise by mail, thanks.
> drivers/net/ethernet/netswift/txgbe/txgbe_bp.h
> drivers/net/ethernet/netswift/txgbe/txgbe_ethtool.c
> drivers/net/ethernet/netswift/txgbe/txgbe_hw.c
> drivers/net/ethernet/netswift/txgbe/txgbe_lib.c
> drivers/net/ethernet/netswift/txgbe/txgbe_main.c
> drivers/net/ethernet/netswift/txgbe/txgbe_mtd.c
> drivers/net/ethernet/netswift/txgbe/txgbe_mtd.h
>
> This warning means the file ends with blank lines. Just open each file and press Shift+G to jump to the end and check.
>
> >
> > Zhenpeng
> >
> >
> >
> > *From:* Xie XiuQi <mailto:xiexiuqi@huawei.com>
> > *Sent:* 2021-03-08 16:29
> > *To:* zhenpengzheng(a)net-swift.com <mailto:zhenpengzheng@net-swift.com>; Leizhen (ThunderTown) <mailto:thunder.leizhen@huawei.com>
> > *Cc:* liuyuan36 <mailto:liuyuan36@huawei.com>; Cheng Jian <mailto:cj.chengjian@huawei.com>; Libin (Huawei) <mailto:huawei.libin@huawei.com>; Yang Yingliang <mailto:yangyingliang@huawei.com>; Dukaitian (Dukaitian, Intelligent Computing R&D) <mailto:dukaitian@huawei.com>; neil.yao(a)huawei.com <mailto:neil.yao@huawei.com>
> > *Subject:* Re: Fwd: Request to merge the WangXun 10GbE NIC driver into the openeuler-4.19 kernel
> > Hi,
> >
> > On 2021/3/8 11:12, zhenpengzheng(a)net-swift.com wrote:
> > > It is applicable to arm64, but I have only tested this driver on x86 machines. I will make the changes first and arrange the ARM adaptation testing shortly.
> >
> > OK. Since it has been tested on x86, enable it on x86 first.
> > It can be enabled for ARM64 once the ARM64 testing is done.
> >
> > >
> > >
> >
> > >
> > >
> > > *From:* Xie XiuQi <mailto:xiexiuqi@huawei.com>
> > > *Sent:* 2021-03-08 10:30
> > > *To:* zhenpengzheng(a)net-swift.com <mailto:zhenpengzheng@net-swift.com>; Leizhen (ThunderTown) <mailto:thunder.leizhen@huawei.com>
> > > *Cc:* liuyuan36 <mailto:liuyuan36@huawei.com>; Cheng Jian <mailto:cj.chengjian@huawei.com>; Libin (Huawei) <mailto:huawei.libin@huawei.com>; Yang Yingliang <mailto:yangyingliang@huawei.com>; Dukaitian (Dukaitian, Intelligent Computing R&D) <mailto:dukaitian@huawei.com>; neil.yao(a)huawei.com <mailto:neil.yao@huawei.com>
> > > *Subject:* Re: Fwd: Request to merge the WangXun 10GbE NIC driver into the openeuler-4.19 kernel
> > > Hi,
> > >
> > > The WangXun NIC driver works on arm64 as well, right?
> > > If so, please enable the arm64 config too.
> > >
> > > arch/arm64/configs/openeuler_defconfig
> > >
> > >
> > >
> > > On 2021/3/8 9:42, zhenpengzheng(a)net-swift.com wrote:
> > > > OK. For issue 2, I will split it into two patches and resend. Thanks.
> > > >
> > > >
> > >
> >
> > > >
> > > >
> > > > *From:* Leizhen (ThunderTown) <mailto:thunder.leizhen@huawei.com>
> > > > *Sent:* 2021-03-06 15:05
> > > > *To:* Xie XiuQi <mailto:xiexiuqi@huawei.com>; Zheng Zhenpeng <mailto:zhenpengzheng@net-swift.com>
> > > > *Cc:* Liuyuan (Compatibility, Cloud Infrastructure Service Product Dept.) <mailto:liuyuan36@huawei.com>; Cheng Jian <mailto:cj.chengjian@huawei.com>; Libin (Huawei) <mailto:huawei.libin@huawei.com>; Yang Yingliang <mailto:yangyingliang@huawei.com>; Dukaitian (Dukaitian, Intelligent Computing R&D) <mailto:dukaitian@huawei.com>; neil.yao(a)huawei.com <mailto:neil.yao@huawei.com>
> > > > *Subject:* Re: Fwd: Request to merge the WangXun 10GbE NIC driver into the openeuler-4.19 kernel
> > > > Hi Zhenpeng,
> > > > I reviewed the patch; a few things need improvement:
> > > >
> > > > 1. git am reports several warnings when applying the patch, which need to be eliminated:
> > > > git am 0001-add-WangXun-XGIG-NIC-driver-for-EulerOS.patch
> > > > Applying: add WangXun XGIG NIC driver for EulerOS
> > > > .git/rebase-apply/patch:90: new blank line at EOF.
> > > > +
> > > > .git/rebase-apply/patch:2331: new blank line at EOF.
> > > > +
> > > > .git/rebase-apply/patch:5755: new blank line at EOF.
> > > > +
> > > > .git/rebase-apply/patch:12891: new blank line at EOF.
> > > > +
> > > > .git/rebase-apply/patch:14134: new blank line at EOF.
> > > > +
> > > > warning: squelched 3 whitespace errors
> > > > warning: 8 lines add whitespace errors.
> > > > 2. The changes to arch/x86/configs/openeuler_defconfig should preferably be split out into a separate patch.
> > > > 3. txgbe_bp.c is missing a copyright notice.
> > > > 4. The remaining code disabled with #if 0 or commented out with // should preferably be cleaned up (a quick locator is sketched below).
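> > > >
> > > > For item 4, a rough locator (a sketch; it will also flag legitimate
> > > > // comments, so the hits need a manual look):
> > > >
> > > > grep -rn --include='*.[ch]' -e '#if 0' -e '^[[:space:]]*//' drivers/net/ethernet/netswift/txgbe/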
> > > >
> > > >
> > > >
> > > > On 2021/3/1 16:32, Xie XiuQi wrote:
> > > > > Hi Zhenpeng,
> > > > >
> > > > > Thanks for your patch, we'll review this patch, and give a feedback soon.
> > > > >
> > > > > ---
> > > > > Thanks,
> > > > > Xie XiuQi
> > > > >
> > > > >
> > > > > -------- Forwarded Message --------
> > > > > Subject: Request to merge the WangXun 10GbE NIC driver into the openeuler-4.19 kernel
> > > > > Date: Mon, 1 Mar 2021 15:41:09 +0800
> > > > > From: zhenpengzheng(a)net-swift.com <zhenpengzheng(a)net-swift.com>
> > > > > To: xiexiuqi <xiexiuqi(a)huawei.com>, liuyuan36 <liuyuan36(a)huawei.com>
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > Hi Xie, Liu,
> > > > >
> > > > > The patch is ready; see the attachment. It has been run through checkpatch as the community requires, the reported errors have been eliminated, and the commit message has been updated.
> > > > >
> > > > > Zhenpeng
> > > > >
> > > > >
> > > >
> > >
> >
> > > > >
> > > >
> > >
> >
>
Alexander Lobakin (1):
net: dsa: add GRO support via gro_cells
Andrey Ryabinin (1):
iommu/amd: Fix sleeping in atomic in increase_address_space()
AngeloGioacchino Del Regno (1):
drm/msm/a5xx: Remove overwriting A5XX_PC_DBG_ECO_CNTL register
Antonio Borneo (1):
usbip: tools: fix build error for multiple definition
Aswath Govindraju (1):
misc: eeprom_93xx46: Add quirk to support Microchip 93LC46B eeprom
Bjorn Helgaas (1):
PCI: Add function 1 DMA alias quirk for Marvell 9215 SATA controller
Chris Chiu (1):
ASoC: Intel: bytcr_rt5640: Add quirk for ARCHOS Cesium 140
Colin Ian King (1):
ALSA: ctxfi: cthw20k2: fix mask on conf to allow 4 bits
Dan Carpenter (2):
btrfs: validate qgroup inherit for SNAP_CREATE_V2 ioctl
rsxx: Return -EFAULT if copy_to_user() fails
Daniel Lee Kruse (1):
media: cx23885: add more quirks for reset DMA on some AMD IOMMU
David Sterba (1):
btrfs: raid56: simplify tracking of Q stripe presence
Ethan Warth (1):
HID: mf: add support for 0079:1846 Mayflash/Dragonrise USB Gamecube
Adapter
Greg Kroah-Hartman (1):
Linux 4.19.180
Hannes Reinecke (5):
block: genhd: add 'groups' argument to device_add_disk
nvme: register ns_id attributes as default sysfs groups
aoe: register default groups with device_add_disk()
zram: register default groups with device_add_disk()
virtio-blk: modernize sysfs attribute creation
Hans de Goede (6):
platform/x86: acer-wmi: Cleanup ACER_CAP_FOO defines
platform/x86: acer-wmi: Cleanup accelerometer device handling
platform/x86: acer-wmi: Add new force_caps module parameter
platform/x86: acer-wmi: Add ACER_CAP_SET_FUNCTION_MODE capability flag
platform/x86: acer-wmi: Add support for SW_TABLET_MODE on Switch
devices
platform/x86: acer-wmi: Add ACER_CAP_KBD_DOCK quirk for the Aspire
Switch 10E SW3-016
Heiner Kallweit (1):
r8169: fix resuming from suspend on RTL8105e if machine runs on
battery
Ira Weiny (1):
btrfs: fix raid6 qstripe kmap
Jeffle Xu (4):
Revert "zram: close udev startup race condition as default groups"
dm table: fix iterate_devices based device capability checks
dm table: fix DAX iterate_devices based device capability checks
dm table: fix zoned iterate_devices based device capability checks
Jisheng Zhang (1):
mmc: sdhci-of-dwcmshc: set SDHCI_QUIRK2_PRESET_VALUE_BROKEN
Julian Braha (1):
RDMA/rxe: Fix missing kconfig dependency on CRYPTO
Kevin Wang (1):
drm/amdgpu: fix parameter error of RREG32_PCIE() in amdgpu_regs_pcie
Mikulas Patocka (1):
dm bufio: subtract the number of initial sectors in
dm_bufio_get_device_size
Milan Broz (1):
dm verity: fix FEC for RS roots unaligned to block size
Nikolay Borisov (2):
btrfs: free correct amount of space in
btrfs_delayed_inode_reserve_metadata
btrfs: unlock extents in btrfs_zero_range in case of quota reservation
errors
Rafael J. Wysocki (1):
PM: runtime: Update device status before letting suppliers suspend
Tsuchiya Yuto (1):
mwifiex: pcie: skip cancel_work_sync() on reset failure path
Yang Yingliang (2):
Revert "virtio-blk: modernize sysfs attribute creation"
Revert "nvme: register ns_id attributes as default sysfs groups"
Makefile | 2 +-
arch/um/drivers/ubd_kern.c | 2 +-
block/genhd.c | 19 ++-
drivers/base/power/runtime.c | 62 ++++---
drivers/block/aoe/aoe.h | 1 -
drivers/block/aoe/aoeblk.c | 21 +--
drivers/block/aoe/aoedev.c | 1 -
drivers/block/floppy.c | 2 +-
drivers/block/mtip32xx/mtip32xx.c | 2 +-
drivers/block/ps3disk.c | 2 +-
drivers/block/ps3vram.c | 2 +-
drivers/block/rsxx/core.c | 8 +-
drivers/block/rsxx/dev.c | 2 +-
drivers/block/skd_main.c | 2 +-
drivers/block/sunvdc.c | 2 +-
drivers/block/virtio_blk.c | 3 +-
drivers/block/xen-blkfront.c | 2 +-
drivers/block/zram/zram_drv.c | 4 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 4 +-
drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 2 -
drivers/hid/hid-ids.h | 1 +
drivers/hid/hid-mf.c | 2 +
drivers/hid/hid-quirks.c | 2 +
drivers/ide/ide-cd.c | 2 +-
drivers/ide/ide-gd.c | 2 +-
drivers/infiniband/sw/rxe/Kconfig | 1 +
drivers/iommu/amd_iommu.c | 10 +-
drivers/md/dm-bufio.c | 4 +
drivers/md/dm-table.c | 174 ++++++++------------
drivers/md/dm-verity-fec.c | 23 +--
drivers/media/pci/cx23885/cx23885-core.c | 4 +
drivers/memstick/core/ms_block.c | 2 +-
drivers/memstick/core/mspro_block.c | 2 +-
drivers/misc/eeprom/eeprom_93xx46.c | 15 ++
drivers/mmc/core/block.c | 2 +-
drivers/mmc/host/sdhci-of-dwcmshc.c | 1 +
drivers/mtd/mtd_blkdevs.c | 2 +-
drivers/net/ethernet/realtek/r8169.c | 2 +
drivers/net/wireless/marvell/mwifiex/pcie.c | 18 +-
drivers/net/wireless/marvell/mwifiex/pcie.h | 2 +
drivers/nvdimm/blk.c | 2 +-
drivers/nvdimm/btt.c | 2 +-
drivers/nvdimm/pmem.c | 2 +-
drivers/nvme/host/core.c | 3 +-
drivers/nvme/host/multipath.c | 8 +-
drivers/pci/quirks.c | 3 +
drivers/platform/x86/acer-wmi.c | 169 +++++++++++++++----
drivers/s390/block/dasd_genhd.c | 2 +-
drivers/s390/block/dcssblk.c | 2 +-
drivers/s390/block/scm_blk.c | 2 +-
drivers/scsi/sd.c | 2 +-
drivers/scsi/sr.c | 2 +-
fs/btrfs/delayed-inode.c | 2 +-
fs/btrfs/file.c | 5 +-
fs/btrfs/ioctl.c | 19 ++-
fs/btrfs/raid56.c | 58 +++----
include/linux/eeprom_93xx46.h | 2 +
include/linux/genhd.h | 5 +-
net/dsa/Kconfig | 1 +
net/dsa/dsa.c | 2 +-
net/dsa/dsa_priv.h | 3 +
net/dsa/slave.c | 10 +-
sound/pci/ctxfi/cthw20k2.c | 2 +-
sound/soc/intel/boards/bytcr_rt5640.c | 12 ++
tools/usb/usbip/libsrc/usbip_host_common.c | 2 +-
65 files changed, 462 insertions(+), 276 deletions(-)
--
2.25.1
[PATCH kernel-4.19 1/4] ext4: Fix BUG_ON in ext4_es_cache_extent as ext4_split_extent_at failed
by Yang Yingliang 25 Mar '21
From: Ye Bin <yebin10(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: 50785
CVE: NA
-----------------------------------------------
We hit the following BUG_ON:
[130747.323114] kernel BUG at fs/ext4/extents_status.c:762!
[130747.323117] Internal error: Oops - BUG: 0 [#1] SMP
......
[130747.334329] Call trace:
[130747.334553] ext4_es_cache_extent+0x150/0x168 [ext4]
[130747.334975] ext4_cache_extents+0x64/0xe8 [ext4]
[130747.335368] ext4_find_extent+0x300/0x330 [ext4]
[130747.335759] ext4_ext_map_blocks+0x74/0x1178 [ext4]
[130747.336179] ext4_map_blocks+0x2f4/0x5f0 [ext4]
[130747.336567] ext4_mpage_readpages+0x4a8/0x7a8 [ext4]
[130747.336995] ext4_readpage+0x54/0x100 [ext4]
[130747.337359] generic_file_buffered_read+0x410/0xae8
[130747.337767] generic_file_read_iter+0x114/0x190
[130747.338152] ext4_file_read_iter+0x5c/0x140 [ext4]
[130747.338556] __vfs_read+0x11c/0x188
[130747.338851] vfs_read+0x94/0x150
[130747.339110] ksys_read+0x74/0xf0
If the call to ext4_ext_insert_extent() fails but the new extent has already
been inserted, we only update "ex->ee_len = orig_ex.ee_len"; this leaves
overlapping extents, which then triggers the BUG_ON when the extent is cached.
Signed-off-by: Ye Bin <yebin10(a)huawei.com>
Reviewed-by: zhangyi (F) <yi.zhang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/ext4/extents.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
index ebf024258e3c2..3bc2cb4cc5cc5 100644
--- a/fs/ext4/extents.c
+++ b/fs/ext4/extents.c
@@ -3331,7 +3331,7 @@ static int ext4_split_extent_at(handle_t *handle,
goto out;
} else if (err)
- goto fix_extent_len;
+ goto err;
out:
ext4_ext_show_leaf(inode, path);
@@ -3339,6 +3339,7 @@ static int ext4_split_extent_at(handle_t *handle,
fix_extent_len:
ex->ee_len = orig_ex.ee_len;
+err:
ext4_ext_dirty(handle, inode, path + path->p_depth);
return err;
}
--
2.25.1
[PATCH openEuler-21.03] arm64: amend config position of park and pmem
by sangyan@huawei.com 24 Mar '21
From: Sang Yan <sangyan(a)huawei.com>
hulk inclusion
category: feature
bugzilla: 48159
CVE: N/A
------------------------------
Move the ARM64_CPU_PARK config entry into the "Kernel Features"
menu, next to CRASH_DUMP.
Move the ARM64_PMEM_RESERVE and ARM64_PMEM_LEGACY_DEVICE config
entries into the "Kernel Features" menu as well.
Signed-off-by: Sang Yan <sangyan(a)huawei.com>
---
arch/arm64/Kconfig | 66 +++++++++++++++++-----------------
arch/arm64/configs/openeuler_defconfig | 6 ++--
2 files changed, 36 insertions(+), 36 deletions(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 326f26d..72c7ce3 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -345,42 +345,9 @@ config KASAN_SHADOW_OFFSET
default 0xeffffff900000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
default 0xffffffffffffffff
-config ARM64_CPU_PARK
- bool "Support CPU PARK on kexec"
- depends on SMP
- depends on KEXEC_CORE
- help
- This enables support for CPU PARK feature in
- order to save time of cpu down to up.
- CPU park is a state through kexec, spin loop
- instead of cpu die before jumping to new kernel,
- jumping out from loop to new kernel entry in
- smp_init.
-
config ARCH_HAS_CPU_RELAX
def_bool y
-config ARM64_PMEM_RESERVE
- bool "Reserve memory for persistent storage"
- default n
- help
- Use memmap=nn[KMG]!ss[KMG](memmap=100K!0x1a0000000) reserve
- memory for persistent storage.
-
- Say y here to enable this feature.
-
-config ARM64_PMEM_LEGACY_DEVICE
- bool "Create persistent storage"
- depends on BLK_DEV
- depends on LIBNVDIMM
- select ARM64_PMEM_RESERVE
- help
- Use reserved memory for persistent storage when the kernel
- restart or update. the data in PMEM will not be lost and
- can be loaded faster.
-
- Say y if unsure.
-
source "arch/arm64/Kconfig.platforms"
menu "Kernel Features"
@@ -1196,6 +1163,18 @@ config CRASH_DUMP
For more details see Documentation/admin-guide/kdump/kdump.rst
+config ARM64_CPU_PARK
+ bool "Support CPU PARK on kexec"
+ depends on SMP
+ depends on KEXEC_CORE
+ help
+ This enables support for CPU PARK feature in
+ order to save time of cpu down to up.
+ CPU park is a state through kexec, spin loop
+ instead of cpu die before jumping to new kernel,
+ jumping out from loop to new kernel entry in
+ smp_init.
+
config XEN_DOM0
def_bool y
depends on XEN
@@ -1257,6 +1236,27 @@ config RODATA_FULL_DEFAULT_ENABLED
This requires the linear region to be mapped down to pages,
which may adversely affect performance in some cases.
+config ARM64_PMEM_RESERVE
+ bool "Reserve memory for persistent storage"
+ default n
+ help
+ Use memmap=nn[KMG]!ss[KMG](memmap=100K!0x1a0000000) reserve
+ memory for persistent storage.
+
+ Say y here to enable this feature.
+
+config ARM64_PMEM_LEGACY_DEVICE
+ bool "Create persistent storage"
+ depends on BLK_DEV
+ depends on LIBNVDIMM
+ select ARM64_PMEM_RESERVE
+ help
+ Use reserved memory for persistent storage when the kernel
+ restart or update. the data in PMEM will not be lost and
+ can be loaded faster.
+
+ Say y if unsure.
+
config ARM64_SW_TTBR0_PAN
bool "Emulate Privileged Access Never using TTBR0_EL1 switching"
help
diff --git a/arch/arm64/configs/openeuler_defconfig b/arch/arm64/configs/openeuler_defconfig
index 21c5d11..1aedaf3 100644
--- a/arch/arm64/configs/openeuler_defconfig
+++ b/arch/arm64/configs/openeuler_defconfig
@@ -275,9 +275,6 @@ CONFIG_FIX_EARLYCON_MEM=y
CONFIG_PGTABLE_LEVELS=3
CONFIG_ARCH_SUPPORTS_UPROBES=y
CONFIG_ARCH_PROC_KCORE_TEXT=y
-CONFIG_ARM64_CPU_PARK=y
-CONFIG_ARM64_PMEM_RESERVE=y
-CONFIG_ARM64_PMEM_LEGACY_DEVICE=y
#
# Platform selection
@@ -407,10 +404,13 @@ CONFIG_PARAVIRT_TIME_ACCOUNTING=y
CONFIG_KEXEC=y
# CONFIG_KEXEC_FILE is not set
CONFIG_CRASH_DUMP=y
+CONFIG_ARM64_CPU_PARK=y
# CONFIG_XEN is not set
CONFIG_FORCE_MAX_ZONEORDER=14
CONFIG_UNMAP_KERNEL_AT_EL0=y
CONFIG_RODATA_FULL_DEFAULT_ENABLED=y
+CONFIG_ARM64_PMEM_RESERVE=y
+CONFIG_ARM64_PMEM_LEGACY_DEVICE=y
# CONFIG_ARM64_SW_TTBR0_PAN is not set
CONFIG_ARM64_TAGGED_ADDR_ABI=y
CONFIG_ARM64_ILP32=y
--
2.9.5
23 Mar '21
driver inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I3BNT6
CVE: NA
-----------------------------------------------
We can get a crash when disconnecting the iSCSI session;
the call trace looks like this:
[ffff00002a00fb70] kfree at ffff00000830e224
[ffff00002a00fba0] ses_intf_remove at ffff000001f200e4
[ffff00002a00fbd0] device_del at ffff0000086b6a98
[ffff00002a00fc50] device_unregister at ffff0000086b6d58
[ffff00002a00fc70] __scsi_remove_device at ffff00000870608c
[ffff00002a00fca0] scsi_remove_device at ffff000008706134
[ffff00002a00fcc0] __scsi_remove_target at ffff0000087062e4
[ffff00002a00fd10] scsi_remove_target at ffff0000087064c0
[ffff00002a00fd70] __iscsi_unbind_session at ffff000001c872c4
[ffff00002a00fdb0] process_one_work at ffff00000810f35c
[ffff00002a00fe00] worker_thread at ffff00000810f648
[ffff00002a00fe70] kthread at ffff000008116e98
In ses_intf_add(), the components count can be 0, in which case a
zero-size scomp is kcalloc()ed but never saved in edev->component[i].scratch.
In this situation, edev->component[0].scratch is an invalid pointer, and
kfree()ing it in ses_intf_remove_enclosure() causes a crash like the one above.
The call trace can also take other random forms when kfree() fails to catch
the invalid pointer.
We should not use the edev->component[] array when the components count is 0.
We also need to check the index when using the edev->component[] array in
ses_enclosure_data_process().
Another fix option would be to report an error and refuse to attach in
ses_intf_add() when we meet a zero-component enclosure.
Tested-by: Zeng Zhicong <timmyzeng(a)163.com>
Signed-off-by: Ding Hui <dinghui(a)sangfor.com.cn>
---
drivers/scsi/ses.c | 18 ++++++++++--------
1 file changed, 10 insertions(+), 8 deletions(-)
diff --git a/drivers/scsi/ses.c b/drivers/scsi/ses.c
index 0fc39224ce1e..63764fe0b34f 100644
--- a/drivers/scsi/ses.c
+++ b/drivers/scsi/ses.c
@@ -493,9 +493,6 @@ static int ses_enclosure_find_by_addr(struct enclosure_device *edev,
int i;
struct ses_component *scomp;
- if (!edev->component[0].scratch)
- return 0;
-
for (i = 0; i < edev->components; i++) {
scomp = edev->component[i].scratch;
if (scomp->addr != efd->addr)
@@ -581,8 +578,10 @@ static void ses_enclosure_data_process(struct enclosure_device *edev,
components++,
type_ptr[0],
name);
- else
+ else if (components < edev->components)
ecomp = &edev->component[components++];
+ else
+ ecomp = ERR_PTR(-EINVAL);
if (!IS_ERR(ecomp)) {
if (addl_desc_ptr)
@@ -747,9 +746,11 @@ static int ses_intf_add(struct device *cdev,
buf = NULL;
}
page2_not_supported:
- scomp = kcalloc(components, sizeof(struct ses_component), GFP_KERNEL);
- if (!scomp)
- goto err_free;
+ if (components > 0) {
+ scomp = kcalloc(components, sizeof(struct ses_component), GFP_KERNEL);
+ if (!scomp)
+ goto err_free;
+ }
edev = enclosure_register(cdev->parent, dev_name(&sdev->sdev_gendev),
components, &ses_enclosure_callbacks);
@@ -829,7 +830,8 @@ static void ses_intf_remove_enclosure(struct scsi_device *sdev)
kfree(ses_dev->page2);
kfree(ses_dev);
- kfree(edev->component[0].scratch);
+ if (edev->components > 0)
+ kfree(edev->component[0].scratch);
put_device(&edev->edev);
enclosure_unregister(edev);
--
2.17.1
22 Mar '21
bugfix for openEuler-20.03 @20210322
Guoqing Jiang (1):
md: add checkings before flush md_misc_wq
Jan Kara (14):
ext4: remove redundant sb checksum recomputation
ext4: standardize error message in ext4_protect_reserved_inode()
ext4: make ext4_abort() use __ext4_error()
ext4: move functions in super.c
ext4: simplify ext4 error translation
ext4: defer saving error info from atomic context
ext4: combine ext4_handle_error() and save_error_info()
ext4: drop sync argument of ext4_commit_super()
ext4: protect superblock modifications with a buffer lock
ext4: save error info to sb through journal if available
ext4: use sbi instead of EXT4_SB(sb) in ext4_update_super()
ext4: drop ext4_handle_dirty_super()
quota: Sanity-check quota file headers on load
quota: Fix memory leak when handling corrupted quota file
Jason Yan (3):
ext4: remove set but not used variable 'es'
ext4: remove set but not used variable 'es' in ext4_jbd2.c
scsi: check the whole result for reading write protect flag
Li Huafei (1):
perf/ftrace: Fix use-after-free in __ftrace_ops_list_func()
Mikulas Patocka (1):
dm: use noio when sending kobject event
Paolo Valente (5):
block, bfq: get extra ref to prevent a queue from being freed during a
group move
block, bfq: move forward the getting of an extra ref in bfq_bfqq_move
block, bfq: turn put_queue into release_process_ref in
__bfq_bic_change_cgroup
block, bfq: make reparent_leaf_entity actually work only on leaf
entities
block, bfq: invoke flush_idle_tree after reparent_active_queues in
pd_offline
Theodore Ts'o (4):
ext4: save the error code which triggered an ext4_error() in the
superblock
ext4: save all error info in save_error_info() and drop
ext4_set_errno()
ext4: don't try to processed freed blocks until mballoc is initialized
ext4: fix potential htree index checksum corruption
Ye Bin (2):
Revert "ext4: Protect superblock modifications with a buffer lock"
ext4: Fix bug on in ext4_es_cache_extent as ext4_split_extent_at
failed
Yu Kuai (1):
fs/xfs: fix time overflow
Zhang Ming (1):
arm64/mpam: fix a memleak in add_schema
arch/arm64/kernel/mpam/mpam_ctrlmon.c | 1 +
block/bfq-cgroup.c | 102 ++++--
drivers/md/dm.c | 15 +-
drivers/md/md.c | 6 +-
drivers/scsi/sd.c | 6 +-
fs/ext4/balloc.c | 6 +-
fs/ext4/block_validity.c | 12 +-
fs/ext4/ext4.h | 102 ++++--
fs/ext4/ext4_jbd2.c | 31 +-
fs/ext4/ext4_jbd2.h | 5 -
fs/ext4/extents.c | 29 +-
fs/ext4/file.c | 4 +-
fs/ext4/ialloc.c | 11 +-
fs/ext4/indirect.c | 2 +-
fs/ext4/inline.c | 11 +-
fs/ext4/inode.c | 32 +-
fs/ext4/mballoc.c | 17 +-
fs/ext4/mmp.c | 13 +-
fs/ext4/move_extent.c | 4 +-
fs/ext4/namei.c | 31 +-
fs/ext4/resize.c | 16 +-
fs/ext4/super.c | 446 ++++++++++++++++----------
fs/ext4/xattr.c | 16 +-
fs/quota/quota_v2.c | 24 ++
fs/xfs/libxfs/xfs_format.h | 12 +
fs/xfs/xfs_iops.c | 17 +-
include/scsi/scsi.h | 13 +
kernel/events/core.c | 2 +
28 files changed, 629 insertions(+), 357 deletions(-)
--
2.25.1
Yufen Yu (2):
Revert "scsi: megaraid_sas: Replace undefined MFI_BIG_ENDIAN macro
with __BIG_ENDIAN_BITFIELD macro"
Revert "scsi: megaraid_sas: Set no_write_same only for Virtual Disk"
drivers/scsi/megaraid/megaraid_sas.h | 4 ++--
drivers/scsi/megaraid/megaraid_sas_base.c | 5 +----
drivers/scsi/megaraid/megaraid_sas_fusion.h | 21 +++++----------------
3 files changed, 8 insertions(+), 22 deletions(-)
--
2.25.4
19 Mar '21
raspberrypi inclusion
category: feature
bugzilla: 50432
------------------------------
This patch adjusts the following Raspberry Pi fbdev patch for
non-Raspberry Pi platforms, using the dedicated config option
CONFIG_OPENEULER_RASPBERRYPI to distinguish between them:
29df1382f6 Speed up console framebuffer imageblit function
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
drivers/video/fbdev/core/cfbimgblt.c | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/drivers/video/fbdev/core/cfbimgblt.c b/drivers/video/fbdev/core/cfbimgblt.c
index 436494fba15a..fb0e9cc0a0ba 100644
--- a/drivers/video/fbdev/core/cfbimgblt.c
+++ b/drivers/video/fbdev/core/cfbimgblt.c
@@ -266,7 +266,8 @@ static inline void fast_imageblit(const struct fb_image *image, struct fb_info *
s += spitch;
}
}
-
+
+#ifdef CONFIG_OPENEULER_RASPBERRYPI
/*
* Optimized fast_imageblit for bpp == 16. ppw = 2, bit_mask = 3 folded
* into the code, main loop unrolled.
@@ -393,6 +394,7 @@ static inline void fast_imageblit32(const struct fb_image *image,
s += spitch;
}
}
+#endif
void cfb_imageblit(struct fb_info *p, const struct fb_image *image)
{
@@ -425,7 +427,8 @@ void cfb_imageblit(struct fb_info *p, const struct fb_image *image)
fgcolor = image->fg_color;
bgcolor = image->bg_color;
}
-
+
+#ifdef CONFIG_OPENEULER_RASPBERRYPI
if (!start_index && !pitch_index) {
if (bpp == 32)
fast_imageblit32(image, p, dst1, fgcolor,
@@ -441,6 +444,13 @@ void cfb_imageblit(struct fb_info *p, const struct fb_image *image)
bgcolor,
start_index, pitch_index);
} else
+#else
+ if (32 % bpp == 0 && !start_index && !pitch_index &&
+ ((width & (32/bpp-1)) == 0) &&
+ bpp >= 8 && bpp <= 32)
+ fast_imageblit(image, p, dst1, fgcolor, bgcolor);
+ else
+#endif
slow_imageblit(image, p, dst1, fgcolor, bgcolor,
start_index, pitch_index);
} else
--
2.20.1
[PATCH OLK-5.10] x86: config: disable CONFIG_BOOTPARAM_HOTPLUG_CPU0 by default
by Zheng Zengkai 18 Mar '21
hulk inclusion
category: config
bugzilla: 50784
CVE: NA
---------------------------
Disable CONFIG_BOOTPARAM_HOTPLUG_CPU0 for x86 openeuler_defconfig
by default.
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
arch/x86/configs/openeuler_defconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/configs/openeuler_defconfig b/arch/x86/configs/openeuler_defconfig
index c310e38fd6e7..addf175482e5 100644
--- a/arch/x86/configs/openeuler_defconfig
+++ b/arch/x86/configs/openeuler_defconfig
@@ -460,7 +460,7 @@ CONFIG_DYNAMIC_MEMORY_LAYOUT=y
CONFIG_RANDOMIZE_MEMORY=y
CONFIG_RANDOMIZE_MEMORY_PHYSICAL_PADDING=0xa
CONFIG_HOTPLUG_CPU=y
-CONFIG_BOOTPARAM_HOTPLUG_CPU0=y
+# CONFIG_BOOTPARAM_HOTPLUG_CPU0 is not set
# CONFIG_DEBUG_HOTPLUG_CPU0 is not set
# CONFIG_COMPAT_VDSO is not set
CONFIG_LEGACY_VSYSCALL_EMULATE=y
--
2.20.1
[PATCH kernel-4.19] moduleparam: Save information about built-in modules in separate file
by Zhichang Yuan 18 Mar '21
From: Alexey Gladkov <gladkov.alexey(a)gmail.com>
mainline inclusion
from mainline-v5.2-rc1
commit 898490c010b5d2e499e03b7e815fc214209ac583
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I390TB
CVE: NA
Backporting this patch fixes an issue that happens when the 5.10
kernel is built on an openEuler 20.03 system.
On openEuler 20.03, some modules were built into the kernel, but
in the 5.10 kernel those modules are built as KOs.
For built-in modules, kmod 2.7+ fetches the modinfo from
modules.builtin.modinfo, which is only supported by kernel 5.2+.
In the rpmbuild process, the kernel spec calls kmod to query the
modinfo of the KO images, and this fails with 'file missing'.
With the mainline commit below backported, kmod can fetch any module's
information from either the corresponding module image or
modules.builtin.modinfo.
---------------------------
commit 898490c010b5d2e499e03b7e815fc214209ac583 upstream.
Problem:
When a kernel module is compiled as a separate module, some important
information about the kernel module is available via .modinfo section of
the module. In contrast, when the kernel module is compiled into the
kernel, that information is not available.
Information about built-in modules is necessary in the following cases:
1. When it is necessary to find out what additional parameters can be
passed to the kernel at boot time.
2. When you need to know which module names and their aliases are in
the kernel. This is very useful for creating an initrd image.
Proposal:
The proposed patch does not remove .modinfo section with module
information from the vmlinux at the build time and saves it into a
separate file after kernel linking. So, the kernel does not increase in
size and no additional information remains in it. Information is stored
in the same format as in the separate modules (null-terminated string
array). Because the .modinfo section is already exported with a separate
modules, we are not creating a new API.
It can be easily read in the userspace:
$ tr '\0' '\n' < modules.builtin.modinfo
ext4.softdep=pre: crc32c
ext4.license=GPL
ext4.description=Fourth Extended Filesystem
ext4.author=Remy Card, Stephen Tweedie, Andrew Morton, Andreas Dilger, Theodore Ts'o and others
ext4.alias=fs-ext4
ext4.alias=ext3
ext4.alias=fs-ext3
ext4.alias=ext2
ext4.alias=fs-ext2
md_mod.alias=block-major-9-*
md_mod.alias=md
md_mod.description=MD RAID framework
md_mod.license=GPL
md_mod.parmtype=create_on_open:bool
md_mod.parmtype=start_dirty_degraded:int
...
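As a small sketch on top of this, the fields of one built-in module can be
filtered out (assuming the file was installed under /lib/modules by
make modules_install):
$ tr '\0' '\n' < /lib/modules/"$(uname -r)"/modules.builtin.modinfo | sed -n 's/^ext4\.//p'
which prints just the ext4 lines from the listing above (license=GPL,
alias=ext3, ...) with the module-name prefix stripped.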
Co-Developed-by: Gleb Fotengauer-Malinovskiy <glebfm(a)altlinux.org>
Signed-off-by: Gleb Fotengauer-Malinovskiy <glebfm(a)altlinux.org>
Signed-off-by: Alexey Gladkov <gladkov.alexey(a)gmail.com>
Acked-by: Jessica Yu <jeyu(a)kernel.org>
Signed-off-by: Masahiro Yamada <yamada.masahiro(a)socionext.com>
Signed-off-by: Zhichang Yuan <erik.yuan(a)arm.com>
---
.gitignore | 1 +
Documentation/dontdiff | 1 +
Documentation/kbuild/kbuild.txt | 5 +++++
Makefile | 2 ++
include/asm-generic/vmlinux.lds.h | 1 +
include/linux/module.h | 1 +
include/linux/moduleparam.h | 12 +++++-------
scripts/link-vmlinux.sh | 3 +++
8 files changed, 19 insertions(+), 7 deletions(-)
diff --git a/.gitignore b/.gitignore
index 97ba6b79834c..e18bc625d330 100644
--- a/.gitignore
+++ b/.gitignore
@@ -57,6 +57,7 @@ modules.builtin
/vmlinuz
/System.map
/Module.markers
+/modules.builtin.modinfo
#
# RPM spec file (make rpm-pkg)
diff --git a/Documentation/dontdiff b/Documentation/dontdiff
index 2228fcc8e29f..3d4d5a402b8b 100644
--- a/Documentation/dontdiff
+++ b/Documentation/dontdiff
@@ -179,6 +179,7 @@ mktables
mktree
modpost
modules.builtin
+modules.builtin.modinfo
modules.order
modversions.h*
nconf
diff --git a/Documentation/kbuild/kbuild.txt b/Documentation/kbuild/kbuild.txt
index 8390c360d4b3..7f48e48f3fd2 100644
--- a/Documentation/kbuild/kbuild.txt
+++ b/Documentation/kbuild/kbuild.txt
@@ -11,6 +11,11 @@ modules.builtin
This file lists all modules that are built into the kernel. This is used
by modprobe to not fail when trying to load something builtin.
+modules.builtin.modinfo
+--------------------------------------------------
+This file contains modinfo from all modules that are built into the kernel.
+Unlike modinfo of a separate module, all fields are prefixed with module name.
+
Environment variables
diff --git a/Makefile b/Makefile
index 8f326d0652a7..f19982d051d0 100644
--- a/Makefile
+++ b/Makefile
@@ -1294,6 +1294,7 @@ _modinst_:
fi
@cp -f $(objtree)/modules.order $(MODLIB)/
@cp -f $(objtree)/modules.builtin $(MODLIB)/
+ @cp -f $(objtree)/modules.builtin.modinfo $(MODLIB)/
$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.modinst
# This depmod is only for convenience to give the initial
@@ -1334,6 +1335,7 @@ endif # CONFIG_MODULES
# Directories & files removed with 'make clean'
CLEAN_DIRS += $(MODVERDIR) include/ksym
+CLEAN_FILES += modules.builtin.modinfo
# Directories & files removed with 'make mrproper'
MRPROPER_DIRS += include/config usr/include include/generated \
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index f65a924a75ab..701a1af4aa77 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -850,6 +850,7 @@
EXIT_CALL \
*(.discard) \
*(.discard.*) \
+ *(.modinfo) \
}
/**
diff --git a/include/linux/module.h b/include/linux/module.h
index 49942432f010..5056a346f69e 100644
--- a/include/linux/module.h
+++ b/include/linux/module.h
@@ -239,6 +239,7 @@ extern typeof(name) __mod_##type##__##name##_device_table \
#define MODULE_VERSION(_version) MODULE_INFO(version, _version)
#else
#define MODULE_VERSION(_version) \
+ MODULE_INFO(version, _version); \
static struct module_version_attribute ___modver_attr = { \
.mattr = { \
.attr = { \
diff --git a/include/linux/moduleparam.h b/include/linux/moduleparam.h
index ba36506db4fb..5ba250d9172a 100644
--- a/include/linux/moduleparam.h
+++ b/include/linux/moduleparam.h
@@ -10,23 +10,21 @@
module name. */
#ifdef MODULE
#define MODULE_PARAM_PREFIX /* empty */
+#define __MODULE_INFO_PREFIX /* empty */
#else
#define MODULE_PARAM_PREFIX KBUILD_MODNAME "."
+/* We cannot use MODULE_PARAM_PREFIX because some modules override it. */
+#define __MODULE_INFO_PREFIX KBUILD_MODNAME "."
#endif
/* Chosen so that structs with an unsigned long line up. */
#define MAX_PARAM_PREFIX_LEN (64 - sizeof(unsigned long))
-#ifdef MODULE
#define __MODULE_INFO(tag, name, info) \
static const char __UNIQUE_ID(name)[] \
__used __attribute__((section(".modinfo"), unused, aligned(1))) \
- = __stringify(tag) "=" info
-#else /* !MODULE */
-/* This struct is here for syntactic coherency, it is not used */
-#define __MODULE_INFO(tag, name, info) \
- struct __UNIQUE_ID(name) {}
-#endif
+ = __MODULE_INFO_PREFIX __stringify(tag) "=" info
+
#define __MODULE_PARM_TYPE(name, _type) \
__MODULE_INFO(parmtype, name##type, #name ":" _type)
diff --git a/scripts/link-vmlinux.sh b/scripts/link-vmlinux.sh
index c8cf45362bd6..c09e87e9c2b9 100755
--- a/scripts/link-vmlinux.sh
+++ b/scripts/link-vmlinux.sh
@@ -226,6 +226,9 @@ modpost_link vmlinux.o
# modpost vmlinux.o to check for section mismatches
${MAKE} -f "${srctree}/scripts/Makefile.modpost" vmlinux.o
+info MODINFO modules.builtin.modinfo
+${OBJCOPY} -j .modinfo -O binary vmlinux.o modules.builtin.modinfo
+
kallsymso=""
kallsyms_vmlinux=""
if [ -n "${CONFIG_KALLSYMS}" ]; then
--
2.23.0
[PATCH kernel-4.19] brcmfmac: Loading the correct firmware for brcm43456
by fangyafenqidai@163.com 18 Mar '21
From: Ondrej Jirman <megous(a)megous.com>
Commit e3062e05e1cfe378bb9b3fa0bef46711372bcf13 upstream.
SDIO based brcm43456 is currently misdetected as brcm43455 and the wrong
firmware name is used. Correct the detection and load the correct
firmware file. Chiprev for brcm43456 is "9".
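For reference, each rev mask in the driver's firmware table is a bitmap over
chip revisions, with bit N standing for rev N; a sketch of the selection this
patch relies on (assuming the driver simply tests the chiprev bit against the
mask):
chiprev=9
echo $(( (0x00000200 >> chiprev) & 1 ))   # 1: rev 9 now picks the 43456 entry
echo $(( (0xFFFFFDC0 >> chiprev) & 1 ))   # 0: rev 9 is cleared from the 43455 mask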
Signed-off-by: Ondrej Jirman <megous(a)megous.com>
Signed-off-by: Kalle Valo <kvalo(a)codeaurora.org>
Signed-off-by: Fang Yafen <yafen(a)iscas.ac.cn>
---
drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
index abaed2fa2def..18e9e52f8ee7 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
@@ -621,6 +621,7 @@ BRCMF_FW_DEF(43430A0, "brcmfmac43430a0-sdio");
/* Note the names are not postfixed with a1 for backward compatibility */
BRCMF_FW_DEF(43430A1, "brcmfmac43430-sdio");
BRCMF_FW_DEF(43455, "brcmfmac43455-sdio");
+BRCMF_FW_DEF(43456, "brcmfmac43456-sdio");
BRCMF_FW_DEF(4354, "brcmfmac4354-sdio");
BRCMF_FW_DEF(4356, "brcmfmac4356-sdio");
BRCMF_FW_DEF(4373, "brcmfmac4373-sdio");
@@ -640,7 +641,8 @@ static const struct brcmf_firmware_mapping brcmf_sdio_fwnames[] = {
BRCMF_FW_ENTRY(BRCM_CC_4339_CHIP_ID, 0xFFFFFFFF, 4339),
BRCMF_FW_ENTRY(BRCM_CC_43430_CHIP_ID, 0x00000001, 43430A0),
BRCMF_FW_ENTRY(BRCM_CC_43430_CHIP_ID, 0xFFFFFFFE, 43430A1),
- BRCMF_FW_ENTRY(BRCM_CC_4345_CHIP_ID, 0xFFFFFFC0, 43455),
+ BRCMF_FW_ENTRY(BRCM_CC_4345_CHIP_ID, 0x00000200, 43456),
+ BRCMF_FW_ENTRY(BRCM_CC_4345_CHIP_ID, 0xFFFFFDC0, 43455),
BRCMF_FW_ENTRY(BRCM_CC_4354_CHIP_ID, 0xFFFFFFFF, 4354),
BRCMF_FW_ENTRY(BRCM_CC_4356_CHIP_ID, 0xFFFFFFFF, 4356),
BRCMF_FW_ENTRY(CY_CC_4373_CHIP_ID, 0xFFFFFFFF, 4373)
--
2.27.0
[PATCH hulk-4.19-next] KEYS: Include pubring.gpg in system_certificates.o only if present
by Roberto Sassu 18 Mar '21
hulk inclusion
category: feature
feature: IMA digest lists
bugzilla: NA
https://gitee.com/openEuler/kernel/issues/I3916O
------------------------------------------------
This patch includes pubring.gpg in system_certificates.o only if it is
found in the certs directory of the source tree.
Signed-off-by: Roberto Sassu <roberto.sassu(a)huawei.com>
---
certs/Makefile | 13 +++++++------
certs/system_certificates.S | 2 +-
2 files changed, 8 insertions(+), 7 deletions(-)
diff --git a/certs/Makefile b/certs/Makefile
index 5053e3c86c97..766c5d003093 100644
--- a/certs/Makefile
+++ b/certs/Makefile
@@ -4,12 +4,6 @@
#
obj-$(CONFIG_SYSTEM_TRUSTED_KEYRING) += system_keyring.o system_certificates.o
-ifdef CONFIG_PGP_PRELOAD_PUBLIC_KEYS
-ifneq ($(shell ls certs/pubring.gpg 2> /dev/null), certs/pubring.gpg)
-$(shell touch certs/pubring.gpg)
-endif
-$(obj)/system_certificates.o: certs/pubring.gpg
-endif
obj-$(CONFIG_SYSTEM_BLACKLIST_KEYRING) += blacklist.o
ifneq ($(CONFIG_SYSTEM_BLACKLIST_HASH_LIST),"")
obj-$(CONFIG_SYSTEM_BLACKLIST_KEYRING) += blacklist_hashes.o
@@ -27,6 +21,13 @@ $(obj)/system_certificates.o: $(obj)/x509_certificate_list
# Cope with signing_key.x509 existing in $(srctree) not $(objtree)
AFLAGS_system_certificates.o := -I$(srctree)
+ifdef CONFIG_PGP_PRELOAD_PUBLIC_KEYS
+ifeq ($(shell ls $(srctree)/certs/pubring.gpg 2> /dev/null), $(srctree)/certs/pubring.gpg)
+AFLAGS_system_certificates.o += -DHAVE_PUBRING_GPG
+$(obj)/system_certificates.o: $(srctree)/certs/pubring.gpg
+endif
+endif
+
quiet_cmd_extract_certs = EXTRACT_CERTS $(patsubst "%",%,$(2))
cmd_extract_certs = scripts/extract-cert $(2) $@ || ( rm $@; exit 1)
diff --git a/certs/system_certificates.S b/certs/system_certificates.S
index bcb7c4b4cc36..e5f58711c38c 100644
--- a/certs/system_certificates.S
+++ b/certs/system_certificates.S
@@ -40,7 +40,7 @@ system_certificate_list_size:
.globl pgp_public_keys
pgp_public_keys:
__pgp_key_list_start:
-#ifdef CONFIG_PGP_PRELOAD_PUBLIC_KEYS
+#ifdef HAVE_PUBRING_GPG
.incbin "certs/pubring.gpg"
#endif
__pgp_key_list_end:
--
2.26.2
[PATCH kernel-4.19 01/52] moduleparam: Save information about built-in modules in separate file
by Yang Yingliang 18 Mar '21
From: Alexey Gladkov <gladkov.alexey(a)gmail.com>
mainline inclusion
from mainline-v5.2-rc1
commit 898490c010b5d2e499e03b7e815fc214209ac583
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I390TB
CVE: NA
Backporting this patch fixes an issue that happens when the 5.10
kernel is built on an openEuler 20.03 system.
On openEuler 20.03, some modules were built into the kernel, but
in the 5.10 kernel those modules are built as KOs.
For built-in modules, kmod 2.7+ fetches the modinfo from
modules.builtin.modinfo, which is only supported by kernel 5.2+.
In the rpmbuild process, the kernel spec calls kmod to query the
modinfo of the KO images, and this fails with 'file missing'.
With the mainline commit below backported, kmod can fetch any module's
information from either the corresponding module image or
modules.builtin.modinfo.
---------------------------
Problem:
When a kernel module is compiled as a separate module, some important
information about the kernel module is available via .modinfo section of
the module. In contrast, when the kernel module is compiled into the
kernel, that information is not available.
Information about built-in modules is necessary in the following cases:
1. When it is necessary to find out what additional parameters can be
passed to the kernel at boot time.
2. When you need to know which module names and their aliases are in
the kernel. This is very useful for creating an initrd image.
Proposal:
The proposed patch does not remove .modinfo section with module
information from the vmlinux at the build time and saves it into a
separate file after kernel linking. So, the kernel does not increase in
size and no additional information remains in it. Information is stored
in the same format as in the separate modules (null-terminated string
array). Because the .modinfo section is already exported with a separate
modules, we are not creating a new API.
It can be easily read in the userspace:
$ tr '\0' '\n' < modules.builtin.modinfo
ext4.softdep=pre: crc32c
ext4.license=GPL
ext4.description=Fourth Extended Filesystem
ext4.author=Remy Card, Stephen Tweedie, Andrew Morton, Andreas Dilger, Theodore Ts'o and others
ext4.alias=fs-ext4
ext4.alias=ext3
ext4.alias=fs-ext3
ext4.alias=ext2
ext4.alias=fs-ext2
md_mod.alias=block-major-9-*
md_mod.alias=md
md_mod.description=MD RAID framework
md_mod.license=GPL
md_mod.parmtype=create_on_open:bool
md_mod.parmtype=start_dirty_degraded:int
...
Co-Developed-by: Gleb Fotengauer-Malinovskiy <glebfm(a)altlinux.org>
Signed-off-by: Gleb Fotengauer-Malinovskiy <glebfm(a)altlinux.org>
Signed-off-by: Alexey Gladkov <gladkov.alexey(a)gmail.com>
Acked-by: Jessica Yu <jeyu(a)kernel.org>
Signed-off-by: Masahiro Yamada <yamada.masahiro(a)socionext.com>
Signed-off-by: Zhichang Yuan <erik.yuan(a)arm.com>
Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
.gitignore | 1 +
Documentation/dontdiff | 1 +
Documentation/kbuild/kbuild.txt | 5 +++++
Makefile | 2 ++
include/asm-generic/vmlinux.lds.h | 1 +
include/linux/module.h | 1 +
include/linux/moduleparam.h | 12 +++++-------
scripts/link-vmlinux.sh | 3 +++
8 files changed, 19 insertions(+), 7 deletions(-)
diff --git a/.gitignore b/.gitignore
index 2d498af502ff1..546cef8d9b8fe 100644
--- a/.gitignore
+++ b/.gitignore
@@ -57,6 +57,7 @@ modules.builtin
/vmlinuz
/System.map
/Module.markers
+/modules.builtin.modinfo
#
# RPM spec file (make rpm-pkg)
diff --git a/Documentation/dontdiff b/Documentation/dontdiff
index 2228fcc8e29f4..3d4d5a402b8be 100644
--- a/Documentation/dontdiff
+++ b/Documentation/dontdiff
@@ -179,6 +179,7 @@ mktables
mktree
modpost
modules.builtin
+modules.builtin.modinfo
modules.order
modversions.h*
nconf
diff --git a/Documentation/kbuild/kbuild.txt b/Documentation/kbuild/kbuild.txt
index 8390c360d4b35..7f48e48f3fd27 100644
--- a/Documentation/kbuild/kbuild.txt
+++ b/Documentation/kbuild/kbuild.txt
@@ -11,6 +11,11 @@ modules.builtin
This file lists all modules that are built into the kernel. This is used
by modprobe to not fail when trying to load something builtin.
+modules.builtin.modinfo
+--------------------------------------------------
+This file contains modinfo from all modules that are built into the kernel.
+Unlike modinfo of a separate module, all fields are prefixed with module name.
+
Environment variables
diff --git a/Makefile b/Makefile
index 040b3cd699b01..e01e33b35daaf 100644
--- a/Makefile
+++ b/Makefile
@@ -1288,6 +1288,7 @@ _modinst_:
fi
@cp -f $(objtree)/modules.order $(MODLIB)/
@cp -f $(objtree)/modules.builtin $(MODLIB)/
+ @cp -f $(objtree)/modules.builtin.modinfo $(MODLIB)/
$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.modinst
# This depmod is only for convenience to give the initial
@@ -1328,6 +1329,7 @@ endif # CONFIG_MODULES
# Directories & files removed with 'make clean'
CLEAN_DIRS += $(MODVERDIR) include/ksym
+CLEAN_FILES += modules.builtin.modinfo
# Directories & files removed with 'make mrproper'
MRPROPER_DIRS += include/config usr/include include/generated \
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index 2d632a74cc5e9..0276b6950ae1d 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -855,6 +855,7 @@
EXIT_CALL \
*(.discard) \
*(.discard.*) \
+ *(.modinfo) \
}
/**
diff --git a/include/linux/module.h b/include/linux/module.h
index 49942432f0101..5056a346f69e9 100644
--- a/include/linux/module.h
+++ b/include/linux/module.h
@@ -239,6 +239,7 @@ extern typeof(name) __mod_##type##__##name##_device_table \
#define MODULE_VERSION(_version) MODULE_INFO(version, _version)
#else
#define MODULE_VERSION(_version) \
+ MODULE_INFO(version, _version); \
static struct module_version_attribute ___modver_attr = { \
.mattr = { \
.attr = { \
diff --git a/include/linux/moduleparam.h b/include/linux/moduleparam.h
index ba36506db4fb7..5ba250d9172ac 100644
--- a/include/linux/moduleparam.h
+++ b/include/linux/moduleparam.h
@@ -10,23 +10,21 @@
module name. */
#ifdef MODULE
#define MODULE_PARAM_PREFIX /* empty */
+#define __MODULE_INFO_PREFIX /* empty */
#else
#define MODULE_PARAM_PREFIX KBUILD_MODNAME "."
+/* We cannot use MODULE_PARAM_PREFIX because some modules override it. */
+#define __MODULE_INFO_PREFIX KBUILD_MODNAME "."
#endif
/* Chosen so that structs with an unsigned long line up. */
#define MAX_PARAM_PREFIX_LEN (64 - sizeof(unsigned long))
-#ifdef MODULE
#define __MODULE_INFO(tag, name, info) \
static const char __UNIQUE_ID(name)[] \
__used __attribute__((section(".modinfo"), unused, aligned(1))) \
- = __stringify(tag) "=" info
-#else /* !MODULE */
-/* This struct is here for syntactic coherency, it is not used */
-#define __MODULE_INFO(tag, name, info) \
- struct __UNIQUE_ID(name) {}
-#endif
+ = __MODULE_INFO_PREFIX __stringify(tag) "=" info
+
#define __MODULE_PARM_TYPE(name, _type) \
__MODULE_INFO(parmtype, name##type, #name ":" _type)
diff --git a/scripts/link-vmlinux.sh b/scripts/link-vmlinux.sh
index c8cf45362bd6f..c09e87e9c2b9f 100755
--- a/scripts/link-vmlinux.sh
+++ b/scripts/link-vmlinux.sh
@@ -226,6 +226,9 @@ modpost_link vmlinux.o
# modpost vmlinux.o to check for section mismatches
${MAKE} -f "${srctree}/scripts/Makefile.modpost" vmlinux.o
+info MODINFO modules.builtin.modinfo
+${OBJCOPY} -j .modinfo -O binary vmlinux.o modules.builtin.modinfo
+
kallsymso=""
kallsyms_vmlinux=""
if [ -n "${CONFIG_KALLSYMS}" ]; then
--
2.25.1
Andrew Murray (1):
arm64: Use correct ll/sc atomic constraints
Ard Biesheuvel (1):
crypto: tcrypt - avoid signed overflow in byte count
Chao Yu (1):
f2fs: fix to set/clear I_LINKABLE under i_lock
Chris Leech (2):
scsi: iscsi: Ensure sysfs attributes are limited to PAGE_SIZE
scsi: iscsi: Verify lengths on passthrough PDUs
Christian Gromm (1):
staging: most: sound: add sanity check for function argument
Claire Chang (1):
Bluetooth: hci_h5: Set HCI_QUIRK_SIMULTANEOUS_DISCOVERY for btrtl
Cornelia Huck (1):
virtio/s390: implement virtio-ccw revision 2 correctly
Di Zhu (1):
pktgen: fix misuse of BUG_ON() in pktgen_thread_worker()
Dinghao Liu (1):
staging: fwserial: Fix error handling in fwserial_create
Eckhart Mohr (1):
ALSA: hda/realtek: Add quirk for Clevo NH55RZQ
Fangrui Song (1):
x86/build: Treat R_386_PLT32 relocation as R_386_PC32
Geert Uytterhoeven (1):
dt-bindings: net: btusb: DT fix s/interrupt-name/interrupt-names/
Gopal Tiwari (1):
Bluetooth: Fix null pointer dereference in
amp_read_loc_assoc_final_data
Greg Kroah-Hartman (1):
Linux 4.19.179
Hans de Goede (3):
ASoC: Intel: bytcr_rt5640: Add quirk for the Estar Beauty HD MID 7316R
tablet
ASoC: Intel: bytcr_rt5640: Add quirk for the Voyo Winpad A15 tablet
ASoC: Intel: bytcr_rt5640: Add quirk for the Acer One S1002 tablet
Heiner Kallweit (1):
x86/reboot: Add Zotac ZBOX CI327 nano PCI reboot quirk
Jaegeuk Kim (1):
f2fs: handle unallocated section and zone on pinned/atgc
Jan Beulich (2):
Xen/gnttab: handle p2m update errors on a per-slot basis
xen-netback: respect gnttab_map_refs()'s return value
Jens Axboe (1):
swap: fix swapfile read/write offset
Jiri Slaby (1):
vt/consolemap: do font sum unsigned
Joe Perches (1):
sysfs: Add sysfs_emit and sysfs_emit_at to format sysfs output
John David Anglin (1):
parisc: Bump 64-bit IRQ stack size to 64 KB
Josef Bacik (1):
btrfs: fix error handling in commit_fs_roots
Lech Perczak (1):
net: usb: qmi_wwan: support ZTE P685M modem
Lee Duncan (1):
scsi: iscsi: Restrict sessions and handles to admin capabilities
Li Xinhai (1):
mm/hugetlb.c: fix unnecessary address expansion of pmd sharing
Marco Elver (1):
net: fix up truesize of cloned skb in skb_prepare_for_shift()
Marek Vasut (2):
rsi: Fix TX EAPOL packet handling against iwlwifi AP
rsi: Move card interrupt handling to RX thread
Miaoqing Pan (1):
ath10k: fix wmi mgmt tx queue full due to race condition
Mike Kravetz (1):
hugetlb: fix update_and_free_page contig page struct assumption
Nathan Chancellor (1):
MIPS: VDSO: Use CLANG_FLAGS instead of filtering out '--target='
Nicholas Kazlauskas (1):
drm/amd/display: Guard against NULL pointer deref when get_i2c_info
fails
Nirmoy Das (1):
PCI: Add a REBAR size quirk for Sapphire RX 5600 XT Pulse
Randy Dunlap (1):
JFS: more checks for invalid superblock
Ricardo Ribalda (1):
media: uvcvideo: Allow entities with no pads
Rokudo Yan (1):
zsmalloc: account the number of compacted pages correctly
Sabyrzhan Tasbolatov (1):
smackfs: restrict bytes count in smackfs write functions
Sakari Ailus (1):
media: v4l: ioctl: Fix memory leak in video_usercopy
Sean Young (1):
media: mceusb: sanity check for prescaler value
Sergey Senozhatsky (1):
drm/virtio: use kvmalloc for large allocations
Shaoying Xu (1):
arm64 module: set plt* section addresses to 0x0
Takashi Iwai (1):
ALSA: hda/realtek: Apply dual codec quirks for MSI Godlike X570 board
Tony Lindgren (1):
wlcore: Fix command execute failure 19 for wl12xx
Vladimir Oltean (1):
net: bridge: use switchdev for port flags set through sysfs too
Will Deacon (2):
arm64: Avoid redundant type conversions in xchg() and cmpxchg()
arm64: cmpxchg: Use "K" instead of "L" for ll/sc immediate constraint
Yumei Huang (1):
xfs: Fix assert failure in xfs_setattr_size()
Zqiang (1):
udlfb: Fix memory leak in dlfb_usb_probe
.../devicetree/bindings/net/btusb.txt | 2 +-
Documentation/filesystems/sysfs.txt | 8 +-
Makefile | 2 +-
arch/arm/xen/p2m.c | 35 ++++-
arch/arm64/include/asm/atomic_ll_sc.h | 108 +++++++------
arch/arm64/include/asm/atomic_lse.h | 46 +++---
arch/arm64/include/asm/cmpxchg.h | 116 +++++++-------
arch/arm64/kernel/module.lds | 6 +-
arch/mips/vdso/Makefile | 5 +-
arch/parisc/kernel/irq.c | 4 +
arch/x86/kernel/module.c | 1 +
arch/x86/kernel/reboot.c | 9 ++
arch/x86/tools/relocs.c | 12 +-
arch/x86/xen/p2m.c | 44 +++++-
crypto/tcrypt.c | 20 +--
drivers/block/zram/zram_drv.c | 2 +-
drivers/bluetooth/hci_h5.c | 5 +
drivers/gpu/drm/amd/display/dc/core/dc_link.c | 5 +
drivers/gpu/drm/virtio/virtgpu_vq.c | 6 +-
drivers/media/rc/mceusb.c | 9 +-
drivers/media/usb/uvc/uvc_driver.c | 7 +-
drivers/media/v4l2-core/v4l2-ioctl.c | 19 +--
drivers/net/usb/qmi_wwan.c | 1 +
drivers/net/wireless/ath/ath10k/mac.c | 15 +-
drivers/net/wireless/rsi/rsi_91x_hal.c | 3 +-
drivers/net/wireless/rsi/rsi_91x_sdio.c | 6 +-
drivers/net/wireless/rsi/rsi_91x_sdio_ops.c | 52 ++----
drivers/net/wireless/rsi/rsi_sdio.h | 8 +-
drivers/net/wireless/ti/wl12xx/main.c | 3 -
drivers/net/wireless/ti/wlcore/main.c | 15 +-
drivers/net/wireless/ti/wlcore/wlcore.h | 3 -
drivers/net/xen-netback/netback.c | 12 +-
drivers/pci/pci.c | 9 +-
drivers/s390/virtio/virtio_ccw.c | 4 +-
drivers/scsi/libiscsi.c | 148 +++++++++---------
drivers/scsi/scsi_transport_iscsi.c | 38 ++++-
drivers/staging/fwserial/fwserial.c | 2 +
drivers/staging/most/sound/sound.c | 2 +
drivers/tty/vt/consolemap.c | 2 +-
drivers/video/fbdev/udlfb.c | 1 +
fs/btrfs/transaction.c | 11 +-
fs/f2fs/namei.c | 8 +
fs/f2fs/segment.h | 4 +-
fs/jfs/jfs_filsys.h | 1 +
fs/jfs/jfs_mount.c | 10 ++
fs/sysfs/file.c | 55 +++++++
fs/xfs/xfs_iops.c | 2 +-
include/linux/sysfs.h | 16 ++
include/linux/zsmalloc.h | 2 +-
mm/hugetlb.c | 28 ++--
mm/page_io.c | 11 +-
mm/swapfile.c | 2 +-
mm/zsmalloc.c | 17 +-
net/bluetooth/amp.c | 3 +
net/bridge/br_sysfs_if.c | 9 +-
net/core/pktgen.c | 2 +-
net/core/skbuff.c | 14 +-
security/smack/smackfs.c | 21 ++-
sound/pci/hda/patch_realtek.c | 2 +
sound/soc/intel/boards/bytcr_rt5640.c | 37 +++++
60 files changed, 651 insertions(+), 399 deletions(-)
--
2.25.1
1
53
Adrian Hunter (1):
perf intel-pt: Fix missing CYC processing in PSB
Al Viro (1):
sparc32: fix a user-triggerable oops in clear_user()
Alain Volmat (1):
spi: stm32: properly handle 0 byte transfer
Alexander Lobakin (1):
MIPS: vmlinux.lds.S: add missing PAGE_ALIGNED_DATA() section
Alexander Usyskin (1):
watchdog: mei_wdt: request stop on unregister
Amey Narkhede (1):
staging: gdm724x: Fix DMA from stack
Andre Przywara (5):
arm64: dts: allwinner: A64: properly connect USB PHY to port 0
arm64: dts: allwinner: Drop non-removable from SoPine/LTS SD card
arm64: dts: allwinner: A64: Limit MMC2 bus frequency to 150 MHz
clk: sunxi-ng: h6: Fix CEC clock
clk: sunxi-ng: h6: Fix clock divider range on some clocks
Andrea Parri (Microsoft) (1):
Drivers: hv: vmbus: Avoid use-after-free in vmbus_onoffer_rescind()
Andrii Nakryiko (1):
bpf: Avoid warning when re-casting __bpf_call_base into
__bpf_call_base_args
Andy Shevchenko (1):
spi: pxa2xx: Fix the controller numbering for Wildcat Point
AngeloGioacchino Del Regno (1):
clk: qcom: gcc-msm8998: Fix Alpha PLL type for all GPLLs
Ansuel Smith (1):
PCI: qcom: Use PHY_REFCLK_USE_PAD only for ipq8064
Ard Biesheuvel (1):
crypto: arm64/sha - add missing module aliases
Arnaldo Carvalho de Melo (1):
perf tools: Fix DSO filtering when not finding a map for a sampled
address
Arnd Bergmann (1):
ARM: s3c: fix fiq for clang IAS
Aswath Govindraju (2):
misc: eeprom_93xx46: Fix module alias to enable module autoprobe
misc: eeprom_93xx46: Add module alias to avoid breaking support for
non device tree users
Ayush Sawal (1):
cxgb4/chtls/cxgbit: Keeping the max ofld immediate data size same in
cxgb4 and ulds
Bard Liao (1):
regmap: sdw: use _no_pm functions in regmap_read/write
Bartosz Golaszewski (1):
rtc: s5m: select REGMAP_I2C
Bob Pearson (2):
RDMA/rxe: Fix coding error in rxe_recv.c
RDMA/rxe: Correct skb on loopback path
Bob Peterson (1):
gfs2: Don't skip dlm unlock if glock has an lvb
Chao Yu (1):
f2fs: fix out-of-repair __setattr_copy()
Chen Yu (1):
cpufreq: intel_pstate: Get per-CPU max freq via MSR_HWP_CAPABILITIES
if available
Chen-Yu Tsai (1):
staging: rtl8723bs: wifi_regd.c: Fix incorrect number of regulatory
rules
Chenyang Li (1):
drm/amdgpu: Fix macro name _AMDGPU_TRACE_H_ in preprocessor if
condition
Christoph Schemmel (1):
NET: usb: qmi_wwan: Adding support for Cinterion MV31
Christophe JAILLET (9):
Bluetooth: btqcomsmd: Fix a resource leak in error handling paths in
the probe function
cpufreq: brcmstb-avs-cpufreq: Free resources in error path
cpufreq: brcmstb-avs-cpufreq: Fix resource leaks in ->remove()
media: vsp1: Fix an error handling path in the probe function
media: cx25821: Fix a bug when reallocating some dma memory
dmaengine: fsldma: Fix a resource leak in the remove function
dmaengine: fsldma: Fix a resource leak in an error handling path of
the probe function
dmaengine: owl-dma: Fix a resource leak in the remove function
mmc: usdhi6rol0: Fix a resource leak in the error handling path of the
probe
Christophe Leroy (3):
crypto: talitos - Work around SEC6 ERRATA (AES-CTR mode data size
error)
powerpc/47x: Disable 256k page size
powerpc/8xx: Fix software emulation interrupt
Christopher William Snowhill (1):
Bluetooth: Fix initializing response id after clearing struct
Chuhong Yuan (1):
net/mlx4_core: Add missed mlx4_free_cmd_mailbox()
Claudiu Beznea (1):
power: reset: at91-sama5d2_shdwc: fix wkupdbc mask
Colin Ian King (3):
mac80211: fix potential overflow when multiplying to u32 integers
b43: N-PHY: Fix the update of coef for the PHY revision >= 3case
fs/jfs: fix potential integer overflow on shift of a int
Corentin Labbe (3):
crypto: sun4i-ss - fix kmap usage
crypto: sun4i-ss - checking sg length is not sufficient
crypto: sun4i-ss - handle BigEndian for cipher
Cédric Le Goater (1):
KVM: PPC: Make the VMX instruction emulation routines static
Dan Carpenter (11):
gma500: clean up error handling in init
media: camss: missing error code in msm_video_register()
ASoC: cs42l56: fix up error handling in probe
drm/amdgpu: Prevent shift wrapping in amdgpu_read_mask()
mfd: wm831x-auxadc: Prevent use after free in wm831x_auxadc_read_irq()
Input: sur40 - fix an error code in sur40_probe()
Input: elo - fix an error code in elo_connect()
ocfs2: fix a use after free on error
Input: joydev - prevent potential read overflow in ioctl
USB: serial: mos7840: fix error code in mos7840_write()
USB: serial: mos7720: fix error code in mos7720_write()
Dan Williams (1):
libnvdimm/dimm: Avoid race between probe and available_slots_show()
Daniele Alessandrelli (1):
crypto: ecdh_helper - Ensure 'len >= secret.len' in decode_key()
David Howells (1):
certs: Fix blacklist flag type confusion
Dinghao Liu (3):
media: em28xx: Fix use-after-free in em28xx_alloc_urbs
media: media/pci: Fix memleak in empress_init
media: tm6000: Fix memleak in tm6000_start_stream
Edwin Peer (1):
bnxt_en: reverse order of TX disable and carrier off
Eric Biggers (1):
random: fix the RNDRESEEDCRNG ioctl
Eric Dumazet (2):
tcp: fix SO_RCVLOWAT related hangs under mem pressure
ipv6: icmp6: avoid indirect call for icmpv6_send()
Eric W. Biederman (1):
capabilities: Don't allow writing ambiguous v3 file capabilities
Fangrui Song (1):
module: Ignore _GLOBAL_OFFSET_TABLE_ when warning for undefined
symbols
Ferry Toth (1):
dmaengine: hsu: disable spurious interrupt
Filipe Manana (1):
btrfs: fix extent buffer leak on failure to copy root
Florian Fainelli (1):
ata: ahci_brcm: Add back regulators management
Frank Li (1):
mmc: sdhci-esdhc-imx: fix kernel panic when remove module
Frank Wunderlich (1):
dts64: mt7622: fix slow sd card access
Geert Uytterhoeven (1):
auxdisplay: ht16k33: Fix refresh rate handling
Greg Kroah-Hartman (1):
Linux 4.19.178
Guenter Roeck (3):
usb: dwc2: Do not update data length if it is 0 on inbound transfers
usb: dwc2: Abort transaction after errors with unknown reason
usb: dwc2: Make "trimming xfer length" a debug message
He Zhe (1):
arm64: uprobe: Return EOPNOTSUPP for AARCH32 instruction probing
Heiner Kallweit (2):
PCI: Align checking of syscall user config accessors
r8169: fix jumbo packet handling on RTL8168e
Ilya Lipnitskiy (1):
staging/mt7621-dma: mtk-hsdma.c->hsdma-mt7621.c
Jack Pham (1):
usb: gadget: u_audio: Free requests only after callback
Jacopo Mondi (1):
media: i2c: ov5670: Fix PIXEL_RATE minimum value
Jae Hyun Yoo (1):
soc: aspeed: snoop: Add clock control logic
James Bottomley (2):
tpm_tis: Fix check_locality for correct locality acquisition
tpm_tis: Clean up locality release
Jan Henrik Weinstock (1):
hwrng: timeriomem - Fix cooldown period calculation
Jan Kara (2):
bfq: Avoid false bfq queue merging
quota: Fix memory leak when handling corrupted quota file
Jarkko Sakkinen (1):
KEYS: trusted: Fix migratable=1 failing
Jason A. Donenfeld (6):
icmp: introduce helper for nat'd source address in network device
context
icmp: allow icmpv6_ndo_send to work with CONFIG_IPV6=n
gtp: use icmp_ndo_send helper
sunvnet: use icmp_ndo_send helper
xfrm: interface: use icmp_ndo_send helper
net: icmp: pass zeroed opts from icmp{,v6}_ndo_send before sending
Jason Gerecke (1):
HID: wacom: Ignore attempts to overwrite the touch_max value from HID
Jesper Dangaard Brouer (1):
bpf: Fix bpf_fib_lookup helper MTU check for SKB ctx
Jialin Zhang (1):
drm/gma500: Fix error return code in psb_driver_load()
Jiri Bohac (1):
pstore: Fix typo in compression option name
Jiri Kosina (1):
floppy: reintroduce O_NDELAY fix
Jiri Olsa (1):
crypto: bcm - Rename struct device_private to bcm_device_private
Joe Perches (1):
media: lmedm04: Fix misuse of comma
Johan Hovold (2):
USB: quirks: sort quirk entries
USB: serial: ftdi_sio: fix FTX sub-integer prescaler
John Wang (1):
ARM: dts: aspeed: Add LCLK to lpc-snoop
Jorgen Hansen (1):
VMCI: Use set_page_dirty_lock() when unregistering guest memory
Josef Bacik (2):
btrfs: abort the transaction if we fail to inc ref in btrfs_copy_root
btrfs: fix reloc root leak with 0 ref reloc roots on recovery
Juergen Gross (1):
xen/netback: fix spurious event detection for common event case
KarimAllah Ahmed (1):
fdt: Properly handle "no-map" field in the memory region
Konrad Dybcio (1):
drm/msm/dsi: Correct io_start for MSM8994 (20nm PHY)
Krzysztof Kozlowski (9):
ARM: dts: exynos: correct PMIC interrupt trigger level on Artik 5
ARM: dts: exynos: correct PMIC interrupt trigger level on Monk
ARM: dts: exynos: correct PMIC interrupt trigger level on Rinato
ARM: dts: exynos: correct PMIC interrupt trigger level on Spring
ARM: dts: exynos: correct PMIC interrupt trigger level on Arndale Octa
ARM: dts: exynos: correct PMIC interrupt trigger level on Odroid XU3
family
arm64: dts: exynos: correct PMIC interrupt trigger level on TM2
arm64: dts: exynos: correct PMIC interrupt trigger level on Espresso
regulator: s5m8767: Drop regulators OF node reference
Lakshmi Ramasubramanian (2):
ima: Free IMA measurement buffer on error
ima: Free IMA measurement buffer after kexec syscall
Laurent Pinchart (1):
media: uvcvideo: Accept invalid bFormatIndex and bFrameIndex values
Lech Perczak (1):
USB: serial: option: update interface mapping for ZTE P685M
Leon Romanovsky (1):
ipv6: silence compilation warning for non-IPV6 builds
Lijun Pan (2):
ibmvnic: add memory barrier to protect long term buffer
ibmvnic: skip send_request_unmap for timeout reset
Linus Lüssing (1):
ath9k: fix data bus crash when setting nf_override via debugfs
Luo Meng (1):
media: qm1d1c0042: fix error return code in qm1d1c0042_init()
Marc Zyngier (1):
arm64: Add missing ISB after invalidating TLB in __primary_switch
Marco Elver (1):
bpf_lru_list: Read double-checked variable once without lock
Marcos Paulo de Souza (1):
Input: i8042 - add ASUS Zenbook Flip to noselftest list
Mario Kleiner (1):
drm/amd/display: Fix 10/12 bpc setup in DCE output bit depth
reduction.
Martin Blumenstingl (1):
clk: meson: clk-pll: fix initializing the old rate (fallback) for a
PLL
Martin Kaiser (1):
staging: rtl8188eu: Add Edimax EW-7811UN V2 to device table
Mateusz Palczewski (3):
i40e: Add zero-initialization of AQ command structures
i40e: Fix overwriting flow control settings during driver loading
i40e: Fix add TC filter for IPv6
Maxim Kiselev (1):
gpio: pcf857x: Fix missing first interrupt
Maxime Chevallier (1):
net: mvneta: Remove per-cpu queue mapping for Armada 3700
Maxime Ripard (1):
i2c: brcmstb: Fix brcmstd_send_i2c_cmd condition
Maximilian Luz (1):
ACPICA: Fix exception code class checks
Miaohe Lin (3):
mm/memory.c: fix potential pte_unmap_unlock pte error
mm/hugetlb: fix potential double free in hugetlb_register_node() error
path
mm/rmap: fix potential pte_unmap on an not mapped pte
Mike Kravetz (1):
hugetlb: fix copy_huge_page_from_user contig page struct assumption
Mikulas Patocka (2):
blk-settings: align max_sectors on "logical_block_size" boundary
dm: fix deadlock when swapping to encrypted device
Muchun Song (1):
printk: fix deadlock when kernel panic
Namhyung Kim (1):
perf test: Fix unaligned access in sample parsing test
Nathan Chancellor (2):
MIPS: c-r4k: Fix section mismatch for loongson2_sc_init
MIPS: lantiq: Explicitly compare LTQ_EBU_PCC_ISTAT against 0
Nathan Lynch (1):
powerpc/pseries/dlpar: handle ibm,configure-connector delay status
NeilBrown (2):
seq_file: document how per-entry resources are managed.
x86: fix seq_file iteration for pat/memtype.c
Nick Desaulniers (1):
vmlinux.lds.h: add DWARF v5 sections
Nicolas Boichat (1):
of/fdt: Make sure no-map does not remove already reserved regions
Nikos Tsironis (7):
dm era: Recover committed writeset after crash
dm era: Verify the data block size hasn't changed
dm era: Fix bitset memory leaks
dm era: Use correct value size in equality function of writeset tree
dm era: Reinitialize bitset cache before digesting a new writeset
dm era: only resize metadata in preresume
dm era: Update in-core bitset after committing the metadata
Olivier Crête (1):
Input: xpad - add support for PowerA Enhanced Wired Controller for
Xbox Series X|S
Pan Bian (8):
Bluetooth: drop HCI device reference before return
Bluetooth: Put HCI device if inquiry procedure interrupts
memory: ti-aemif: Drop child node when jumping out loop
regulator: axp20x: Fix reference cout leak
spi: atmel: Put allocated master before return
isofs: release buffer head before return
mtd: spi-nor: hisi-sfc: Put child node np on error path
fs/affs: release old buffer head on error path
Paul Cercueil (2):
usb: musb: Fix runtime PM race in musb_queue_resume_work
seccomp: Add missing return in non-void function
Pavel Machek (1):
media: ipu3-cio2: Fix mbus_code processing in cio2_subdev_set_fmt()
PeiSen Hou (1):
ALSA: hda/realtek: modify EAPD in the ALC886
Peter Zijlstra (2):
jump_label/lockdep: Assert we hold the hotplug lock for _cpuslocked()
operations
locking/static_key: Fix false positive warnings on concurrent dec/inc
Pratyush Yadav (1):
spi: cadence-quadspi: Abort read if dummy cycles required are too many
Qinglang Miao (1):
ACPI: configfs: add missing check after
configfs_register_default_group()
Rafael J. Wysocki (1):
ACPI: property: Fix fwnode string properties matching
Rakesh Pillai (1):
ath10k: Fix error handling in case of CE pipe init failure
Randy Dunlap (4):
fbdev: aty: SPARC64 requires FB_ATY_CT
HID: core: detect and skip invalid inputs to snto32()
sparc64: only select COMPAT_BINFMT_ELF if BINFMT_ELF is set
scsi: bnx2fc: Fix Kconfig warning & CNIC build errors
Ricky Wu (1):
misc: rtsx: init of rts522a add OCP power off when no card is present
Rolf Eike Beer (2):
scripts: use pkg-config to locate libcrypto
scripts: set proper OpenSSL include dir also for sign-file
Rong Chen (1):
scripts/recordmcount.pl: support big endian for ARCH sh
Rosen Penev (2):
ARM: dts: armada388-helios4: assign pinctrl to LEDs
ARM: dts: armada388-helios4: assign pinctrl to each fan
Rustam Kovhaev (1):
ntfs: check for valid standard information attribute
Sabyrzhan Tasbolatov (1):
drivers/misc/vmw_vmci: restrict too big queue size in
qp_host_alloc_queue
Sameer Pujar (1):
arm64: tegra: Add power-domain for Tegra210 HDA
Sean Christopherson (1):
x86/reboot: Force all cpus to exit VMX root if VMX is supported
Sebastian Reichel (1):
ASoC: cpcap: fix microphone timeslot mask
Shay Drory (2):
IB/umad: Return EIO in case of when device disassociated
IB/umad: Return EPOLLERR in case of when device disassociated
Shyam Prasad N (1):
cifs: Set CIFS_MOUNT_USE_PREFIX_PATH flag on setting cifs_sb->prepath.
Shyam Sundar S K (4):
net: amd-xgbe: Reset the PHY rx data path when mailbox command timeout
net: amd-xgbe: Fix NETDEV WATCHDOG transmit queue timeout warning
net: amd-xgbe: Reset link when the link never comes back
net: amd-xgbe: Fix network fluctuations when using 1G BELFUSE SFP
Simon South (1):
pwm: rockchip: rockchip_pwm_probe(): Remove superfluous
clk_unprepare()
Slawomir Laba (1):
i40e: Fix flow for IPv6 next header (extension header)
Stefan Ursella (1):
usb: quirks: add quirk to start video capture on ELMO L-12F document
camera reliable
Steven Rostedt (VMware) (1):
tracepoint: Do not fail unregistering a probe due to memory failure
Sukadev Bhattiprolu (1):
ibmvnic: Set to CLOSED state even on error
Sumit Garg (1):
kdb: Make memory allocations more robust
Suzuki K Poulose (1):
arm64: Extend workaround for erratum 1024718 to all versions of
Cortex-A55
Sylwester Dziedziuch (1):
i40e: Fix VFs not created
Taehee Yoo (1):
vxlan: move debug check after netdev unregister
Takashi Iwai (1):
ALSA: usb-audio: Fix PCM buffer allocation in non-vmalloc mode
Takeshi Misawa (1):
net: qrtr: Fix memory leak in qrtr_tun_open
Takeshi Saito (1):
mmc: renesas_sdhi_internal_dmac: Fix DMA buffer alignment from 8 to
128-bytes
Theodore Ts'o (1):
ext4: fix potential htree index checksum corruption
Thinh Nguyen (2):
usb: dwc3: gadget: Fix setting of DEPCFG.bInterval_m1
usb: dwc3: gadget: Fix dep->interval for fullspeed interrupt
Tom Rix (3):
media: pxa_camera: declare variable when DEBUG is defined
jffs2: fix use after free in jffs2_sum_write_data()
clocksource/drivers/mxs_timer: Add missing semicolon when DEBUG is
defined
Tony Lindgren (1):
ARM: dts: Configure missing thermal interrupt for 4430
Uwe Kleine-König (1):
amba: Fix resource leak for drivers without .remove
Vincent Knecht (1):
arm64: dts: msm8916: Fix reserved and rfsa nodes unit address
Vladimir Murzin (1):
ARM: 9046/1: decompressor: Do not clear SCTLR.nTLSMD for ARMv7+ cores
Will McVicker (1):
HID: make arrays usage and value to be the same
Yi Chen (1):
f2fs: fix to avoid inconsistent quota data
Yishai Hadas (1):
RDMA/mlx5: Use the correct obj_id upon DEVX TIR creation
Yoshihiro Shimoda (1):
mfd: bd9571mwv: Use devm_mfd_add_devices()
Zhihao Cheng (1):
btrfs: clarify error returns values in __load_free_space_cache
jeffrey.lin (1):
Input: raydium_ts_i2c - do not send zero length
Documentation/filesystems/seq_file.txt | 6 +
Makefile | 2 +-
arch/arm/boot/compressed/head.S | 4 +-
arch/arm/boot/dts/armada-388-helios4.dts | 28 +++-
arch/arm/boot/dts/aspeed-g4.dtsi | 1 +
arch/arm/boot/dts/aspeed-g5.dtsi | 1 +
arch/arm/boot/dts/exynos3250-artik5.dtsi | 2 +-
arch/arm/boot/dts/exynos3250-monk.dts | 2 +-
arch/arm/boot/dts/exynos3250-rinato.dts | 2 +-
arch/arm/boot/dts/exynos5250-spring.dts | 2 +-
arch/arm/boot/dts/exynos5420-arndale-octa.dts | 2 +-
arch/arm/boot/dts/exynos5422-odroid-core.dtsi | 2 +-
arch/arm/boot/dts/omap443x.dtsi | 2 +
arch/arm64/Kconfig | 2 +-
.../dts/allwinner/sun50i-a64-pinebook.dts | 5 +-
.../boot/dts/allwinner/sun50i-a64-sopine.dtsi | 1 -
arch/arm64/boot/dts/allwinner/sun50i-a64.dtsi | 6 +-
.../dts/exynos/exynos5433-tm2-common.dtsi | 2 +-
.../boot/dts/exynos/exynos7-espresso.dts | 2 +-
arch/arm64/boot/dts/mediatek/mt7622.dtsi | 2 +
arch/arm64/boot/dts/nvidia/tegra210.dtsi | 1 +
arch/arm64/boot/dts/qcom/msm8916.dtsi | 4 +-
arch/arm64/crypto/sha1-ce-glue.c | 1 +
arch/arm64/crypto/sha2-ce-glue.c | 2 +
arch/arm64/crypto/sha3-ce-glue.c | 4 +
arch/arm64/crypto/sha512-ce-glue.c | 2 +
arch/arm64/kernel/cpufeature.c | 2 +-
arch/arm64/kernel/head.S | 1 +
arch/arm64/kernel/probes/uprobes.c | 2 +-
arch/mips/kernel/vmlinux.lds.S | 1 +
arch/mips/lantiq/irq.c | 2 +-
arch/mips/mm/c-r4k.c | 2 +-
arch/powerpc/Kconfig | 2 +-
arch/powerpc/kernel/head_8xx.S | 2 +-
arch/powerpc/kvm/powerpc.c | 8 +-
arch/powerpc/platforms/pseries/dlpar.c | 7 +-
arch/sparc/Kconfig | 2 +-
arch/sparc/lib/memset.S | 1 +
arch/x86/kernel/reboot.c | 29 ++--
arch/x86/mm/pat.c | 3 +-
block/bfq-iosched.c | 1 +
block/blk-settings.c | 12 ++
certs/blacklist.c | 2 +-
crypto/ecdh_helper.c | 3 +
drivers/acpi/acpi_configfs.c | 7 +-
drivers/acpi/property.c | 44 ++++--
drivers/amba/bus.c | 20 +--
drivers/ata/ahci_brcm.c | 14 +-
drivers/auxdisplay/ht16k33.c | 3 +-
drivers/base/regmap/regmap-sdw.c | 4 +-
drivers/block/floppy.c | 27 ++--
drivers/bluetooth/btqcomsmd.c | 27 ++--
drivers/char/hw_random/timeriomem-rng.c | 2 +-
drivers/char/random.c | 2 +-
drivers/char/tpm/tpm_tis_core.c | 50 +------
drivers/clk/meson/clk-pll.c | 2 +-
drivers/clk/qcom/gcc-msm8998.c | 100 +++++++-------
drivers/clk/sunxi-ng/ccu-sun50i-h6.c | 10 +-
drivers/clocksource/mxs_timer.c | 5 +-
drivers/cpufreq/brcmstb-avs-cpufreq.c | 24 +++-
drivers/cpufreq/intel_pstate.c | 5 +-
drivers/crypto/bcm/cipher.c | 2 +-
drivers/crypto/bcm/cipher.h | 4 +-
drivers/crypto/bcm/util.c | 2 +-
drivers/crypto/chelsio/chtls/chtls_cm.h | 3 -
drivers/crypto/sunxi-ss/sun4i-ss-cipher.c | 125 ++++++++++--------
drivers/crypto/talitos.c | 28 ++--
drivers/crypto/talitos.h | 1 +
drivers/dma/fsldma.c | 6 +
drivers/dma/hsu/pci.c | 21 +--
drivers/dma/owl-dma.c | 1 +
drivers/gpio/gpio-pcf857x.c | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c | 6 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h | 2 +-
.../drm/amd/display/dc/dce/dce_transform.c | 8 +-
drivers/gpu/drm/gma500/oaktrail_hdmi_i2c.c | 22 +--
drivers/gpu/drm/gma500/psb_drv.c | 2 +
drivers/gpu/drm/msm/dsi/phy/dsi_phy_20nm.c | 2 +-
drivers/hid/hid-core.c | 9 +-
drivers/hid/wacom_wac.c | 7 +-
drivers/hv/channel_mgmt.c | 3 +-
drivers/i2c/busses/i2c-brcmstb.c | 2 +-
drivers/infiniband/core/user_mad.c | 17 ++-
drivers/infiniband/hw/mlx5/devx.c | 4 +-
drivers/infiniband/sw/rxe/rxe_net.c | 5 +
drivers/infiniband/sw/rxe/rxe_recv.c | 11 +-
drivers/input/joydev.c | 7 +-
drivers/input/joystick/xpad.c | 1 +
drivers/input/serio/i8042-x86ia64io.h | 4 +
drivers/input/touchscreen/elo.c | 4 +-
drivers/input/touchscreen/raydium_i2c_ts.c | 3 +-
drivers/input/touchscreen/sur40.c | 1 +
drivers/md/dm-core.h | 4 +
drivers/md/dm-crypt.c | 1 +
drivers/md/dm-era-target.c | 93 ++++++++-----
drivers/md/dm.c | 60 +++++++++
drivers/media/i2c/ov5670.c | 3 +-
drivers/media/pci/cx25821/cx25821-core.c | 4 +-
drivers/media/pci/intel/ipu3/ipu3-cio2.c | 2 +-
drivers/media/pci/saa7134/saa7134-empress.c | 5 +-
drivers/media/platform/pxa_camera.c | 3 +
.../media/platform/qcom/camss/camss-video.c | 1 +
drivers/media/platform/vsp1/vsp1_drv.c | 4 +-
drivers/media/tuners/qm1d1c0042.c | 4 +-
drivers/media/usb/dvb-usb-v2/lmedm04.c | 2 +-
drivers/media/usb/em28xx/em28xx-core.c | 6 +-
drivers/media/usb/tm6000/tm6000-dvb.c | 4 +
drivers/media/usb/uvc/uvc_v4l2.c | 18 +--
drivers/memory/ti-aemif.c | 8 +-
drivers/mfd/bd9571mwv.c | 6 +-
drivers/mfd/wm831x-auxadc.c | 3 +-
drivers/misc/aspeed-lpc-snoop.c | 30 ++++-
drivers/misc/cardreader/rts5227.c | 5 +
drivers/misc/eeprom/eeprom_93xx46.c | 1 +
drivers/misc/vmw_vmci/vmci_queue_pair.c | 5 +-
drivers/mmc/host/renesas_sdhi_internal_dmac.c | 4 +-
drivers/mmc/host/sdhci-esdhc-imx.c | 3 +-
drivers/mmc/host/usdhi6rol0.c | 4 +-
drivers/mtd/spi-nor/cadence-quadspi.c | 2 +-
drivers/mtd/spi-nor/hisi-sfc.c | 4 +-
drivers/net/ethernet/amd/xgbe/xgbe-common.h | 14 ++
drivers/net/ethernet/amd/xgbe/xgbe-drv.c | 1 +
drivers/net/ethernet/amd/xgbe/xgbe-mdio.c | 3 +-
drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c | 39 +++++-
drivers/net/ethernet/broadcom/bnxt/bnxt.c | 3 +-
.../net/ethernet/chelsio/cxgb4/cxgb4_uld.h | 3 +
drivers/net/ethernet/chelsio/cxgb4/sge.c | 11 +-
drivers/net/ethernet/ibm/ibmvnic.c | 16 ++-
drivers/net/ethernet/intel/i40e/i40e_main.c | 41 ++----
drivers/net/ethernet/intel/i40e/i40e_txrx.c | 9 +-
drivers/net/ethernet/marvell/mvneta.c | 9 +-
.../ethernet/mellanox/mlx4/resource_tracker.c | 1 +
drivers/net/ethernet/realtek/r8169.c | 4 +-
drivers/net/ethernet/sun/sunvnet_common.c | 23 +---
drivers/net/gtp.c | 5 +-
drivers/net/usb/qmi_wwan.c | 1 +
drivers/net/vxlan.c | 11 +-
drivers/net/wireless/ath/ath10k/snoc.c | 5 +-
drivers/net/wireless/ath/ath9k/debug.c | 5 +-
drivers/net/wireless/broadcom/b43/phy_n.c | 2 +-
drivers/net/xen-netback/interface.c | 8 +-
drivers/nvdimm/dimm_devs.c | 18 ++-
drivers/of/fdt.c | 12 +-
drivers/pci/controller/dwc/pcie-qcom.c | 4 +-
drivers/pci/syscall.c | 10 +-
drivers/power/reset/at91-sama5d2_shdwc.c | 2 +-
drivers/pwm/pwm-rockchip.c | 1 -
drivers/regulator/axp20x-regulator.c | 7 +-
drivers/regulator/s5m8767.c | 8 +-
drivers/rtc/Kconfig | 1 +
drivers/scsi/bnx2fc/Kconfig | 1 +
drivers/spi/spi-atmel.c | 2 +-
drivers/spi/spi-pxa2xx-pci.c | 27 ++--
drivers/spi/spi-s3c24xx-fiq.S | 9 +-
drivers/spi/spi-stm32.c | 4 +
drivers/staging/gdm724x/gdm_usb.c | 10 +-
drivers/staging/mt7621-dma/Makefile | 2 +-
.../{mtk-hsdma.c => hsdma-mt7621.c} | 2 +-
drivers/staging/rtl8188eu/os_dep/usb_intf.c | 1 +
drivers/staging/rtl8723bs/os_dep/wifi_regd.c | 2 +-
drivers/target/iscsi/cxgbit/cxgbit_target.c | 3 +-
drivers/usb/core/quirks.c | 9 +-
drivers/usb/dwc2/hcd.c | 15 ++-
drivers/usb/dwc2/hcd_intr.c | 14 +-
drivers/usb/dwc3/gadget.c | 19 ++-
drivers/usb/gadget/function/u_audio.c | 17 ++-
drivers/usb/musb/musb_core.c | 31 +++--
drivers/usb/serial/ftdi_sio.c | 5 +-
drivers/usb/serial/mos7720.c | 4 +-
drivers/usb/serial/mos7840.c | 4 +-
drivers/usb/serial/option.c | 3 +-
drivers/video/fbdev/Kconfig | 2 +-
drivers/watchdog/mei_wdt.c | 1 +
fs/affs/namei.c | 4 +-
fs/btrfs/ctree.c | 7 +-
fs/btrfs/free-space-cache.c | 6 +-
fs/btrfs/relocation.c | 4 +-
fs/cifs/connect.c | 1 +
fs/ext4/namei.c | 7 +-
fs/f2fs/file.c | 7 +-
fs/f2fs/inline.c | 4 +
fs/gfs2/lock_dlm.c | 8 +-
fs/isofs/dir.c | 1 +
fs/isofs/namei.c | 1 +
fs/jffs2/summary.c | 3 +
fs/jfs/jfs_dmap.c | 2 +-
fs/ntfs/inode.c | 6 +
fs/ocfs2/cluster/heartbeat.c | 8 +-
fs/pstore/platform.c | 4 +-
fs/quota/quota_v2.c | 11 +-
include/acpi/acexcep.h | 10 +-
include/asm-generic/vmlinux.lds.h | 7 +-
include/linux/device-mapper.h | 5 +
include/linux/filter.h | 2 +-
include/linux/icmpv6.h | 48 ++++++-
include/linux/ipv6.h | 2 +-
include/linux/kexec.h | 5 +
include/linux/key.h | 1 +
include/linux/rmap.h | 3 +-
include/net/icmp.h | 10 ++
include/net/tcp.h | 9 +-
kernel/bpf/bpf_lru_list.c | 7 +-
kernel/debug/kdb/kdb_private.h | 2 +-
kernel/jump_label.c | 26 ++--
kernel/kexec_file.c | 5 +
kernel/module.c | 21 ++-
kernel/printk/printk_safe.c | 16 ++-
kernel/seccomp.c | 2 +
kernel/tracepoint.c | 80 ++++++++---
mm/hugetlb.c | 4 +-
mm/memory.c | 16 ++-
net/bluetooth/a2mp.c | 3 +-
net/bluetooth/hci_core.c | 6 +-
net/core/filter.c | 13 +-
net/ipv4/icmp.c | 34 +++++
net/ipv6/icmp.c | 19 +--
net/ipv6/ip6_icmp.c | 46 ++++++-
net/mac80211/mesh_hwmp.c | 2 +-
net/qrtr/tun.c | 12 +-
net/xfrm/xfrm_interface.c | 6 +-
scripts/Makefile | 9 +-
scripts/recordmcount.pl | 6 +-
security/commoncap.c | 12 +-
security/integrity/ima/ima_kexec.c | 3 +
security/integrity/ima/ima_mok.c | 5 +-
security/keys/key.c | 2 +
security/keys/trusted.c | 2 +-
sound/pci/hda/patch_realtek.c | 11 ++
sound/soc/codecs/cpcap.c | 12 +-
sound/soc/codecs/cs42l56.c | 3 +-
sound/usb/pcm.c | 2 +-
tools/perf/tests/sample-parsing.c | 2 +-
tools/perf/util/event.c | 2 +
.../util/intel-pt-decoder/intel-pt-decoder.c | 3 +
234 files changed, 1473 insertions(+), 693 deletions(-)
rename drivers/staging/mt7621-dma/{mtk-hsdma.c => hsdma-mt7621.c} (99%)
--
2.25.1

[PATCH kernel-4.19 01/10] ext4: Fix not report exception message when mount with errors=continue
by Yang Yingliang 18 Mar '21
From: Ye Bin <yebin10(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: 50614
CVE: NA
-----------------------------------------------
Fixes: 24d1ffda34be ("ext4: don't remount read-only with errors=continue on reboot")
Signed-off-by: Ye Bin <yebin10(a)huawei.com>
Reviewed-by: zhangyi (F) <yi.zhang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/ext4/super.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 15f8aeda9ee7f..18870ae874ab6 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -509,9 +509,12 @@ static void ext4_handle_error(struct super_block *sb)
if (test_opt(sb, WARN_ON_ERROR))
WARN_ON_ONCE(1);
- if (sb_rdonly(sb) || test_opt(sb, ERRORS_CONT))
+ if (sb_rdonly(sb))
return;
+ if (test_opt(sb, ERRORS_CONT))
+ goto out;
+
EXT4_SB(sb)->s_mount_flags |= EXT4_MF_FS_ABORTED;
if (journal)
jbd2_journal_abort(journal, -EIO);
@@ -533,6 +536,7 @@ static void ext4_handle_error(struct super_block *sb)
sb->s_id);
}
+out:
ext4_netlink_send_info(sb, 1);
}
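For readers skimming the diff: errors=continue now skips the journal-abort path but still reaches the notification at the end of the function. A toy userspace model of the resulting control flow (illustration only, not the kernel source; ext4_netlink_send_info() is the openEuler-specific notifier the real function ends with):
#include <stdbool.h>
#include <stdio.h>

static bool sb_rdonly_flag;    /* stands in for sb_rdonly(sb) */
static bool errors_cont_flag;  /* stands in for test_opt(sb, ERRORS_CONT) */

static void handle_error(void)
{
	if (sb_rdonly_flag)
		return;            /* already read-only: nothing to do */
	if (errors_cont_flag)
		goto out;          /* keep running, but still report */
	printf("aborting journal / remounting read-only\n");
out:
	printf("exception message sent\n"); /* reached for errors=continue too */
}

int main(void)
{
	errors_cont_flag = true;
	handle_error();            /* prints only the exception message */
	return 0;
}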
--
2.25.1
From: Zhang Ming <154842638(a)qq.com>
openEuler inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I3BPPX
CVE: NA
-----------------------------------------------------------
At present the default branch of the switch is never taken, but future extensions may reach it, and the schema buffer allocated earlier would then leak.
Signed-off-by: Zhang Ming <154842638(a)qq.com>
Reported-by: Wang ShaoBo <bobo.shaobowang(a)huawei.com>
Suggested-by: Jian Cheng <cj.chengjian(a)huawei.com>
---
arch/arm64/kernel/mpam/mpam_ctrlmon.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm64/kernel/mpam/mpam_ctrlmon.c b/arch/arm64/kernel/mpam/mpam_ctrlmon.c
index aae585e7d7df..a4a298a455e0 100644
--- a/arch/arm64/kernel/mpam/mpam_ctrlmon.c
+++ b/arch/arm64/kernel/mpam/mpam_ctrlmon.c
@@ -78,6 +78,7 @@ static int add_schema(enum resctrl_conf_type t, struct resctrl_resource *r)
suffix = "";
break;
default:
+ kfree(s);
return -EINVAL;
}
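The underlying pattern is a classic error-path leak: a buffer is allocated before the switch, and one branch returns without freeing it. A minimal self-contained sketch of the same shape (plain C; the struct and enum names are simplified stand-ins for the mpam/resctrl types, not the kernel definitions):
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

struct schema { const char *suffix; };

enum conf_type { CONF_ANY };       /* stand-in for resctrl_conf_type */

static int add_schema(enum conf_type t)
{
	struct schema *s = calloc(1, sizeof(*s));

	if (!s)
		return -ENOMEM;

	switch (t) {
	case CONF_ANY:
		s->suffix = "";
		break;
	default:
		free(s);               /* without this, 's' leaks on -EINVAL */
		return -EINVAL;
	}

	printf("schema registered, suffix=\"%s\"\n", s->suffix);
	free(s);                       /* simplified: the kernel keeps it on a list */
	return 0;
}

int main(void)
{
	return add_schema(CONF_ANY) ? 1 : 0;
}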
--
2.17.1
From: Zhang Ming <154842638(a)qq.com>
openEuler inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I3BPPX
CVE: NA
-----------------------------------------------------------
At present the default branch of the switch is never taken, but future extensions may reach it, and the schema buffer allocated earlier would then leak.
Signed-off-by: Zhang Ming <154842638(a)qq.com>
Reported-by: Wang ShaoBo <bobo.shaobowang(a)huawei.com>
Suggested-by: Jian Cheng <cj.chengjian(a)huawei.com>
---
arch/arm64/kernel/mpam/mpam_ctrlmon.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm64/kernel/mpam/mpam_ctrlmon.c b/arch/arm64/kernel/mpam/mpam_ctrlmon.c
index 4acf9234c3a5..b1d32d432556 100644
--- a/arch/arm64/kernel/mpam/mpam_ctrlmon.c
+++ b/arch/arm64/kernel/mpam/mpam_ctrlmon.c
@@ -74,6 +74,7 @@ static int add_schema(enum resctrl_conf_type t, struct resctrl_resource *r)
suffix = "";
break;
default:
+ kfree(s);
return -EINVAL;
}
--
2.17.1

[PATCH v1 openEuler-21.03]arch/arm64/kernel/mpam/mpam_ctrlmon.c: fix a bug of memory leakage
17 Mar '21
From: Zhang Ming <154842638(a)qq.com>
openEuler inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I3BPPX
CVE: NA
-----------------------------------------------------------
At present the default branch of the switch is never taken, but future extensions may reach it, and the schema buffer allocated earlier would then leak.
Signed-off-by: Zhang Ming <154842638(a)qq.com>
Reported-by: Wang ShaoBo <bobo.shaobowang(a)huawei.com>
Suggested-by: Jian Cheng <cj.chengjian(a)huawei.com>
---
arch/arm64/kernel/mpam/mpam_ctrlmon.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm64/kernel/mpam/mpam_ctrlmon.c b/arch/arm64/kernel/mpam/mpam_ctrlmon.c
index 4acf9234c3a5..b1d32d432556 100644
--- a/arch/arm64/kernel/mpam/mpam_ctrlmon.c
+++ b/arch/arm64/kernel/mpam/mpam_ctrlmon.c
@@ -74,6 +74,7 @@ static int add_schema(enum resctrl_conf_type t, struct resctrl_resource *r)
suffix = "";
break;
default:
+ kfree(s);
return -EINVAL;
}
--
2.17.1
[PATCH kernel-4.19]arch/arm64/kernel/mpam/mpam_ctrlmon.c: fix a bug of memory leakage
From: Zhang Ming <154842638(a)qq.com>
openEuler inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I3BPPX
CVE: NA
-----------------------------------------------------------
At present the default branch of the switch is never taken, but future extensions may reach it, and the schema buffer allocated earlier would then leak.
Signed-off-by: Zhang Ming <154842638(a)qq.com>
Reported-by: Wang ShaoBo <bobo.shaobowang(a)huawei.com>
Suggested-by: Jian Cheng <cj.chengjian(a)huawei.com>
---
arch/arm64/kernel/mpam/mpam_ctrlmon.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm64/kernel/mpam/mpam_ctrlmon.c b/arch/arm64/kernel/mpam/mpam_ctrlmon.c
index aae585e7d7df..a4a298a455e0 100644
--- a/arch/arm64/kernel/mpam/mpam_ctrlmon.c
+++ b/arch/arm64/kernel/mpam/mpam_ctrlmon.c
@@ -78,6 +78,7 @@ static int add_schema(enum resctrl_conf_type t, struct resctrl_resource *r)
suffix = "";
break;
default:
+ kfree(s);
return -EINVAL;
}
--
2.17.1
CVE for openEuler 20.09
Chris Leech (2):
scsi: iscsi: Ensure sysfs attributes are limited to PAGE_SIZE
scsi: iscsi: Verify lengths on passthrough PDUs
Jan Beulich (10):
Xen/x86: don't bail early from clear_foreign_p2m_mapping()
Xen/x86: also check kernel mapping in set_foreign_p2m_mapping()
Xen/gntdev: correct dev_bus_addr handling in gntdev_map_grant_pages()
Xen/gntdev: correct error checking in gntdev_map_grant_pages()
xen-blkback: don't "handle" error by BUG()
xen-netback: don't "handle" error by BUG()
xen-scsiback: don't "handle" error by BUG()
xen-blkback: fix error handling in xen_blkbk_map()
Xen/gnttab: handle p2m update errors on a per-slot basis
xen-netback: respect gnttab_map_refs()'s return value
Joe Perches (1):
sysfs: Add sysfs_emit and sysfs_emit_at to format sysfs output
Lee Duncan (1):
scsi: iscsi: Restrict sessions and handles to admin capabilities
Miklos Szeredi (6):
ovl: pass correct flags for opening real directory
ovl: switch to mounter creds in readdir
ovl: verify permissions in ovl_path_open()
ovl: call secutiry hook in ovl_real_ioctl()
ovl: check permission to open real file
ovl: do not fail because of O_NOATIME
Stefano Stabellini (1):
xen/arm: don't ignore return errors from set_phys_to_machine
Wenchao Hao (1):
virtio-blk: modernize sysfs attribute creation
Yang Yingliang (1):
sysfs: fix kabi broken when add sysfs_emit and sysfs_emit_at
Documentation/filesystems/sysfs.txt | 8 +-
arch/arm/xen/p2m.c | 33 ++++++-
arch/x86/xen/p2m.c | 59 ++++++++---
drivers/block/virtio_blk.c | 67 +++++++------
drivers/block/xen-blkback/blkback.c | 30 +++---
drivers/net/xen-netback/netback.c | 10 +-
drivers/scsi/libiscsi.c | 148 ++++++++++++++--------------
drivers/scsi/scsi_transport_iscsi.c | 38 +++++--
drivers/xen/gntdev.c | 37 +++----
drivers/xen/xen-scsiback.c | 4 +-
fs/overlayfs/file.c | 28 ++++--
fs/overlayfs/readdir.c | 37 +++++--
fs/overlayfs/util.c | 27 ++++-
fs/sysfs/file.c | 55 +++++++++++
include/linux/sysfs.h | 16 +++
include/xen/grant_table.h | 1 +
security/security.c | 1 +
17 files changed, 419 insertions(+), 180 deletions(-)
--
2.25.1
CVE-2021-27365
CVE-2021-27363
CVE-2021-27364
Chris Leech (2):
scsi: iscsi: Ensure sysfs attributes are limited to PAGE_SIZE
scsi: iscsi: Verify lengths on passthrough PDUs
Christoph Hellwig (1):
mm/swapfile.c: fix a comment in sys_swapon()
Darrick J. Wong (2):
mm: set S_SWAPFILE on blockdev swap devices
vfs: don't allow writes to swap files
Domenico Andreoli (1):
hibernate: Allow uswsusp to write to swap
Jan Beulich (10):
Xen/x86: don't bail early from clear_foreign_p2m_mapping()
Xen/x86: also check kernel mapping in set_foreign_p2m_mapping()
Xen/gntdev: correct dev_bus_addr handling in gntdev_map_grant_pages()
Xen/gntdev: correct error checking in gntdev_map_grant_pages()
xen-blkback: don't "handle" error by BUG()
xen-netback: don't "handle" error by BUG()
xen-scsiback: don't "handle" error by BUG()
xen-blkback: fix error handling in xen_blkbk_map()
Xen/gnttab: handle p2m update errors on a per-slot basis
xen-netback: respect gnttab_map_refs()'s return value
Joe Perches (1):
sysfs: Add sysfs_emit and sysfs_emit_at to format sysfs output
Lee Duncan (1):
scsi: iscsi: Restrict sessions and handles to admin capabilities
Miaohe Lin (1):
mm/swapfile.c: fix potential memory leak in sys_swapon
Miklos Szeredi (6):
ovl: pass correct flags for opening real directory
ovl: switch to mounter creds in readdir
ovl: verify permissions in ovl_path_open()
ovl: call secutiry hook in ovl_real_ioctl()
ovl: check permission to open real file
ovl: do not fail because of O_NOATIME
Naohiro Aota (1):
mm/swapfile.c: move inode_lock out of claim_swapfile
Stefano Stabellini (1):
xen/arm: don't ignore return errors from set_phys_to_machine
Wenchao Hao (2):
nvme: register ns_id attributes as default sysfs groups
virtio-blk: modernize sysfs attribute creation
Yang Yingliang (1):
sysfs: fix kabi broken when add sysfs_emit and sysfs_emit_at
Ye Bin (1):
ext4: Fix not report exception message when mount with errors=continue
zhangyi (F) (1):
block_dump: remove block_dump feature when dirting inode
Documentation/filesystems/sysfs.txt | 8 +-
arch/arm/xen/p2m.c | 33 ++++++-
arch/x86/xen/p2m.c | 59 ++++++++---
drivers/block/virtio_blk.c | 67 +++++++------
drivers/block/xen-blkback/blkback.c | 30 +++---
drivers/net/xen-netback/netback.c | 10 +-
drivers/nvme/host/core.c | 20 ++--
drivers/nvme/host/lightnvm.c | 105 +++++++++-----------
drivers/nvme/host/multipath.c | 11 +--
drivers/nvme/host/nvme.h | 10 +-
drivers/scsi/libiscsi.c | 148 ++++++++++++++--------------
drivers/scsi/scsi_transport_iscsi.c | 38 +++++--
drivers/xen/gntdev.c | 37 +++----
drivers/xen/xen-scsiback.c | 4 +-
fs/block_dev.c | 5 +
fs/ext4/super.c | 6 +-
fs/fs-writeback.c | 25 -----
fs/overlayfs/file.c | 28 ++++--
fs/overlayfs/readdir.c | 37 +++++--
fs/overlayfs/util.c | 27 ++++-
fs/sysfs/file.c | 55 +++++++++++
include/linux/fs.h | 11 +++
include/linux/sysfs.h | 16 +++
include/xen/grant_table.h | 1 +
mm/filemap.c | 3 +
mm/memory.c | 4 +
mm/mmap.c | 8 +-
mm/swapfile.c | 72 ++++++++------
security/security.c | 1 +
29 files changed, 552 insertions(+), 327 deletions(-)
--
2.25.1

12 Mar '21
From: Sang Yan <sangyan(a)huawei.com>
hulk inclusion
category: feature
bugzilla: 48159
CVE: N/A
------------------------------
A CPU in the PARK state could fail to come up in this case:
CPU 0                        | CPU 1
boot_secondary(cpu 1)        |
--> write_park_exit(cpu 1)   |
                             | cpu coming up from PARK
                             | ...
uninstall_cpu_park()         |
--> memset park text to 0    |
                             | ...
                             | Exception in memory !!
wait for cpu up              |
CPU 1, coming up from PARK, may trap into an exception while CPU 0
clears CPU 1's park text memory.
The uninstall_cpu_park() call should therefore happen after waiting
for the CPU to come up.
Signed-off-by: Sang Yan <sangyan(a)huawei.com>
---
arch/arm64/kernel/smp.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index d7b750a..fb6007d 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -300,15 +300,15 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
return ret;
}
-#ifdef CONFIG_ARM64_CPU_PARK
- uninstall_cpu_park(cpu);
-#endif
/*
* CPU was successfully started, wait for it to come online or
* time out.
*/
wait_for_completion_timeout(&cpu_running,
msecs_to_jiffies(5000));
+#ifdef CONFIG_ARM64_CPU_PARK
+ uninstall_cpu_park(cpu);
+#endif
if (cpu_online(cpu))
return 0;
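The fix is purely an ordering change: do not reclaim the park text until the secondary CPU has signalled that it is online. A toy pthread model of the corrected ordering (illustration only, build with -lpthread; the names loosely mirror arch/arm64/kernel/smp.c but this is not the kernel source):
#include <pthread.h>
#include <stdio.h>
#include <string.h>

/* The "secondary CPU" runs code out of park_text; the "boot CPU"
 * must not zero it until the secondary has reported that it is up. */
static char park_text[64] = "park exit trampoline";
static pthread_barrier_t cpu_running;

static void *secondary_cpu(void *arg)
{
	/* Still executing from park_text here... */
	printf("secondary: running '%s'\n", park_text);
	/* ...done with the trampoline, report online. */
	pthread_barrier_wait(&cpu_running);
	return NULL;
}

int main(void)
{
	pthread_t cpu1;

	pthread_barrier_init(&cpu_running, NULL, 2);
	pthread_create(&cpu1, NULL, secondary_cpu, NULL);

	/* The fix: wait for the CPU to come up *before* ... */
	pthread_barrier_wait(&cpu_running);
	/* ... zeroing the park text it was executing from. */
	memset(park_text, 0, sizeof(park_text));

	pthread_join(cpu1, NULL);
	return 0;
}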
--
2.9.5

12 Mar '21
From: Sang Yan <sangyan(a)huawei.com>
hulk inclusion
category: feature
bugzilla: 48159
CVE: N/A
------------------------------
A CPU in the PARK state could fail to come up in this case:
CPU 0                        | CPU 1
boot_secondary(cpu 1)        |
--> write_park_exit(cpu 1)   |
                             | cpu coming up from PARK
                             | ...
uninstall_cpu_park()         |
--> memset park text to 0    |
                             | ...
                             | Exception in memory !!
wait for cpu up              |
CPU 1, coming up from PARK, may trap into an exception while CPU 0
clears CPU 1's park text memory.
The uninstall_cpu_park() call should therefore happen after waiting
for the CPU to come up.
Signed-off-by: Sang Yan <sangyan(a)huawei.com>
---
arch/arm64/kernel/smp.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index d7b750a..fb6007d 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -300,15 +300,15 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
return ret;
}
-#ifdef CONFIG_ARM64_CPU_PARK
- uninstall_cpu_park(cpu);
-#endif
/*
* CPU was successfully started, wait for it to come online or
* time out.
*/
wait_for_completion_timeout(&cpu_running,
msecs_to_jiffies(5000));
+#ifdef CONFIG_ARM64_CPU_PARK
+ uninstall_cpu_park(cpu);
+#endif
if (cpu_online(cpu))
return 0;
--
2.9.5

[PATCH openEuler-21.03 v2] park: Reserve park mem before kexec reserved
by sangyan@huawei.com 12 Mar '21
From: Sang Yan <sangyan(a)huawei.com>
hulk inclusion
category: feature
bugzilla: 48159
CVE: N/A
------------------------------
reserve_crashkernel() or reserve_quick_kexec() may find a suitable
memory region and reserve it; the address of that region is not
fixed.
As a result, the cpu park memory reservation can fail when its
specified address has already been taken by crashkernel or quick
kexec.
So, move reserve_park_mem() before reserve_crashkernel() and
reserve_quick_kexec().
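The conflict is easiest to see with a toy model of the two reservation styles: a fixed-address reservation can only ever go in one place, while a by-size reservation floats and may land on top of it (self-contained illustration; the addresses and the single-region allocator are made up, this is not the memblock API):
#include <stdbool.h>
#include <stdio.h>

static unsigned long long rsv_base, rsv_size;  /* one-region toy allocator */

static bool reserve_fixed(unsigned long long base, unsigned long long size)
{
	if (rsv_size && base < rsv_base + rsv_size && rsv_base < base + size)
		return false;          /* overlaps an earlier reservation */
	rsv_base = base;
	rsv_size = size;
	return true;
}

static void reserve_floating(unsigned long long size)
{
	/* A floating reservation picks any suitable spot; assume it
	 * happens to choose the address cpu park was configured with. */
	reserve_fixed(0x1a0000000ULL, size);
}

int main(void)
{
	reserve_floating(0x20000000ULL);           /* crashkernel first ... */
	if (!reserve_fixed(0x1a0000000ULL, 0x100000ULL))
		printf("park reservation failed: address already taken\n");
	return 0;
}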
Signed-off-by: Sang Yan <sangyan(a)huawei.com>
Reviewed-by: Jing Xiangfeng <jingxiangfeng(a)huawei.com>
---
arch/arm64/mm/init.c | 17 +++++++++++++----
1 file changed, 13 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index b343744..dbcc801 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -497,16 +497,25 @@ void __init arm64_memblock_init(void)
else
arm64_dma32_phys_limit = PHYS_MASK + 1;
+ /*
+ * Reserve park memory before crashkernel and quick kexec.
+ * Because park memory must be specified by address, but
+ * crashkernel and quickkexec may be specified by memory length,
+ * then find one sutiable memory region to reserve.
+ *
+ * So reserve park memory firstly is better, but it may cause
+ * crashkernel or quickkexec reserving failed.
+ */
+#ifdef CONFIG_ARM64_CPU_PARK
+ reserve_park_mem();
+#endif
+
reserve_crashkernel();
#ifdef CONFIG_QUICK_KEXEC
reserve_quick_kexec();
#endif
-#ifdef CONFIG_ARM64_CPU_PARK
- reserve_park_mem();
-#endif
-
reserve_pin_memory_res();
reserve_elfcorehdr();
--
2.9.5

12 Mar '21
From: ZhuLing <zhuling8(a)huawei.com>
hulk inclusion
category: feature
bugzilla: 48159
CVE: NA
Register pmem on arm64:
Use memmap (memmap=nn[KMG]!ss[KMG]) to reserve memory and the
e820 code (drivers/nvdimm/e820.c) to register persistent memory
on arm64. When the kernel restarts or is updated, the data in
PMEM is not lost and can be loaded faster. This is a generic
feature.
drivers/nvdimm/e820.c:
This file scans "iomem_resource" and takes advantage of the
nvdimm resource discovery mechanism by registering a resource
named "Persistent Memory (legacy)"; it does not depend on the
architecture.
We will push the feature to the upstream kernel community and
discuss renaming the file, because people have the mistaken
notion that e820.c depends on x86.
To use this feature:
1. Reserve memory: add memmap to the kernel command line in
grub.cfg, memmap=nn[KMG]!ss[KMG], e.g. memmap=100K!0x1a0000000.
2. Load nd_e820.ko: modprobe nd_e820.
3. Check for the pmem device in /dev, e.g. /dev/pmem0.
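To make the memmap=nn[KMG]!ss[KMG] syntax concrete, here is a small self-contained parser in the same spirit as the parse_memmap_one() added below (illustration only; the kernel uses memparse(), and the example string reuses the 100K!0x1a0000000 value from step 1):
#include <stdio.h>
#include <stdlib.h>

/* Parse "<size>[KMG]!<start>": size with optional suffix, '!', address. */
static unsigned long long parse_size(const char *p, char **end)
{
	unsigned long long v = strtoull(p, end, 0);

	switch (**end) {
	case 'K': v <<= 10; (*end)++; break;
	case 'M': v <<= 20; (*end)++; break;
	case 'G': v <<= 30; (*end)++; break;
	}
	return v;
}

int main(void)
{
	const char *arg = "100K!0x1a0000000";
	char *p;
	unsigned long long size = parse_size(arg, &p);

	if (*p++ != '!')
		return 1;
	unsigned long long start = strtoull(p, &p, 0);

	printf("pmem reserved: 0x%016llx - 0x%016llx (%llu KB)\n",
	       start, start + size, size >> 10);
	return 0;
}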
Signed-off-by: ZhuLing <zhuling8(a)huawei.com>
Signed-off-by: Sang Yan <sangyan(a)huawei.com>
Acked-by: Hanjun Guo <guohanjun(a)huawei.com>
---
arch/arm64/Kconfig | 21 +++++++++
arch/arm64/kernel/Makefile | 1 +
arch/arm64/kernel/pmem.c | 35 ++++++++++++++
arch/arm64/kernel/setup.c | 10 ++++
arch/arm64/mm/init.c | 97 ++++++++++++++++++++++++++++++++++++++
drivers/nvdimm/Kconfig | 5 ++
drivers/nvdimm/Makefile | 2 +-
7 files changed, 170 insertions(+), 1 deletion(-)
create mode 100644 arch/arm64/kernel/pmem.c
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index c451137ab..326f26d40 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -360,6 +360,27 @@ config ARM64_CPU_PARK
config ARCH_HAS_CPU_RELAX
def_bool y
+config ARM64_PMEM_RESERVE
+ bool "Reserve memory for persistent storage"
+ default n
+ help
+ Use memmap=nn[KMG]!ss[KMG](memmap=100K!0x1a0000000) reserve
+ memory for persistent storage.
+
+ Say y here to enable this feature.
+
+config ARM64_PMEM_LEGACY_DEVICE
+ bool "Create persistent storage"
+ depends on BLK_DEV
+ depends on LIBNVDIMM
+ select ARM64_PMEM_RESERVE
+ help
+ Use reserved memory for persistent storage when the kernel
+ restart or update. the data in PMEM will not be lost and
+ can be loaded faster.
+
+ Say y if unsure.
+
source "arch/arm64/Kconfig.platforms"
menu "Kernel Features"
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 967cb3c6d..be996f3c1 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -67,6 +67,7 @@ obj-$(CONFIG_ARM64_PTR_AUTH) += pointer_auth.o
obj-$(CONFIG_SHADOW_CALL_STACK) += scs.o
obj-$(CONFIG_ARM64_MTE) += mte.o
obj-$(CONFIG_MPAM) += mpam/
+obj-$(CONFIG_ARM64_PMEM_LEGACY_DEVICE) += pmem.o
obj-y += vdso/ probes/
obj-$(CONFIG_COMPAT_VDSO) += vdso32/
diff --git a/arch/arm64/kernel/pmem.c b/arch/arm64/kernel/pmem.c
new file mode 100644
index 000000000..16eaf706f
--- /dev/null
+++ b/arch/arm64/kernel/pmem.c
@@ -0,0 +1,35 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright(c) 2021 Huawei Technologies Co., Ltd
+ *
+ * Derived from x86 and arm64 implement PMEM.
+ */
+#include <linux/platform_device.h>
+#include <linux/init.h>
+#include <linux/ioport.h>
+#include <linux/module.h>
+
+static int found(struct resource *res, void *data)
+{
+ return 1;
+}
+
+static int __init register_e820_pmem(void)
+{
+ struct platform_device *pdev;
+ int rc;
+
+ rc = walk_iomem_res_desc(IORES_DESC_PERSISTENT_MEMORY_LEGACY,
+ IORESOURCE_MEM, 0, -1, NULL, found);
+ if (rc <= 0)
+ return 0;
+
+ /*
+ * See drivers/nvdimm/e820.c for the implementation, this is
+ * simply here to trigger the module to load on demand.
+ */
+ pdev = platform_device_alloc("e820_pmem", -1);
+
+ return platform_device_add(pdev);
+}
+device_initcall(register_e820_pmem);
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 5e282d31a..84c71c88d 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -57,6 +57,10 @@
static int num_standard_resources;
static struct resource *standard_resources;
+#ifdef CONFIG_ARM64_PMEM_RESERVE
+extern struct resource pmem_res;
+#endif
+
phys_addr_t __fdt_pointer __initdata;
/*
@@ -270,6 +274,12 @@ static void __init request_standard_resources(void)
request_resource(res, &pin_memory_resource);
#endif
}
+
+#ifdef CONFIG_ARM64_PMEM_RESERVE
+ if (pmem_res.end && pmem_res.start)
+ request_resource(&iomem_resource, &pmem_res);
+#endif
+
}
static int __init reserve_memblock_reserved_regions(void)
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index b3437440d..f22faea1a 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -66,6 +66,18 @@ EXPORT_SYMBOL(memstart_addr);
phys_addr_t arm64_dma_phys_limit __ro_after_init;
phys_addr_t arm64_dma32_phys_limit __ro_after_init;
+static unsigned long long pmem_size, pmem_start;
+
+#ifdef CONFIG_ARM64_PMEM_RESERVE
+struct resource pmem_res = {
+ .name = "Persistent Memory (legacy)",
+ .start = 0,
+ .end = 0,
+ .flags = IORESOURCE_MEM,
+ .desc = IORES_DESC_PERSISTENT_MEMORY_LEGACY
+};
+#endif
+
#ifndef CONFIG_KEXEC_CORE
static void __init reserve_crashkernel(void)
{
@@ -378,6 +390,87 @@ static int __init reserve_park_mem(void)
}
#endif
+static int __init is_mem_valid(unsigned long long mem_size, unsigned long long mem_start)
+{
+ if (!memblock_is_region_memory(mem_start, mem_size)) {
+ pr_warn("cannot reserve mem: region is not memory!\n");
+ return -EINVAL;
+ }
+
+ if (memblock_is_region_reserved(mem_start, mem_size)) {
+ pr_warn("cannot reserve mem: region overlaps reserved memory!\n");
+ return -EINVAL;
+ }
+
+ if (!IS_ALIGNED(mem_start, SZ_2M)) {
+ pr_warn("cannot reserve mem: base address is not 2MB aligned!\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int __init parse_memmap_one(char *p)
+{
+ char *oldp;
+ phys_addr_t start_at, mem_size;
+
+ if (!p)
+ return -EINVAL;
+
+ oldp = p;
+ mem_size = memparse(p, &p);
+ if (p == oldp)
+ return -EINVAL;
+
+ if (!mem_size)
+ return -EINVAL;
+
+ mem_size = PAGE_ALIGN(mem_size);
+
+ if (*p == '!') {
+ start_at = memparse(p+1, &p);
+
+ if (is_mem_valid(mem_size, start_at) != 0)
+ return -EINVAL;
+
+ pr_info("pmem reserved: 0x%016llx - 0x%016llx (%lld MB)\n",
+ start_at, start_at + mem_size, mem_size >> 20);
+ pmem_start = start_at;
+ pmem_size = mem_size;
+ } else
+ pr_info("Unrecognized memmap option, please check the parameter.\n");
+
+ return *p == '\0' ? 0 : -EINVAL;
+}
+
+static int __init parse_memmap_opt(char *str)
+{
+ while (str) {
+ char *k = strchr(str, ',');
+
+ if (k)
+ *k++ = 0;
+
+ parse_memmap_one(str);
+ str = k;
+ }
+
+ return 0;
+}
+early_param("memmap", parse_memmap_opt);
+
+#ifdef CONFIG_ARM64_PMEM_RESERVE
+static void __init reserve_pmem(void)
+{
+ memblock_remove(pmem_start, pmem_size);
+ pr_info("pmem reserved: 0x%016llx - 0x%016llx (%lld MB)\n",
+ pmem_start, pmem_start + pmem_size, pmem_size >> 20);
+ pmem_res.start = pmem_start;
+ pmem_res.end = pmem_start + pmem_size - 1;
+}
+#endif
+
void __init arm64_memblock_init(void)
{
const s64 linear_region_size = BIT(vabits_actual - 1);
@@ -511,6 +604,10 @@ void __init arm64_memblock_init(void)
reserve_elfcorehdr();
+#ifdef CONFIG_ARM64_PMEM_RESERVE
+ reserve_pmem();
+#endif
+
high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
dma_contiguous_reserve(arm64_dma32_phys_limit);
diff --git a/drivers/nvdimm/Kconfig b/drivers/nvdimm/Kconfig
index b7d1eb38b..ce4de7526 100644
--- a/drivers/nvdimm/Kconfig
+++ b/drivers/nvdimm/Kconfig
@@ -132,3 +132,8 @@ config NVDIMM_TEST_BUILD
infrastructure.
endif
+
+config PMEM_LEGACY
+ tristate "Pmem_legacy"
+ select X86_PMEM_LEGACY if X86
+ select ARM64_PMEM_LEGACY_DEVICE if ARM64
diff --git a/drivers/nvdimm/Makefile b/drivers/nvdimm/Makefile
index 29203f3d3..6f8dc9242 100644
--- a/drivers/nvdimm/Makefile
+++ b/drivers/nvdimm/Makefile
@@ -3,7 +3,7 @@ obj-$(CONFIG_LIBNVDIMM) += libnvdimm.o
obj-$(CONFIG_BLK_DEV_PMEM) += nd_pmem.o
obj-$(CONFIG_ND_BTT) += nd_btt.o
obj-$(CONFIG_ND_BLK) += nd_blk.o
-obj-$(CONFIG_X86_PMEM_LEGACY) += nd_e820.o
+obj-$(CONFIG_PMEM_LEGACY) += nd_e820.o
obj-$(CONFIG_OF_PMEM) += of_pmem.o
obj-$(CONFIG_VIRTIO_PMEM) += virtio_pmem.o nd_virtio.o
--
2.19.1
raspberrypi inclusion
category: feature
bugzilla: 50432
------------------------------
This patch adjusts the following arch/arm Raspberry Pi patches so
that they do not affect non-Raspberry Pi platforms, using the
dedicated config option CONFIG_OPENEULER_RASPBERRYPI to
distinguish them:
d5c13edbd8 Improve __copy_to_user and __copy_from_user performance
97145d2a6a Update vfpmodule.c
bffc462cbd Main bcm2708/bcm2709 linux port
588cfce788 cache: export clean and invalidate
41cd350cca ARM: Activate FIQs to avoid __irq_startup warnings
90607c7aaf ARM: proc-v7: Force misalignment of early stmia
ee46d0fadf reboot: Use power off rather than busy spinning when
halt is requested
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
arch/arm/include/asm/string.h | 2 ++
arch/arm/include/asm/uaccess.h | 2 ++
arch/arm/kernel/fiq.c | 4 ++++
arch/arm/kernel/reboot.c | 6 ++++++
arch/arm/lib/Makefile | 15 +++++++++++++++
arch/arm/lib/copy_from_user.S | 6 ++++++
arch/arm/lib/uaccess_with_memcpy.c | 22 +++++++++++++++++++++-
arch/arm/mm/cache-v6.S | 8 ++++++++
arch/arm/mm/cache-v7.S | 8 ++++++++
arch/arm/mm/proc-v6.S | 8 ++++++++
arch/arm/mm/proc-v7.S | 8 ++++++++
arch/arm/vfp/vfpmodule.c | 26 ++++++++++++++++++++++++++
12 files changed, 114 insertions(+), 1 deletion(-)
diff --git a/arch/arm/include/asm/string.h b/arch/arm/include/asm/string.h
index c22d5869e7b6..3c4ae6b3c3a6 100644
--- a/arch/arm/include/asm/string.h
+++ b/arch/arm/include/asm/string.h
@@ -45,10 +45,12 @@ static inline void *memset64(uint64_t *p, uint64_t v, __kernel_size_t n)
return __memset64(p, v, n * 8, v >> 32);
}
+#ifdef CONFIG_OPENEULER_RASPBERRYPI
#ifdef CONFIG_BCM2835_FAST_MEMCPY
#define __HAVE_ARCH_MEMCMP
extern int memcmp(const void *, const void *, size_t);
#endif
+#endif
/*
* For files that are not instrumented (e.g. mm/slub.c) we
diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
index f24d3fabccd6..0d8cba7a9dde 100644
--- a/arch/arm/include/asm/uaccess.h
+++ b/arch/arm/include/asm/uaccess.h
@@ -516,8 +516,10 @@ do { \
extern unsigned long __must_check
arm_copy_from_user(void *to, const void __user *from, unsigned long n);
+#ifdef CONFIG_OPENEULER_RASPBERRYPI
extern unsigned long __must_check
__copy_from_user_std(void *to, const void __user *from, unsigned long n);
+#endif
static inline unsigned long __must_check
raw_copy_from_user(void *to, const void __user *from, unsigned long n)
diff --git a/arch/arm/kernel/fiq.c b/arch/arm/kernel/fiq.c
index c3fe7d3cf482..8116f1e52e1b 100644
--- a/arch/arm/kernel/fiq.c
+++ b/arch/arm/kernel/fiq.c
@@ -56,7 +56,9 @@
static unsigned long dfl_fiq_insn;
static struct pt_regs dfl_fiq_regs;
+#ifdef CONFIG_OPENEULER_RASPBERRYPI
extern int irq_activate(struct irq_desc *desc);
+#endif
/* Default reacquire function
* - we always relinquish FIQ control
@@ -142,8 +144,10 @@ static int fiq_start;
void enable_fiq(int fiq)
{
+#ifdef CONFIG_OPENEULER_RASPBERRYPI
struct irq_desc *desc = irq_to_desc(fiq + fiq_start);
irq_activate(desc);
+#endif
enable_irq(fiq + fiq_start);
}
diff --git a/arch/arm/kernel/reboot.c b/arch/arm/kernel/reboot.c
index 63373adab475..ffb170568dcc 100644
--- a/arch/arm/kernel/reboot.c
+++ b/arch/arm/kernel/reboot.c
@@ -102,7 +102,13 @@ void machine_shutdown(void)
*/
void machine_halt(void)
{
+#ifdef CONFIG_OPENEULER_RASPBERRYPI
machine_power_off();
+#else /* !CONFIG_OPENEULER_RASPBERRYPI */
+ local_irq_disable();
+ smp_send_stop();
+ while (1);
+#endif
}
/*
diff --git a/arch/arm/lib/Makefile b/arch/arm/lib/Makefile
index 8271cde92dec..63820a487d4b 100644
--- a/arch/arm/lib/Makefile
+++ b/arch/arm/lib/Makefile
@@ -5,6 +5,7 @@
# Copyright (C) 1995-2000 Russell King
#
+ifeq ($(CONFIG_OPENEULER_RASPBERRYPI),y)
lib-y := changebit.o csumipv6.o csumpartial.o \
csumpartialcopy.o csumpartialcopyuser.o clearbit.o \
delay.o delay-loop.o findbit.o memchr.o \
@@ -15,6 +16,18 @@ lib-y := changebit.o csumipv6.o csumpartial.o \
ucmpdi2.o lib1funcs.o div64.o \
io-readsb.o io-writesb.o io-readsl.o io-writesl.o \
call_with_stack.o bswapsdi2.o
+else
+lib-y := changebit.o csumipv6.o csumpartial.o \
+ csumpartialcopy.o csumpartialcopyuser.o clearbit.o \
+ delay.o delay-loop.o findbit.o memchr.o memcpy.o \
+ memmove.o memset.o setbit.o \
+ strchr.o strrchr.o \
+ testchangebit.o testclearbit.o testsetbit.o \
+ ashldi3.o ashrdi3.o lshrdi3.o muldi3.o \
+ ucmpdi2.o lib1funcs.o div64.o \
+ io-readsb.o io-writesb.o io-readsl.o io-writesl.o \
+ call_with_stack.o bswapsdi2.o
+endif
mmu-y := clear_user.o copy_page.o getuser.o putuser.o \
copy_from_user.o copy_to_user.o
@@ -25,6 +38,7 @@ else
lib-y += backtrace.o
endif
+ifeq ($(CONFIG_OPENEULER_RASPBERRYPI),y)
# Choose optimised implementations for Raspberry Pi
ifeq ($(CONFIG_BCM2835_FAST_MEMCPY),y)
CFLAGS_uaccess_with_memcpy.o += -DCOPY_FROM_USER_THRESHOLD=1600
@@ -34,6 +48,7 @@ ifeq ($(CONFIG_BCM2835_FAST_MEMCPY),y)
else
lib-y += memcpy.o memmove.o memset.o
endif
+endif
# using lib_ here won't override already available weak symbols
obj-$(CONFIG_UACCESS_WITH_MEMCPY) += uaccess_with_memcpy.o
diff --git a/arch/arm/lib/copy_from_user.S b/arch/arm/lib/copy_from_user.S
index ab7bf28dbec0..3f83d8b18b0d 100644
--- a/arch/arm/lib/copy_from_user.S
+++ b/arch/arm/lib/copy_from_user.S
@@ -107,8 +107,12 @@
.text
+#ifdef CONFIG_OPENEULER_RASPBERRYPI
ENTRY(__copy_from_user_std)
WEAK(arm_copy_from_user)
+#else
+ENTRY(arm_copy_from_user)
+#endif
#ifdef CONFIG_CPU_SPECTRE
get_thread_info r3
ldr r3, [r3, #TI_ADDR_LIMIT]
@@ -118,7 +122,9 @@ WEAK(arm_copy_from_user)
#include "copy_template.S"
ENDPROC(arm_copy_from_user)
+#ifdef CONFIG_OPENEULER_RASPBERRYPI
ENDPROC(__copy_from_user_std)
+#endif
.pushsection .text.fixup,"ax"
.align 0
diff --git a/arch/arm/lib/uaccess_with_memcpy.c b/arch/arm/lib/uaccess_with_memcpy.c
index b483e5713039..ab15ed7f599a 100644
--- a/arch/arm/lib/uaccess_with_memcpy.c
+++ b/arch/arm/lib/uaccess_with_memcpy.c
@@ -19,6 +19,7 @@
#include <asm/current.h>
#include <asm/page.h>
+#ifdef CONFIG_OPENEULER_RASPBERRYPI
#ifndef COPY_FROM_USER_THRESHOLD
#define COPY_FROM_USER_THRESHOLD 64
#endif
@@ -26,6 +27,7 @@
#ifndef COPY_TO_USER_THRESHOLD
#define COPY_TO_USER_THRESHOLD 64
#endif
+#endif
static int
pin_page_for_write(const void __user *_addr, pte_t **ptep, spinlock_t **ptlp)
@@ -51,7 +53,11 @@ pin_page_for_write(const void __user *_addr, pte_t **ptep, spinlock_t **ptlp)
return 0;
pmd = pmd_offset(pud, addr);
+#ifdef CONFIG_OPENEULER_RASPBERRYPI
if (unlikely(pmd_none(*pmd) || pmd_bad(*pmd)))
+#else
+ if (unlikely(pmd_none(*pmd)))
+#endif
return 0;
/*
@@ -94,6 +100,7 @@ pin_page_for_write(const void __user *_addr, pte_t **ptep, spinlock_t **ptlp)
return 1;
}
+#ifdef CONFIG_OPENEULER_RASPBERRYPI
static int
pin_page_for_read(const void __user *_addr, pte_t **ptep, spinlock_t **ptlp)
{
@@ -132,8 +139,13 @@ pin_page_for_read(const void __user *_addr, pte_t **ptep, spinlock_t **ptlp)
return 1;
}
+#endif
+#ifdef CONFIG_OPENEULER_RASPBERRYPI
unsigned long noinline
+#else
+static unsigned long noinline
+#endif
__copy_to_user_memcpy(void __user *to, const void *from, unsigned long n)
{
unsigned long ua_flags;
@@ -186,6 +198,7 @@ __copy_to_user_memcpy(void __user *to, const void *from, unsigned long n)
return n;
}
+#ifdef CONFIG_OPENEULER_RASPBERRYPI
unsigned long noinline
__copy_from_user_memcpy(void *to, const void __user *from, unsigned long n)
{
@@ -236,6 +249,7 @@ __copy_from_user_memcpy(void *to, const void __user *from, unsigned long n)
out:
return n;
}
+#endif
unsigned long
arm_copy_to_user(void __user *to, const void *from, unsigned long n)
@@ -247,7 +261,11 @@ arm_copy_to_user(void __user *to, const void *from, unsigned long n)
* With frame pointer disabled, tail call optimization kicks in
* as well making this test almost invisible.
*/
+#ifdef CONFIG_OPENEULER_RASPBERRYPI
if (n < COPY_TO_USER_THRESHOLD) {
+#else
+ if (n < 64) {
+#endif
unsigned long ua_flags = uaccess_save_and_enable();
n = __copy_to_user_std(to, from, n);
uaccess_restore(ua_flags);
@@ -258,6 +276,7 @@ arm_copy_to_user(void __user *to, const void *from, unsigned long n)
return n;
}
+#ifdef CONFIG_OPENEULER_RASPBERRYPI
unsigned long __must_check
arm_copy_from_user(void *to, const void __user *from, unsigned long n)
{
@@ -283,7 +302,8 @@ arm_copy_from_user(void *to, const void __user *from, unsigned long n)
#endif
return n;
}
-
+#endif
+
static unsigned long noinline
__clear_user_memset(void __user *addr, unsigned long n)
{
diff --git a/arch/arm/mm/cache-v6.S b/arch/arm/mm/cache-v6.S
index 868011801521..614d4ff2a760 100644
--- a/arch/arm/mm/cache-v6.S
+++ b/arch/arm/mm/cache-v6.S
@@ -198,7 +198,11 @@ ENTRY(v6_flush_kern_dcache_area)
* - start - virtual start address of region
* - end - virtual end address of region
*/
+#ifdef CONFIG_OPENEULER_RASPBERRYPI
ENTRY(v6_dma_inv_range)
+#else /* !CONFIG_OPENEULER_RASPBERRYPI */
+v6_dma_inv_range:
+#endif
#ifdef CONFIG_DMA_CACHE_RWFO
ldrb r2, [r0] @ read for ownership
strb r2, [r0] @ write for ownership
@@ -243,7 +247,11 @@ ENTRY(v6_dma_inv_range)
* - start - virtual start address of region
* - end - virtual end address of region
*/
+#ifdef CONFIG_OPENEULER_RASPBERRYPI
ENTRY(v6_dma_clean_range)
+#else /* !CONFIG_OPENEULER_RASPBERRYPI */
+v6_dma_clean_range:
+#endif
bic r0, r0, #D_CACHE_LINE_SIZE - 1
1:
#ifdef CONFIG_DMA_CACHE_RWFO
diff --git a/arch/arm/mm/cache-v7.S b/arch/arm/mm/cache-v7.S
index 536df5db66e4..58df59734e40 100644
--- a/arch/arm/mm/cache-v7.S
+++ b/arch/arm/mm/cache-v7.S
@@ -363,8 +363,12 @@ ENDPROC(v7_flush_kern_dcache_area)
* - start - virtual start address of region
* - end - virtual end address of region
*/
+#ifdef CONFIG_OPENEULER_RASPBERRYPI
ENTRY(b15_dma_inv_range)
ENTRY(v7_dma_inv_range)
+#else /* !CONFIG_OPENEULER_RASPBERRYPI */
+v7_dma_inv_range:
+#endif
dcache_line_size r2, r3
sub r3, r2, #1
tst r0, r3
@@ -394,8 +398,12 @@ ENDPROC(v7_dma_inv_range)
* - start - virtual start address of region
* - end - virtual end address of region
*/
+#ifdef CONFIG_OPENEULER_RASPBERRYPI
ENTRY(b15_dma_clean_range)
ENTRY(v7_dma_clean_range)
+#else /* !CONFIG_OPENEULER_RASPBERRYPI */
+v7_dma_clean_range:
+#endif
dcache_line_size r2, r3
sub r3, r2, #1
bic r0, r0, r3
diff --git a/arch/arm/mm/proc-v6.S b/arch/arm/mm/proc-v6.S
index b3a2fce22eac..b651eaa1ee40 100644
--- a/arch/arm/mm/proc-v6.S
+++ b/arch/arm/mm/proc-v6.S
@@ -71,6 +71,7 @@ ENDPROC(cpu_v6_reset)
* IRQs are already disabled.
*/
+#ifdef CONFIG_OPENEULER_RASPBERRYPI
/* See jira SW-5991 for details of this workaround */
ENTRY(cpu_v6_do_idle)
.align 5
@@ -84,6 +85,13 @@ ENTRY(cpu_v6_do_idle)
nop
bne 1b
ret lr
+#else /* !CONFIG_OPENEULER_RASPBERRYPI */
+ENTRY(cpu_v6_do_idle)
+ mov r1, #0
+ mcr p15, 0, r1, c7, c10, 4 @ DWB - WFI may enter a low-power mode
+ mcr p15, 0, r1, c7, c0, 4 @ wait for interrupt
+ ret lr
+#endif
ENTRY(cpu_v6_dcache_clean_area)
1: mcr p15, 0, r0, c7, c10, 1 @ clean D entry
diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
index 3e77e8982df3..ef0e00249515 100644
--- a/arch/arm/mm/proc-v7.S
+++ b/arch/arm/mm/proc-v7.S
@@ -287,8 +287,10 @@ __v7_ca17mp_setup:
mov r10, #0
1: adr r0, __v7_setup_stack_ptr
ldr r12, [r0]
+#ifdef CONFIG_OPENEULER_RASPBERRYPI
tst r12, #0x1f
addeq r12, r12, #4
+#endif
add r12, r12, r0 @ the local stack
stmia r12, {r1-r6, lr} @ v7_invalidate_l1 touches r0-r6
bl v7_invalidate_l1
@@ -476,8 +478,10 @@ __v7_setup:
adr r0, __v7_setup_stack_ptr
ldr r12, [r0]
add r12, r12, r0 @ the local stack
+#ifdef CONFIG_OPENEULER_RASPBERRYPI
tst r12, #0x1f
addeq r12, r12, #4
+#endif
stmia r12, {r1-r6, lr} @ v7_invalidate_l1 touches r0-r6
bl v7_invalidate_l1
ldmia r12, {r1-r6, lr}
@@ -561,7 +565,11 @@ ENDPROC(__v7_setup)
.bss
.align 2
__v7_setup_stack:
+#ifdef CONFIG_OPENEULER_RASPBERRYPI
.space 4 * 8 @ 7 registers + 1 spare
+#else /* !CONFIG_OPENEULER_RASPBERRYPI */
+ .space 4 * 7 @ 7 registers
+#endif
__INITDATA
diff --git a/arch/arm/vfp/vfpmodule.c b/arch/arm/vfp/vfpmodule.c
index 1e2dcf81aefa..f7238e00db7c 100644
--- a/arch/arm/vfp/vfpmodule.c
+++ b/arch/arm/vfp/vfpmodule.c
@@ -176,11 +176,16 @@ static int vfp_notifier(struct notifier_block *self, unsigned long cmd, void *v)
* case the thread migrates to a different CPU. The
* restoring is done lazily.
*/
+#ifdef CONFIG_OPENEULER_RASPBERRYPI
if ((fpexc & FPEXC_EN) && vfp_current_hw_state[cpu]) {
/* vfp_save_state oopses on VFP11 if EX bit set */
fmxr(FPEXC, fpexc & ~FPEXC_EX);
vfp_save_state(vfp_current_hw_state[cpu], fpexc);
}
+#else /* !CONFIG_OPENEULER_RASPBERRYPI */
+ if ((fpexc & FPEXC_EN) && vfp_current_hw_state[cpu])
+ vfp_save_state(vfp_current_hw_state[cpu], fpexc);
+#endif
#endif
/*
@@ -457,16 +462,22 @@ static int vfp_pm_suspend(void)
/* if vfp is on, then save state for resumption */
if (fpexc & FPEXC_EN) {
pr_debug("%s: saving vfp state\n", __func__);
+#ifdef CONFIG_OPENEULER_RASPBERRYPI
/* vfp_save_state oopses on VFP11 if EX bit set */
fmxr(FPEXC, fpexc & ~FPEXC_EX);
+#endif
vfp_save_state(&ti->vfpstate, fpexc);
/* disable, just in case */
fmxr(FPEXC, fmrx(FPEXC) & ~FPEXC_EN);
} else if (vfp_current_hw_state[ti->cpu]) {
#ifndef CONFIG_SMP
+#ifdef CONFIG_OPENEULER_RASPBERRYPI
/* vfp_save_state oopses on VFP11 if EX bit set */
fmxr(FPEXC, (fpexc & ~FPEXC_EX) | FPEXC_EN);
+#else /* !CONFIG_OPENEULER_RASPBERRYPI */
+ fmxr(FPEXC, fpexc | FPEXC_EN);
+#endif
vfp_save_state(vfp_current_hw_state[ti->cpu], fpexc);
fmxr(FPEXC, fpexc);
#endif
@@ -529,8 +540,12 @@ void vfp_sync_hwstate(struct thread_info *thread)
/*
* Save the last VFP state on this CPU.
*/
+#ifdef CONFIG_OPENEULER_RASPBERRYPI
/* vfp_save_state oopses on VFP11 if EX bit set */
fmxr(FPEXC, (fpexc & ~FPEXC_EX) | FPEXC_EN);
+#else /* !CONFIG_OPENEULER_RASPBERRYPI */
+ fmxr(FPEXC, fpexc | FPEXC_EN);
+#endif
vfp_save_state(&thread->vfpstate, fpexc | FPEXC_EN);
fmxr(FPEXC, fpexc);
}
@@ -596,7 +611,9 @@ int vfp_restore_user_hwstate(struct user_vfp *ufp, struct user_vfp_exc *ufp_exc)
struct thread_info *thread = current_thread_info();
struct vfp_hard_struct *hwstate = &thread->vfpstate.hard;
unsigned long fpexc;
+#ifdef CONFIG_OPENEULER_RASPBERRYPI
u32 fpsid = fmrx(FPSID);
+#endif
/* Disable VFP to avoid corrupting the new thread state. */
vfp_flush_hwstate(thread);
@@ -619,11 +636,16 @@ int vfp_restore_user_hwstate(struct user_vfp *ufp, struct user_vfp_exc *ufp_exc)
/* Ensure the VFP is enabled. */
fpexc |= FPEXC_EN;
+#ifdef CONFIG_OPENEULER_RASPBERRYPI
/* Mask FPXEC_EX and FPEXC_FP2V if not required by VFP arch */
if ((fpsid & FPSID_ARCH_MASK) != (1 << FPSID_ARCH_BIT)) {
/* Ensure FPINST2 is invalid and the exception flag is cleared. */
fpexc &= ~(FPEXC_EX | FPEXC_FP2V);
}
+#else /* !CONFIG_OPENEULER_RASPBERRYPI */
+ /* Ensure FPINST2 is invalid and the exception flag is cleared. */
+ fpexc &= ~(FPEXC_EX | FPEXC_FP2V);
+#endif
hwstate->fpexc = fpexc;
@@ -738,8 +760,12 @@ void kernel_neon_begin(void)
cpu = get_cpu();
fpexc = fmrx(FPEXC) | FPEXC_EN;
+#ifdef CONFIG_OPENEULER_RASPBERRYPI
/* vfp_save_state oopses on VFP11 if EX bit set */
fmxr(FPEXC, fpexc & ~FPEXC_EX);
+#else /* !CONFIG_OPENEULER_RASPBERRYPI */
+ fmxr(FPEXC, fpexc);
+#endif
/*
* Save the userland NEON/VFP state. Under UP,
--
2.20.1
[PATCH openEuler-1.0-LTS] sysctl: control if check validity of freelist and connector debug by sysctl
by Cheng Jian 11 Mar '21
From: Yang Yingliang <yangyingliang(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: 47452
CVE: NA
-------------------------------------------------
Switch the controls for freelist validity checking and connector
debugging from slub_debug boot flags to runtime sysctl knobs.
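For illustration, a minimal userspace sketch of toggling the two new
knobs at runtime (the /proc/sys paths follow the vm_table and
debug_table entries in the diff; error handling is omitted):

	#include <fcntl.h>
	#include <unistd.h>

	int main(void)
	{
		/* isolate corrupted freelists (vm table entry, default 1) */
		int fd = open("/proc/sys/vm/isolate_corrupted_freelist", O_WRONLY);
		write(fd, "1", 1);
		close(fd);

		/* connector debugging (debug table entry, note the dash) */
		fd = open("/proc/sys/debug/connector-debug", O_WRONLY);
		write(fd, "1", 1);
		close(fd);
		return 0;
	}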
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Signed-off-by: Cheng Jian <cj.chengjian(a)huawei.com>
---
drivers/connector/connector.c | 1 +
include/linux/slab.h | 4 ---
kernel/sysctl.c | 20 +++++++++++
mm/slub.c | 63 ++++++++++++++---------------------
4 files changed, 46 insertions(+), 42 deletions(-)
diff --git a/drivers/connector/connector.c b/drivers/connector/connector.c
index a8df9ecbf42b..b5cdd0b99736 100644
--- a/drivers/connector/connector.c
+++ b/drivers/connector/connector.c
@@ -42,6 +42,7 @@ MODULE_ALIAS_NET_PF_PROTO(PF_NETLINK, NETLINK_CONNECTOR);
static struct cn_dev cdev;
static int cn_already_initialized;
+int sysctl_connector_debug = 0;
/*
* Sends mult (multiple) cn_msg at a time.
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 1ab5fd97dec8..d6393413ef09 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -24,8 +24,6 @@
*/
/* DEBUG: Perform (expensive) checks on alloc/free */
#define SLAB_CONSISTENCY_CHECKS ((slab_flags_t __force)0x00000100U)
-/* Check if freelist is valid */
-#define SLAB_NO_FREELIST_CHECKS ((slab_flags_t __force)0x00000200U)
/* DEBUG: Red zone objs in a cache */
#define SLAB_RED_ZONE ((slab_flags_t __force)0x00000400U)
/* DEBUG: Poison objects */
@@ -113,8 +111,6 @@
#define SLAB_KASAN 0
#endif
-#define SLAB_CONNECTOR_DEBUG ((slab_flags_t __force)0x10000000U)
-
/* The following flags affect the page allocator grouping pages by mobility */
/* Objects are reclaimable */
#define SLAB_RECLAIM_ACCOUNT ((slab_flags_t __force)0x00020000U)
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 665c9e2a8802..c921ee10615a 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1245,6 +1245,7 @@ static struct ctl_table kern_table[] = {
{ }
};
+extern int sysctl_isolate_corrupted_freelist;
static struct ctl_table vm_table[] = {
{
.procname = "overcommit_memory",
@@ -1714,6 +1715,15 @@ static struct ctl_table vm_table[] = {
.extra2 = (void *)&mmap_rnd_compat_bits_max,
},
#endif
+ {
+ .procname = "isolate_corrupted_freelist",
+ .data = &sysctl_isolate_corrupted_freelist,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = proc_dointvec_minmax,
+ .extra1 = &zero,
+ .extra2 = &one,
+ },
{ }
};
@@ -1925,6 +1935,7 @@ static struct ctl_table fs_table[] = {
{ }
};
+extern int sysctl_connector_debug;
static struct ctl_table debug_table[] = {
#ifdef CONFIG_SYSCTL_EXCEPTION_TRACE
{
@@ -1946,6 +1957,15 @@ static struct ctl_table debug_table[] = {
.extra2 = &one,
},
#endif
+ {
+ .procname = "connector-debug",
+ .data = &sysctl_connector_debug,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = proc_dointvec_minmax,
+ .extra1 = &zero,
+ .extra2 = &one,
+ },
{ }
};
diff --git a/mm/slub.c b/mm/slub.c
index ee0d18699bde..2301c04353d0 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1221,30 +1221,6 @@ static noinline int free_debug_processing(
return ret;
}
-/*
- * Check if the next object in freechain is
- * valid, then isolate the corrupted freelist.
- */
-int isolate_cnt = 0;
-static bool isolate_corrupted_freelist(struct kmem_cache *s, struct page *page,
- void **freelist, void *next)
-{
- if (!(slub_debug & SLAB_NO_FREELIST_CHECKS) &&
- (!check_valid_pointer(s, page, next))) {
- /* Need caller make sure freelist is not NULL */
- *freelist = NULL;
- isolate_cnt++;
-
- slab_fix(s, "Freelist corrupt,isolate corrupted freechain.");
- pr_info("freelist=%lx object=%lx, page->base=%lx, page->objects=%u, objsize=%u, s->offset=%d page:%llx\n",
- (long)freelist, (long)next, (long)page_address(page), page->objects, s->size, s->offset, (u64)page);
-
- return true;
- }
-
- return false;
-}
-
static int __init setup_slub_debug(char *str)
{
slub_debug = DEBUG_DEFAULT_FLAGS;
@@ -1291,14 +1267,6 @@ static int __init setup_slub_debug(char *str)
case 'a':
slub_debug |= SLAB_FAILSLAB;
break;
- case 'n':
- slub_debug |= SLAB_NO_FREELIST_CHECKS;
- pr_info("Freelist pointer check disabled.");
- break;
- case 'd':
- slub_debug |= SLAB_CONNECTOR_DEBUG;
- pr_info("Connector debug enabled.\n");
- break;
case 'o':
/*
* Avoid enabling debugging on caches if its minimum
@@ -1311,8 +1279,6 @@ static int __init setup_slub_debug(char *str)
*str);
}
}
- if (!(slub_debug & SLAB_NO_FREELIST_CHECKS))
- pr_info("Freelist pointer check enabled.");
check_slabs:
if (*str == ',')
@@ -1380,12 +1346,33 @@ static bool freelist_corrupted(struct kmem_cache *s, struct page *page,
{
return false;
}
+#endif /* CONFIG_SLUB_DEBUG */
+
+extern int sysctl_connector_debug;
+/*
+ * Check if the next object in freechain is
+ * valid, then isolate the corrupted freelist.
+ */
+int isolate_cnt = 0;
+int sysctl_isolate_corrupted_freelist = 1;
static bool isolate_corrupted_freelist(struct kmem_cache *s, struct page *page,
- void **freelist, void *nextfree)
+ void **freelist, void *next)
{
+ if (sysctl_isolate_corrupted_freelist &&
+ (!check_valid_pointer(s, page, next))) {
+ /* Need caller make sure freelist is not NULL */
+ *freelist = NULL;
+ isolate_cnt++;
+
+ slab_fix(s, "Freelist corrupt,isolate corrupted freechain.");
+ pr_info("freelist=%lx object=%lx, page->base=%lx, page->objects=%u, objsize=%u, s->offset=%d page:%llx\n",
+ (long)freelist, (long)next, (long)page_address(page), page->objects, s->size, s->offset, (u64)page);
+
+ return true;
+ }
+
return false;
}
-#endif /* CONFIG_SLUB_DEBUG */
/*
* Hooks for other subsystems that check memory allocations. In a typical
@@ -2798,7 +2785,7 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
if (unlikely(gfpflags & __GFP_ZERO) && object)
memset(object, 0, s->object_size);
- if ((slub_debug & SLAB_CONNECTOR_DEBUG) &&
+ if (sysctl_connector_debug &&
unlikely(gfpflags & __GFP_CONNECTOR) && object) {
if (s->object_size == 512)
memset(object + 504, 0xad, 8);
@@ -4015,7 +4002,7 @@ void kfree(const void *x)
__free_pages(page, compound_order(page));
return;
}
- if ((slub_debug & SLAB_CONNECTOR_DEBUG) &&
+ if (sysctl_connector_debug &&
unlikely(page->slab_cache->object_size == 512)) {
u64 *tail = (u64 *)(x + 504);
--
2.25.1
bugfix for 20.03 at 202102
Alexey Gladkov (1):
moduleparam: Save information about built-in modules in separate file
Amir Goldstein (1):
ovl: skip getxattr of security labels
Aurelien Aptel (1):
cifs: report error instead of invalid when revalidating a dentry fails
Baptiste Lepers (1):
udp: Prevent reuseport_select_sock from reading uninitialized socks
Byron Stanoszek (1):
tmpfs: restore functionality of nr_inodes=0
Chris Down (2):
tmpfs: per-superblock i_ino support
tmpfs: support 64-bit inums per-sb
Christian Brauner (1):
sysctl: handle overflow in proc_get_long
Cong Wang (1):
af_key: relax availability checks for skb size calculation
Dave Wysochanski (2):
SUNRPC: Move simple_get_bytes and simple_get_netobj into private
header
SUNRPC: Handle 0 length opaque XDR object data properly
Dong Kai (1):
livepatch/core: Fix jump_label_apply_nops called multi times
Edwin Peer (1):
net: watchdog: hold device global xmit lock during tx disable
Eric Biggers (1):
fs: fix lazytime expiration handling in __writeback_single_inode()
Eric Dumazet (1):
net_sched: gen_estimator: support large ewma log
Eyal Birger (1):
xfrm: fix disable_xfrm sysctl when used on xfrm interfaces
Florian Westphal (1):
netfilter: conntrack: skip identical origin tuple in same zone only
Gaurav Kohli (1):
tracing: Fix race in trace_open and buffer resize call
Gustavo A. R. Silva (1):
smb3: Fix out-of-bounds bug in SMB2_negotiate()
Hugh Dickins (1):
mm: thp: fix MADV_REMOVE deadlock on shmem THP
Jakub Kicinski (1):
net: sit: unregister_netdevice on newlink's error path
Jan Kara (1):
writeback: Drop I_DIRTY_TIME_EXPIRE
Jozsef Kadlecsik (1):
netfilter: xt_recent: Fix attempt to update deleted entry
Liangyan (1):
ovl: fix dentry leak in ovl_get_redirect
Lin Feng (1):
bfq-iosched: Revert "bfq: Fix computation of shallow depth"
Marc Zyngier (1):
genirq/msi: Activate Multi-MSI early when MSI_FLAG_ACTIVATE_EARLY is
set
Martin K. Petersen (2):
scsi: sd: block: Fix regressions in read-only block device handling
scsi: sd: block: Fix kabi change by 'scsi: sd: block: Fix regressions
in read-only block device handling'
Martin Willi (1):
vrf: Fix fast path output packet handling with async Netfilter rules
Masami Hiramatsu (1):
tracing/kprobe: Fix to support kretprobe events on unloaded modules
Miklos Szeredi (4):
proc/mounts: add cursor
ovl: perform vfs_getxattr() with mounter creds
cap: fix conversions on getxattr
ovl: expand warning in ovl_d_real()
Mikulas Patocka (1):
dm integrity: conditionally disable "recalculate" feature
Ming Lei (4):
scsi: core: Run queue in case of I/O resource contention failure
scsi: core: Only re-run queue in scsi_end_request() if device queue is
busy
block: don't hold q->sysfs_lock in elevator_init_mq
blk-mq: don't hold q->sysfs_lock in blk_mq_map_swqueue
Muchun Song (4):
mm: hugetlbfs: fix cannot migrate the fallocated HugeTLB page
mm: hugetlb: fix a race between freeing and dissolving the page
mm: hugetlb: fix a race between isolating and freeing page
mm: hugetlb: remove VM_BUG_ON_PAGE from page_huge_active
NeilBrown (1):
net: fix iteration for sctp transport seq_files
Ondrej Jirman (1):
brcmfmac: Loading the correct firmware for brcm43456
Pablo Neira Ayuso (1):
netfilter: nft_dynset: add timeout extension to template
Pavel Begunkov (1):
list: introduce list_for_each_continue()
Pengcheng Yang (1):
tcp: fix TLP timer not set when CA_STATE changes from DISORDER to OPEN
Peter Zijlstra (2):
kthread: Extract KTHREAD_IS_PER_CPU
workqueue: Restrict affinity change to rescuer
Roi Dayan (1):
net/mlx5: Fix memory leak on flow table creation error flow
Roman Gushchin (1):
memblock: do not start bottom-up allocations with kernel_end
Sabyrzhan Tasbolatov (1):
net/rds: restrict iovecs length for RDS_CMSG_RDMA_ARGS
Shmulik Ladkani (1):
xfrm: Fix oops in xfrm_replay_advance_bmp
Steven Rostedt (VMware) (3):
fgraph: Initialize tracing_graph_pause at task creation
tracing: Do not count ftrace events in top level enable output
tracing: Check length before giving out the filter buffer
Sven Auhagen (1):
netfilter: flowtable: fix tcp and udp header checksum update
Vadim Fedorenko (1):
net: ip_tunnel: fix mtu calculation
Wang Hai (1):
Revert "mm/slub: fix a memory leak in sysfs_slab_add()"
Wang ShaoBo (1):
kretprobe: Avoid re-registration of the same kretprobe earlier
Willem de Bruijn (1):
esp: avoid unneeded kmap_atomic call
Xiao Ni (1):
md: Set prev_flush_start and flush_bio in an atomic way
Yang Yingliang (3):
sysctl/mm: fix compile error when CONFIG_SLUB is disabled
config: disable config TMPFS_INODE64 by default
connector: change GFP_CONNECTOR to bit-31
Ye Bin (3):
scsi: sd: block: Fix read-only flag residuals when partition table
change
Revert "scsi: sd: block: Fix read-only flag residuals when partition
table change"
Revert "scsi: sg: fix memory leak in sg_build_indirect"
Yonglong Liu (4):
net: hns3: adds support for setting pf max tx rate via sysfs
net: hns3: update hns3 version to 1.9.38.10
net: hns3: fix 'ret' may be used uninitialized problem
net: hns3: update hns3 version to 1.9.38.11
Yufen Yu (1):
scsi: fix kabi for scsi_device
Zhang Xiaoxu (1):
proc/mounts: Fix kabi broken
liubo (3):
etmem: add etmem-scan feature
etmem: add etmem-swap feature
config: Enable the config option of the etmem feature
shiyongbang (3):
gpu: hibmc: Fix erratic display during startup stage.
gpu: hibmc: Use drm get pci dev api.
gpu: hibmc: Fix stuck when switch GUI to text.
zhangyi (F) (1):
ext4: find old entry again if failed to rename whiteout
.gitignore | 1 +
Documentation/device-mapper/dm-integrity.txt | 7 +
Documentation/dontdiff | 1 +
Documentation/filesystems/tmpfs.txt | 17 +
Documentation/kbuild/kbuild.txt | 5 +
Makefile | 2 +
arch/arm64/configs/hulk_defconfig | 3 +
arch/arm64/configs/openeuler_defconfig | 2 +
arch/x86/configs/hulk_defconfig | 2 +
arch/x86/configs/openeuler_defconfig | 2 +
block/bfq-iosched.c | 8 +-
block/blk-mq.c | 7 -
block/elevator.c | 14 +-
block/genhd.c | 33 +-
block/ioctl.c | 4 +
block/partition-generic.c | 7 +-
drivers/connector/connector.c | 6 +-
.../gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c | 68 +-
.../gpu/drm/hisilicon/hibmc/hibmc_drm_drv.h | 2 +
.../gpu/drm/hisilicon/hibmc/hibmc_drm_vdac.c | 1 +
drivers/md/dm-integrity.c | 24 +-
drivers/md/md.c | 2 +
drivers/net/ethernet/hisilicon/hns3/Makefile | 3 +-
drivers/net/ethernet/hisilicon/hns3/hnae3.h | 10 +-
.../hns3/hns3_cae/hns3_cae_version.h | 2 +-
.../net/ethernet/hisilicon/hns3/hns3_enet.h | 2 +-
.../hns3_extension/hns3pf/hclge_main_it.c | 33 +
.../hns3/hns3_extension/hns3pf/hclge_sysfs.c | 140 +++
.../hns3/hns3_extension/hns3pf/hclge_sysfs.h | 14 +
.../hisilicon/hns3/hns3pf/hclge_main.c | 17 +
.../hisilicon/hns3/hns3pf/hclge_main.h | 7 +-
.../hisilicon/hns3/hns3vf/hclgevf_main.h | 2 +-
.../net/ethernet/mellanox/mlx5/core/fs_core.c | 1 +
drivers/net/vrf.c | 92 +-
.../broadcom/brcm80211/brcmfmac/sdio.c | 4 +-
drivers/scsi/scsi_lib.c | 59 +-
drivers/scsi/sg.c | 6 +-
fs/Kconfig | 21 +
fs/cifs/dir.c | 22 +-
fs/cifs/smb2pdu.h | 2 +-
fs/ext4/inode.c | 2 +-
fs/ext4/namei.c | 29 +-
fs/fs-writeback.c | 36 +-
fs/hugetlbfs/inode.c | 3 +-
fs/mount.h | 17 +-
fs/namespace.c | 119 +-
fs/overlayfs/copy_up.c | 15 +-
fs/overlayfs/dir.c | 2 +-
fs/overlayfs/inode.c | 2 +
fs/overlayfs/super.c | 13 +-
fs/proc/Makefile | 2 +
fs/proc/base.c | 4 +
fs/proc/etmem_scan.c | 1046 +++++++++++++++++
fs/proc/etmem_scan.h | 132 +++
fs/proc/etmem_swap.c | 102 ++
fs/proc/internal.h | 2 +
fs/proc/task_mmu.c | 117 ++
fs/proc_namespace.c | 4 +-
fs/xfs/xfs_trans_inode.c | 4 +-
include/asm-generic/vmlinux.lds.h | 1 +
include/linux/fs.h | 16 +-
include/linux/genhd.h | 5 +
include/linux/gfp.h | 8 +-
include/linux/hugetlb.h | 3 +
include/linux/kprobes.h | 2 +-
include/linux/kthread.h | 3 +
include/linux/list.h | 10 +
include/linux/mm_types.h | 18 +
include/linux/module.h | 1 +
include/linux/moduleparam.h | 12 +-
include/linux/mount.h | 4 +-
include/linux/msi.h | 6 +
include/linux/netdevice.h | 2 +
include/linux/shmem_fs.h | 3 +
include/linux/sunrpc/xdr.h | 3 +-
include/linux/swap.h | 5 +
include/net/tcp.h | 2 +-
include/scsi/scsi_device.h | 3 +-
include/trace/events/writeback.h | 1 -
init/init_task.c | 3 +-
kernel/irq/msi.c | 44 +-
kernel/kprobes.c | 36 +-
kernel/kthread.c | 27 +-
kernel/livepatch/core.c | 20 +-
kernel/smpboot.c | 1 +
kernel/sysctl.c | 6 +
kernel/trace/ftrace.c | 2 -
kernel/trace/ring_buffer.c | 4 +
kernel/trace/trace.c | 2 +-
kernel/trace/trace_events.c | 3 +-
kernel/trace/trace_kprobe.c | 4 +-
kernel/workqueue.c | 9 +-
lib/Kconfig | 11 +
mm/huge_memory.c | 37 +-
mm/hugetlb.c | 48 +-
mm/memblock.c | 49 +-
mm/pagewalk.c | 1 +
mm/shmem.c | 133 ++-
mm/slub.c | 14 +-
mm/vmscan.c | 112 ++
mm/vmstat.c | 4 +
net/core/gen_estimator.c | 11 +-
net/core/sock_reuseport.c | 2 +-
net/ipv4/esp4.c | 7 +-
net/ipv4/ip_tunnel.c | 16 +-
net/ipv4/tcp_input.c | 10 +-
net/ipv4/tcp_recovery.c | 5 +-
net/ipv6/esp6.c | 7 +-
net/ipv6/sit.c | 5 +-
net/key/af_key.c | 6 +-
net/netfilter/nf_conntrack_core.c | 3 +-
net/netfilter/nf_flow_table_core.c | 4 +-
net/netfilter/nft_dynset.c | 4 +-
net/netfilter/xt_recent.c | 12 +-
net/rds/rdma.c | 3 +
net/sctp/proc.c | 16 +-
net/sunrpc/auth_gss/auth_gss.c | 30 +-
net/sunrpc/auth_gss/auth_gss_internal.h | 45 +
net/sunrpc/auth_gss/gss_krb5_mech.c | 31 +-
net/xfrm/xfrm_input.c | 2 +-
net/xfrm/xfrm_policy.c | 4 +-
scripts/link-vmlinux.sh | 3 +
security/commoncap.c | 67 +-
virt/kvm/kvm_main.c | 6 +
124 files changed, 2817 insertions(+), 466 deletions(-)
create mode 100644 drivers/net/ethernet/hisilicon/hns3/hns3_extension/hns3pf/hclge_sysfs.c
create mode 100644 drivers/net/ethernet/hisilicon/hns3/hns3_extension/hns3pf/hclge_sysfs.h
create mode 100644 fs/proc/etmem_scan.c
create mode 100644 fs/proc/etmem_scan.h
create mode 100644 fs/proc/etmem_swap.c
create mode 100644 net/sunrpc/auth_gss/auth_gss_internal.h
--
2.25.1
From: Pavel Begunkov <asml.silence(a)gmail.com>
mainline inclusion
from mainline-v5.6-rc1
commit 28ca0d6d39ab1d01c86762c82a585b7cedd2920c
category: bugfix
bugzilla: 35619
CVE: NA
--------------------------------
Like the other *_continue() helpers, this continues iteration from a
given position.
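A minimal usage sketch (a hypothetical caller, not part of this
patch): resume a scan from a known position instead of from the head:

	struct item {
		struct list_head list;
		int val;
	};

	/* 'pos' already points at some entry's list_head within 'head';
	 * the loop visits every entry after it and stops at the head. */
	static void scan_rest(struct list_head *pos, struct list_head *head)
	{
		struct item *it;

		list_for_each_continue(pos, head) {
			it = list_entry(pos, struct item, list);
			pr_debug("val=%d\n", it->val);
		}
	}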
Signed-off-by: Pavel Begunkov <asml.silence(a)gmail.com>
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
Signed-off-by: Zhang Xiaoxu <zhangxiaoxu5(a)huawei.com>
Reviewed-by: zhangyi (F) <yi.zhang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
include/linux/list.h | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/include/linux/list.h b/include/linux/list.h
index de04cc5ed536..0e540581d52c 100644
--- a/include/linux/list.h
+++ b/include/linux/list.h
@@ -455,6 +455,16 @@ static inline void list_splice_tail_init(struct list_head *list,
#define list_for_each(pos, head) \
for (pos = (head)->next; pos != (head); pos = pos->next)
+/**
+ * list_for_each_continue - continue iteration over a list
+ * @pos: the &struct list_head to use as a loop cursor.
+ * @head: the head for your list.
+ *
+ * Continue to iterate over a list, continuing after the current position.
+ */
+#define list_for_each_continue(pos, head) \
+ for (pos = pos->next; pos != (head); pos = pos->next)
+
/**
* list_for_each_prev - iterate over a list backwards
* @pos: the &struct list_head to use as a loop cursor.
--
2.25.1
[PATCH openEuler-21.03 v1] park: Reserve park mem before kexec reserved
by sangyan@huawei.com 10 Mar '21
From: Sang Yan <sangyan(a)huawei.com>
hulk inclusion
category: feature
bugzilla: 48159
CVE: N/A
------------------------------
reserve_crashkernel() or reserve_quick_kexec() may find a suitable
memory region and reserve it, but the address of that region is not
fixed.
As a result, the cpu park memory reservation, which does use a fixed
address, can fail when that address has already been taken by
crashkernel or quick kexec.
So move reserve_park_mem() before reserve_crashkernel() and
reserve_quick_kexec(): fixed-address reservations must be made before
floating ones.
Signed-off-by: Sang Yan <sangyan(a)huawei.com>
---
arch/arm64/mm/init.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index b343744..043a981 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -497,16 +497,16 @@ void __init arm64_memblock_init(void)
else
arm64_dma32_phys_limit = PHYS_MASK + 1;
+#ifdef CONFIG_ARM64_CPU_PARK
+ reserve_park_mem();
+#endif
+
reserve_crashkernel();
#ifdef CONFIG_QUICK_KEXEC
reserve_quick_kexec();
#endif
-#ifdef CONFIG_ARM64_CPU_PARK
- reserve_park_mem();
-#endif
-
reserve_pin_memory_res();
reserve_elfcorehdr();
--
2.9.5
From: ZhuLing <zhuling8(a)huawei.com>
hulk inclusion
category: feature
bugzilla: 48159
CVE: NA
Register pmem on arm64:
Use memmap (memmap=nn[KMG]!ss[KMG]) to reserve memory and the e820
(drivers/nvdimm/e820.c) function to register persistent memory on
arm64. When the kernel restarts or is updated, the data in PMEM is
not lost and can be loaded faster. This is a general feature.
drivers/nvdimm/e820.c:
This file scans "iomem_resource" and takes advantage of the nvdimm
resource discovery mechanism by registering a resource named
"Persistent Memory (legacy)"; the function does not depend on the
architecture.
We will push the feature to the Linux kernel community and discuss
renaming the file, because people have the mistaken notion that
e820.c depends on x86.
To use this feature, do as follows:
1.Reserve memory: add memmap to reserve memory in grub.cfg
memmap=nn[KMG]!ss[KMG] exp:memmap=100K!0x1a0000000.
2.Insmod nd_e820.ko: modprobe nd_e820.
3.Check pmem device in /dev exp: /dev/pmem0
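4.(Optional, an assumed workflow that is not part of this patch)
Put a filesystem on the device and mount it, for example
mkfs.ext4 /dev/pmem0 followed by mount -o dax /dev/pmem0 /mnt/pmem;
data written there then survives a kernel restart or update.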
Signed-off-by: ZhuLing <zhuling8(a)huawei.com>
---
arch/arm64/Kconfig | 21 +++++++++
arch/arm64/kernel/Makefile | 1 +
arch/arm64/kernel/pmem.c | 35 ++++++++++++++
arch/arm64/kernel/setup.c | 10 ++++
arch/arm64/mm/init.c | 97 ++++++++++++++++++++++++++++++++++++++
drivers/nvdimm/Kconfig | 5 ++
drivers/nvdimm/Makefile | 2 +-
7 files changed, 170 insertions(+), 1 deletion(-)
create mode 100644 arch/arm64/kernel/pmem.c
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index c451137ab..326f26d40 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -360,6 +360,27 @@ config ARM64_CPU_PARK
config ARCH_HAS_CPU_RELAX
def_bool y
+config ARM64_PMEM_RESERVE
+ bool "Reserve memory for persistent storage"
+ default n
+ help
+ Use memmap=nn[KMG]!ss[KMG](memmap=100K!0x1a0000000) reserve
+ memory for persistent storage.
+
+ Say y here to enable this feature.
+
+config ARM64_PMEM_LEGACY_DEVICE
+ bool "Create persistent storage"
+ depends on BLK_DEV
+ depends on LIBNVDIMM
+ select ARM64_PMEM_RESERVE
+ help
+ Use reserved memory for persistent storage when the kernel
+ restart or update. the data in PMEM will not be lost and
+ can be loaded faster.
+
+ Say y if unsure.
+
source "arch/arm64/Kconfig.platforms"
menu "Kernel Features"
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 967cb3c6d..be996f3c1 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -67,6 +67,7 @@ obj-$(CONFIG_ARM64_PTR_AUTH) += pointer_auth.o
obj-$(CONFIG_SHADOW_CALL_STACK) += scs.o
obj-$(CONFIG_ARM64_MTE) += mte.o
obj-$(CONFIG_MPAM) += mpam/
+obj-$(CONFIG_ARM64_PMEM_LEGACY_DEVICE) += pmem.o
obj-y += vdso/ probes/
obj-$(CONFIG_COMPAT_VDSO) += vdso32/
diff --git a/arch/arm64/kernel/pmem.c b/arch/arm64/kernel/pmem.c
new file mode 100644
index 000000000..16eaf706f
--- /dev/null
+++ b/arch/arm64/kernel/pmem.c
@@ -0,0 +1,35 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright(c) 2021 Huawei Technologies Co., Ltd
+ *
+ * Derived from x86 and arm64 implement PMEM.
+ */
+#include <linux/platform_device.h>
+#include <linux/init.h>
+#include <linux/ioport.h>
+#include <linux/module.h>
+
+static int found(struct resource *res, void *data)
+{
+ return 1;
+}
+
+static int __init register_e820_pmem(void)
+{
+ struct platform_device *pdev;
+ int rc;
+
+ rc = walk_iomem_res_desc(IORES_DESC_PERSISTENT_MEMORY_LEGACY,
+ IORESOURCE_MEM, 0, -1, NULL, found);
+ if (rc <= 0)
+ return 0;
+
+ /*
+ * See drivers/nvdimm/e820.c for the implementation, this is
+ * simply here to trigger the module to load on demand.
+ */
+ pdev = platform_device_alloc("e820_pmem", -1);
+
+ return platform_device_add(pdev);
+}
+device_initcall(register_e820_pmem);
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 5e282d31a..84c71c88d 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -57,6 +57,10 @@
static int num_standard_resources;
static struct resource *standard_resources;
+#ifdef CONFIG_ARM64_PMEM_RESERVE
+extern struct resource pmem_res;
+#endif
+
phys_addr_t __fdt_pointer __initdata;
/*
@@ -270,6 +274,12 @@ static void __init request_standard_resources(void)
request_resource(res, &pin_memory_resource);
#endif
}
+
+#ifdef CONFIG_ARM64_PMEM_RESERVE
+ if (pmem_res.end && pmem_res.start)
+ request_resource(&iomem_resource, &pmem_res);
+#endif
+
}
static int __init reserve_memblock_reserved_regions(void)
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index b3437440d..f22faea1a 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -66,6 +66,18 @@ EXPORT_SYMBOL(memstart_addr);
phys_addr_t arm64_dma_phys_limit __ro_after_init;
phys_addr_t arm64_dma32_phys_limit __ro_after_init;
+static unsigned long long pmem_size, pmem_start;
+
+#ifdef CONFIG_ARM64_PMEM_RESERVE
+struct resource pmem_res = {
+ .name = "Persistent Memory (legacy)",
+ .start = 0,
+ .end = 0,
+ .flags = IORESOURCE_MEM,
+ .desc = IORES_DESC_PERSISTENT_MEMORY_LEGACY
+};
+#endif
+
#ifndef CONFIG_KEXEC_CORE
static void __init reserve_crashkernel(void)
{
@@ -378,6 +390,87 @@ static int __init reserve_park_mem(void)
}
#endif
+static int __init is_mem_valid(unsigned long long mem_size, unsigned long long mem_start)
+{
+ if (!memblock_is_region_memory(mem_start, mem_size)) {
+ pr_warn("cannot reserve mem: region is not memory!\n");
+ return -EINVAL;
+ }
+
+ if (memblock_is_region_reserved(mem_start, mem_size)) {
+ pr_warn("cannot reserve mem: region overlaps reserved memory!\n");
+ return -EINVAL;
+ }
+
+ if (!IS_ALIGNED(mem_start, SZ_2M)) {
+ pr_warn("cannot reserve mem: base address is not 2MB aligned!\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int __init parse_memmap_one(char *p)
+{
+ char *oldp;
+ phys_addr_t start_at, mem_size;
+
+ if (!p)
+ return -EINVAL;
+
+ oldp = p;
+ mem_size = memparse(p, &p);
+ if (p == oldp)
+ return -EINVAL;
+
+ if (!mem_size)
+ return -EINVAL;
+
+ mem_size = PAGE_ALIGN(mem_size);
+
+ if (*p == '!') {
+ start_at = memparse(p+1, &p);
+
+ if (is_mem_valid(mem_size, start_at) != 0)
+ return -EINVAL;
+
+ pr_info("pmem reserved: 0x%016llx - 0x%016llx (%lld MB)\n",
+ start_at, start_at + mem_size, mem_size >> 20);
+ pmem_start = start_at;
+ pmem_size = mem_size;
+ } else
+ pr_info("Unrecognized memmap option, please check the parameter.\n");
+
+ return *p == '\0' ? 0 : -EINVAL;
+}
+
+static int __init parse_memmap_opt(char *str)
+{
+ while (str) {
+ char *k = strchr(str, ',');
+
+ if (k)
+ *k++ = 0;
+
+ parse_memmap_one(str);
+ str = k;
+ }
+
+ return 0;
+}
+early_param("memmap", parse_memmap_opt);
+
+#ifdef CONFIG_ARM64_PMEM_RESERVE
+static void __init reserve_pmem(void)
+{
+ memblock_remove(pmem_start, pmem_size);
+ pr_info("pmem reserved: 0x%016llx - 0x%016llx (%lld MB)\n",
+ pmem_start, pmem_start + pmem_size, pmem_size >> 20);
+ pmem_res.start = pmem_start;
+ pmem_res.end = pmem_start + pmem_size - 1;
+}
+#endif
+
void __init arm64_memblock_init(void)
{
const s64 linear_region_size = BIT(vabits_actual - 1);
@@ -511,6 +604,10 @@ void __init arm64_memblock_init(void)
reserve_elfcorehdr();
+#ifdef CONFIG_ARM64_PMEM_RESERVE
+ reserve_pmem();
+#endif
+
high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
dma_contiguous_reserve(arm64_dma32_phys_limit);
diff --git a/drivers/nvdimm/Kconfig b/drivers/nvdimm/Kconfig
index b7d1eb38b..ce4de7526 100644
--- a/drivers/nvdimm/Kconfig
+++ b/drivers/nvdimm/Kconfig
@@ -132,3 +132,8 @@ config NVDIMM_TEST_BUILD
infrastructure.
endif
+
+config PMEM_LEGACY
+ tristate "Pmem_legacy"
+ select X86_PMEM_LEGACY if X86
+ select ARM64_PMEM_LEGACY_DEVICE if ARM64
diff --git a/drivers/nvdimm/Makefile b/drivers/nvdimm/Makefile
index 29203f3d3..6f8dc9242 100644
--- a/drivers/nvdimm/Makefile
+++ b/drivers/nvdimm/Makefile
@@ -3,7 +3,7 @@ obj-$(CONFIG_LIBNVDIMM) += libnvdimm.o
obj-$(CONFIG_BLK_DEV_PMEM) += nd_pmem.o
obj-$(CONFIG_ND_BTT) += nd_btt.o
obj-$(CONFIG_ND_BLK) += nd_blk.o
-obj-$(CONFIG_X86_PMEM_LEGACY) += nd_e820.o
+obj-$(CONFIG_PMEM_LEGACY) += nd_e820.o
obj-$(CONFIG_OF_PMEM) += of_pmem.o
obj-$(CONFIG_VIRTIO_PMEM) += virtio_pmem.o nd_virtio.o
--
2.19.1
From: Fang Yafen <yafen(a)iscas.ac.cn>
Fix the following compile error when using bcm2711_defconfig (for RPi).
The ifeq/endif lines inside the cmd_dtco recipe are GNU make
conditional syntax embedded in the command text, so make passes them
verbatim to the shell, which fails on the stray "ifeq (y,y)" token
seen below. Wrapping the whole rule in the conditional keeps the make
syntax at parse level where it belongs.
/bin/sh: -c: line 0: syntax error near unexpected token `('
/bin/sh: -c: line 0: `set -e;
echo ' DTCO arch/arm64/boot/dts/overlays/act-led.dtbo';
mkdir -p arch/arm64/boot/dts/overlays/ ;
gcc -E -Wp,-MMD,arch/arm64/boot/dts/overlays/.act-led.dtbo.d.pre.tmp
-nostdinc -I./scripts/dtc/include-prefixes -undef -D__DTS__
-x assembler-with-cpp
-o arch/arm64/boot/dts/overlays/.act-led.dtbo.dts.tmp
arch/arm64/boot/dts/overlays/act-led-overlay.dts ;
./scripts/dtc/dtc -@ -H epapr -O dtb
-o arch/arm64/boot/dts/overlays/act-led.dtbo -b 0
-i arch/arm64/boot/dts/overlays/ -Wno-interrupt_provider
-Wno-unit_address_vs_reg -Wno-unit_address_format -Wno-gpios_property
-Wno-avoid_unnecessary_addr_size -Wno-alias_paths
-Wno-graph_child_address -Wno-simple_bus_reg
-Wno-unique_unit_address -Wno-pci_device_reg
-Wno-interrupts_property ifeq (y,y) -Wno-label_is_string
-Wno-reg_format -Wno-pci_device_bus_num -Wno-i2c_bus_reg
-Wno-spi_bus_reg -Wno-avoid_default_addr_size endif
-d arch/arm64/boot/dts/overlays/.act-led.dtbo.d.dtc.tmp
arch/arm64/boot/dts/overlays/.act-led.dtbo.dts.tmp ;
cat ...; rm -f arch/arm64/boot/dts/overlays/.act-led.dtbo.d'
make[2]: *** [scripts/Makefile.lib;363:
arch/arm64/boot/dts/overlays/act-led.dtbo] Error 1
make[2]: *** Waiting for unfinished jobs....
Related patches:
ffa2d13ccc3 BCM2708: Add core Device Tree support
4894352ec98 kbuild: Silence unavoidable dtc overlay warnings
a4a4d07f0cf kbuild: keep the original function for non-RPi
Signed-off-by: Fang Yafen <yafen(a)iscas.ac.cn>
---
scripts/Makefile.lib | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index c23e3ae7ef40..a0e0e2543165 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -342,25 +342,25 @@ endef
$(obj)/%.dt.yaml: $(src)/%.dts $(DTC) $(DT_TMP_SCHEMA) FORCE
$(call if_changed_rule,dtc,yaml)
+ifeq ($(CONFIG_OPENEULER_RASPBERRYPI),y)
quiet_cmd_dtco = DTCO $@
cmd_dtco = mkdir -p $(dir ${dtc-tmp}) ; \
$(CPP) $(dtc_cpp_flags) -x assembler-with-cpp -o $(dtc-tmp) $< ; \
$(DTC) -@ -H epapr -O dtb -o $@ -b 0 \
-i $(dir $<) $(DTC_FLAGS) \
-Wno-interrupts_property \
-ifeq ($(CONFIG_OPENEULER_RASPBERRYPI),y) \
-Wno-label_is_string \
-Wno-reg_format \
-Wno-pci_device_bus_num \
-Wno-i2c_bus_reg \
-Wno-spi_bus_reg \
-Wno-avoid_default_addr_size \
-endif \
-d $(depfile).dtc.tmp $(dtc-tmp) ; \
cat $(depfile).pre.tmp $(depfile).dtc.tmp > $(depfile)
$(obj)/%.dtbo: $(src)/%-overlay.dts FORCE
$(call if_changed_dep,dtco)
+endif
dtc-tmp = $(subst $(comma),_,$(dot-target).dts.tmp)
--
2.27.0
[PATCH kernel-4.19 01/32] ascend: sharepool: don't enable the vmalloc to use hugepage default
by Yang Yingliang 08 Mar '21
From: Ding Tianhong <dingtianhong(a)huawei.com>
ascend inclusion
category: feature
bugzilla: NA
CVE: NA
-------------------------------------------------
Commit 59a57a82fb2a ("mm/vmalloc: Hugepage vmalloc mappings") enables
hugepage vmalloc by default whenever the allocation size is bigger
than PMD_SIZE. That behaves like transparent hugepages for mmap: the
driver cannot control hugepage use accurately, which breaks its
logic. The share pool already exports the vmalloc_hugepage_xxx
functions to control hugepage vmalloc allocation explicitly, similar
to static hugepages, so disable the transparent behaviour.
This patch also fixes the kabi breakage of vm_struct, so the change
can be applied to the commercial version.
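As a sketch of the intended behaviour after this change (a
hypothetical caller; buff_vzalloc_hugepage_user is taken from the
diff, the size is made up):

	/* explicit opt-in: VM_HUGE_PAGES set, backed by 2M (PMD) pages */
	void *p = buff_vzalloc_hugepage_user(64UL << 20);

	/* plain vmalloc now stays on 4K pages no matter how large */
	void *q = vmalloc(64UL << 20);

Only callers that go through the share pool wrappers get huge pages;
the size-based automatic promotion and the fall-back retry with 4K
pages are both removed by the diff below.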
Fixes: 59a57a82fb2a ("mm/vmalloc: Hugepage vmalloc mappings")
Signed-off-by: Ding Tianhong <dingtianhong(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
include/linux/share_pool.h | 51 ++++++++++++++++++++++++++++----------
include/linux/vmalloc.h | 1 -
mm/vmalloc.c | 47 ++++++++++++-----------------------
3 files changed, 54 insertions(+), 45 deletions(-)
diff --git a/include/linux/share_pool.h b/include/linux/share_pool.h
index c3120b7b24948..4a18c88d5a10e 100644
--- a/include/linux/share_pool.h
+++ b/include/linux/share_pool.h
@@ -211,15 +211,6 @@ static inline void sp_area_work_around(struct vm_unmapped_area_info *info)
extern struct page *sp_alloc_pages(struct vm_struct *area, gfp_t mask,
unsigned int page_order, int node);
-
-static inline void sp_free_pages(struct page *page, struct vm_struct *area)
-{
- if (PageHuge(page))
- put_page(page);
- else
- __free_pages(page, area->page_order);
-}
-
static inline bool sp_check_vm_share_pool(unsigned long vm_flags)
{
if (enable_ascend_share_pool && (vm_flags & VM_SHARE_POOL))
@@ -264,6 +255,30 @@ extern void *buff_vzalloc_hugepage_user(unsigned long size);
void sp_exit_mm(struct mm_struct *mm);
+static inline bool is_vmalloc_huge(unsigned long vm_flags)
+{
+ if (enable_ascend_share_pool && (vm_flags & VM_HUGE_PAGES))
+ return true;
+
+ return false;
+}
+
+static inline bool is_vmalloc_sharepool(unsigned long vm_flags)
+{
+ if (enable_ascend_share_pool && (vm_flags & VM_SHAREPOOL))
+ return true;
+
+ return false;
+}
+
+static inline void sp_free_pages(struct page *page, struct vm_struct *area)
+{
+ if (PageHuge(page))
+ put_page(page);
+ else
+ __free_pages(page, is_vmalloc_huge(area->flags) ? PMD_SHIFT - PAGE_SHIFT : 0);
+}
+
#else
static inline int sp_group_add_task(int pid, int spg_id)
@@ -400,10 +415,6 @@ static inline struct page *sp_alloc_pages(void *area, gfp_t mask,
return NULL;
}
-static inline void sp_free_pages(struct page *page, struct vm_struct *area)
-{
-}
-
static inline bool sp_check_vm_share_pool(unsigned long vm_flags)
{
return false;
@@ -448,6 +459,20 @@ static inline void *buff_vzalloc_hugepage_user(unsigned long size)
return NULL;
}
+static inline bool is_vmalloc_huge(unsigned long vm_flags)
+{
+	return false;
+}
+
+static inline bool is_vmalloc_sharepool(unsigned long vm_flags)
+{
+	return false;
+}
+
+static inline void sp_free_pages(struct page *page, struct vm_struct *area)
+{
+}
+
#endif
#endif /* LINUX_SHARE_POOL_H */
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index bb814f6418fd9..298eff5579b21 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -43,7 +43,6 @@ struct vm_struct {
unsigned long size;
unsigned long flags;
struct page **pages;
- unsigned int page_order;
unsigned int nr_pages;
phys_addr_t phys_addr;
const void *caller;
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 37b4762871142..8c70131e0b078 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2354,6 +2354,7 @@ struct vm_struct *remove_vm_area(const void *addr)
static void __vunmap(const void *addr, int deallocate_pages)
{
struct vm_struct *area;
+ unsigned int page_order = 0;
if (!addr)
return;
@@ -2369,13 +2370,14 @@ static void __vunmap(const void *addr, int deallocate_pages)
return;
}
-#ifdef CONFIG_ASCEND_SHARE_POOL
/* unmap a sharepool vm area will cause meamleak! */
- if (area->flags & VM_SHAREPOOL) {
+ if (is_vmalloc_sharepool(area->flags)) {
WARN(1, KERN_ERR "Memory leak due to vfree() sharepool vm area (%p) !\n", addr);
return;
}
-#endif
+
+ if (is_vmalloc_huge(area->flags))
+ page_order = PMD_SHIFT - PAGE_SHIFT;
debug_check_no_locks_freed(area->addr, get_vm_area_size(area));
debug_check_no_obj_freed(area->addr, get_vm_area_size(area));
@@ -2384,14 +2386,14 @@ static void __vunmap(const void *addr, int deallocate_pages)
if (deallocate_pages) {
int i;
- for (i = 0; i < area->nr_pages; i += 1U << area->page_order) {
+ for (i = 0; i < area->nr_pages; i += 1U << page_order) {
struct page *page = area->pages[i];
BUG_ON(!page);
if (sp_is_enabled())
sp_free_pages(page, area);
else
- __free_pages(page, area->page_order);
+ __free_pages(page, page_order);
}
kvfree(area->pages);
@@ -2589,7 +2591,6 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
area->pages = pages;
area->nr_pages = nr_pages;
- area->page_order = page_order;
for (i = 0; i < area->nr_pages; i += 1U << page_order) {
struct page *page;
@@ -2657,27 +2658,17 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
if (!size || (size >> PAGE_SHIFT) > totalram_pages)
goto fail;
- if (vmap_allow_huge && (pgprot_val(prot) == pgprot_val(PAGE_KERNEL))) {
- unsigned long size_per_node;
-
+ if (vmap_allow_huge && (pgprot_val(prot) == pgprot_val(PAGE_KERNEL)) && is_vmalloc_huge(vm_flags)) {
/*
- * Try huge pages. Only try for PAGE_KERNEL allocations,
- * others like modules don't yet expect huge pages in
- * their allocations due to apply_to_page_range not
- * supporting them.
+ * Alloc huge pages. Only valid for PAGE_KERNEL allocations and
+ * VM_HUGE_PAGES flags.
*/
- size_per_node = size;
- if (node == NUMA_NO_NODE && !sp_is_enabled())
- size_per_node /= num_online_nodes();
- if (size_per_node >= PMD_SIZE) {
- shift = PMD_SHIFT;
- align = max(real_align, 1UL << shift);
- size = ALIGN(real_size, 1UL << shift);
- }
+ shift = PMD_SHIFT;
+ align = max(real_align, 1UL << shift);
+ size = ALIGN(real_size, 1UL << shift);
}
-again:
size = PAGE_ALIGN(size);
area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED |
vm_flags, start, end, node, gfp_mask, caller);
@@ -2706,12 +2697,6 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
return addr;
fail:
- if (shift > PAGE_SHIFT) {
- shift = PAGE_SHIFT;
- align = real_align;
- size = real_size;
- goto again;
- }
if (!area) {
/* Warn for area allocation, page allocations already warn */
@@ -3776,7 +3761,7 @@ static int s_show(struct seq_file *m, void *p)
seq_printf(m, " %pS", v->caller);
if (v->nr_pages)
- seq_printf(m, " pages=%d order=%d", v->nr_pages, v->page_order);
+ seq_printf(m, " pages=%d", v->nr_pages);
if (v->phys_addr)
seq_printf(m, " phys=%pa", &v->phys_addr);
@@ -3796,8 +3781,8 @@ static int s_show(struct seq_file *m, void *p)
if (is_vmalloc_addr(v->pages))
seq_puts(m, " vpages");
- if (sp_is_enabled())
- seq_printf(m, " order=%d", v->page_order);
+ if (is_vmalloc_huge(v->flags))
+ seq_printf(m, " order=%d", PMD_SHIFT - PAGE_SHIFT);
show_numa_info(m, v);
seq_putc(m, '\n');
--
2.25.1
openEuler inclusion
category: bugfix
bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=6
CVE: NA
When building kernel 4.19 out of tree, the build fails with:
cannot touch 'certs/pubring.gpg'. Details can be found in Gitee
issue #I39160 or Bug 6 of the openEuler bugzilla.
The old rule touched the file at Makefile parse time via $(shell ...),
before the certs/ output directory exists in an out-of-tree build;
turning $(obj)/pubring.gpg into a proper make target defers its
creation until the directory is ready.
This patch fixes that build error.
Signed-off-by: Zhichang Yuan <erik.yuan(a)arm.com>
Signed-off-by: Steve Capper <Steve.Capper(a)arm.com>
---
certs/Makefile | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/certs/Makefile b/certs/Makefile
index 5053e3c86c97..5e56f9405758 100644
--- a/certs/Makefile
+++ b/certs/Makefile
@@ -5,11 +5,12 @@
obj-$(CONFIG_SYSTEM_TRUSTED_KEYRING) += system_keyring.o system_certificates.o
ifdef CONFIG_PGP_PRELOAD_PUBLIC_KEYS
-ifneq ($(shell ls certs/pubring.gpg 2> /dev/null), certs/pubring.gpg)
-$(shell touch certs/pubring.gpg)
-endif
-$(obj)/system_certificates.o: certs/pubring.gpg
+$(obj)/system_certificates.o: $(obj)/pubring.gpg
+
+$(obj)/pubring.gpg:
+ $(Q)touch $@
endif
+
obj-$(CONFIG_SYSTEM_BLACKLIST_KEYRING) += blacklist.o
ifneq ($(CONFIG_SYSTEM_BLACKLIST_HASH_LIST),"")
obj-$(CONFIG_SYSTEM_BLACKLIST_KEYRING) += blacklist_hashes.o
--
2.23.0
Alain Volmat (1):
i2c: stm32f7: fix configuration of the digital filter
Alexandre Belloni (1):
ARM: dts: lpc32xx: Revert set default clock rate of HCLK PLL
Alexandre Ghiti (1):
riscv: virt_addr_valid must check the address belongs to linear
mapping
Amir Goldstein (1):
ovl: skip getxattr of security labels
Arun Easi (1):
scsi: qla2xxx: Fix crash during driver load on big endian machines
Borislav Petkov (1):
x86/build: Disable CET instrumentation in the kernel for 32-bit too
Bui Quang Minh (1):
bpf: Check for integer overflow when using roundup_pow_of_two()
Edwin Peer (1):
net: watchdog: hold device global xmit lock during tx disable
Fangrui Song (1):
firmware_loader: align .builtin_fw to 8
Felipe Balbi (1):
usb: dwc3: ulpi: fix checkpatch warning
Florian Westphal (1):
netfilter: conntrack: skip identical origin tuple in same zone only
Greg Kroah-Hartman (1):
Linux 4.19.177
Hans de Goede (1):
platform/x86: hp-wmi: Disable tablet-mode reporting by default
Jan Beulich (8):
Xen/x86: don't bail early from clear_foreign_p2m_mapping()
Xen/x86: also check kernel mapping in set_foreign_p2m_mapping()
Xen/gntdev: correct dev_bus_addr handling in gntdev_map_grant_pages()
Xen/gntdev: correct error checking in gntdev_map_grant_pages()
xen-blkback: don't "handle" error by BUG()
xen-netback: don't "handle" error by BUG()
xen-scsiback: don't "handle" error by BUG()
xen-blkback: fix error handling in xen_blkbk_map()
Jozsef Kadlecsik (1):
netfilter: xt_recent: Fix attempt to update deleted entry
Juergen Gross (1):
xen/netback: avoid race in xenvif_rx_ring_slots_available()
Julien Grall (1):
arm/xen: Don't probe xenbus as part of an early initcall
Lai Jiangshan (1):
kvm: check tlbs_dirty directly
Lin Feng (1):
bfq-iosched: Revert "bfq: Fix computation of shallow depth"
Loic Poulain (1):
net: qrtr: Fix port ID for control messages
Lorenzo Bianconi (1):
mt76: dma: fix a possible memory leak in mt76_add_fragment()
Marc Zyngier (1):
arm64: dts: rockchip: Fix PCIe DT properties on rk3399
Miklos Szeredi (3):
ovl: perform vfs_getxattr() with mounter creds
cap: fix conversions on getxattr
ovl: expand warning in ovl_d_real()
Mohammad Athari Bin Ismail (1):
net: stmmac: set TxQ mode back to DCB after disabling CBS
NeilBrown (1):
net: fix iteration for sctp transport seq_files
Norbert Slusarek (1):
net/vmw_vsock: improve locking in vsock_connect_timeout()
Paolo Bonzini (1):
KVM: SEV: fix double locking due to incorrect backport
Randy Dunlap (1):
h8300: fix PREEMPTION build, TI_PRE_COUNT undefined
Russell King (2):
ARM: ensure the signal page contains defined contents
ARM: kexec: fix oops after TLB are invalidated
Sabyrzhan Tasbolatov (2):
net/rds: restrict iovecs length for RDS_CMSG_RDMA_ARGS
net/qrtr: restrict user-controlled length in qrtr_tun_write_iter()
Serge Semin (1):
usb: dwc3: ulpi: Replace CPU-based busyloop with Protocol-based one
Stefano Garzarella (2):
vsock/virtio: update credit only if socket is not closed
vsock: fix locking in vsock_shutdown()
Stefano Stabellini (1):
xen/arm: don't ignore return errors from set_phys_to_machine
Steven Rostedt (VMware) (2):
tracing: Do not count ftrace events in top level enable output
tracing: Check length before giving out the filter buffer
Sven Auhagen (1):
netfilter: flowtable: fix tcp and udp header checksum update
Victor Lu (2):
drm/amd/display: Fix dc_sink kref count in emulated_link_detect
drm/amd/display: Free atomic state after drm_atomic_commit
Makefile | 2 +-
arch/arm/boot/dts/lpc32xx.dtsi | 3 -
arch/arm/include/asm/kexec-internal.h | 12 ++++
arch/arm/kernel/asm-offsets.c | 5 ++
arch/arm/kernel/machine_kexec.c | 20 +++---
arch/arm/kernel/relocate_kernel.S | 38 +++--------
arch/arm/kernel/signal.c | 14 ++--
arch/arm/xen/enlighten.c | 2 -
arch/arm/xen/p2m.c | 6 +-
arch/arm64/boot/dts/rockchip/rk3399.dtsi | 2 +-
arch/h8300/kernel/asm-offsets.c | 3 +
arch/riscv/include/asm/page.h | 5 +-
arch/x86/Makefile | 6 +-
arch/x86/kvm/svm.c | 1 -
arch/x86/xen/p2m.c | 15 ++---
block/bfq-iosched.c | 8 +--
drivers/block/xen-blkback/blkback.c | 30 +++++----
.../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 18 +++--
drivers/i2c/busses/i2c-stm32f7.c | 11 ++-
.../net/ethernet/stmicro/stmmac/stmmac_tc.c | 7 +-
drivers/net/wireless/mediatek/mt76/dma.c | 8 ++-
drivers/net/xen-netback/netback.c | 4 +-
drivers/net/xen-netback/rx.c | 9 ++-
drivers/platform/x86/hp-wmi.c | 14 ++--
drivers/scsi/qla2xxx/qla_tmpl.c | 9 +--
drivers/scsi/qla2xxx/qla_tmpl.h | 2 +-
drivers/usb/dwc3/ulpi.c | 20 ++++--
drivers/xen/gntdev.c | 37 +++++-----
drivers/xen/xen-scsiback.c | 4 +-
drivers/xen/xenbus/xenbus.h | 1 -
drivers/xen/xenbus/xenbus_probe.c | 2 +-
fs/overlayfs/copy_up.c | 15 +++--
fs/overlayfs/inode.c | 2 +
fs/overlayfs/super.c | 13 ++--
include/asm-generic/vmlinux.lds.h | 2 +-
include/linux/netdevice.h | 2 +
include/xen/grant_table.h | 1 +
include/xen/xenbus.h | 2 -
kernel/bpf/stackmap.c | 2 +
kernel/trace/trace.c | 2 +-
kernel/trace/trace_events.c | 3 +-
net/netfilter/nf_conntrack_core.c | 3 +-
net/netfilter/nf_flow_table_core.c | 4 +-
net/netfilter/xt_recent.c | 12 +++-
net/qrtr/qrtr.c | 2 +-
net/qrtr/tun.c | 6 ++
net/rds/rdma.c | 3 +
net/sctp/proc.c | 16 +++--
net/vmw_vsock/af_vsock.c | 13 ++--
net/vmw_vsock/hyperv_transport.c | 4 --
net/vmw_vsock/virtio_transport_common.c | 4 +-
security/commoncap.c | 67 ++++++++++++-------
virt/kvm/kvm_main.c | 3 +-
53 files changed, 295 insertions(+), 204 deletions(-)
create mode 100644 arch/arm/include/asm/kexec-internal.h
--
2.25.1
From: Wang ShaoBo <bobo.shaobowang(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: 34278
CVE: NA
-------------------------------------------------
Set dentry before jumping to the error handling branch.
fs/resctrlfs.c: In function ‘resctrl_mount’:
fs/resctrlfs.c:419:9: warning: ‘dentry’ may be used uninitialized in this function [-Wmaybe-uninitialized]
return dentry;
^~~~~~
Fixes: 1b70f98921b10 ("arm64/mpam: resctrl: Use resctrl_group_init_alloc() for default group")
Signed-off-by: Wang ShaoBo <bobo.shaobowang(a)huawei.com>
Reviewed-by: Jian Cheng <cj.chengjian(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/resctrlfs.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/fs/resctrlfs.c b/fs/resctrlfs.c
index 190ad509714d5..df47b105609b9 100644
--- a/fs/resctrlfs.c
+++ b/fs/resctrlfs.c
@@ -351,8 +351,10 @@ static struct dentry *resctrl_mount(struct file_system_type *fs_type,
}
ret = resctrl_group_init_alloc(&resctrl_group_default);
- if (ret < 0)
+ if (ret < 0) {
+ dentry = ERR_PTR(ret);
goto out_schema;
+ }
ret = resctrl_group_create_info_dir(resctrl_group_default.kn, &kn_info);
if (ret) {
--
2.25.1
Cong Wang (1):
af_key: relax availability checks for skb size calculation
Dave Wysochanski (2):
SUNRPC: Move simple_get_bytes and simple_get_netobj into private
header
SUNRPC: Handle 0 length opaque XDR object data properly
David Collins (1):
regulator: core: avoid regulator_resolve_supply() race condition
Douglas Anderson (1):
regulator: core: Clean enabling always-on regulators + their supplies
Emmanuel Grumbach (1):
iwlwifi: pcie: add a NULL check in iwl_pcie_txq_unmap
Greg Kroah-Hartman (1):
Linux 4.19.176
Johannes Berg (3):
iwlwifi: mvm: take mutex for calling iwl_mvm_get_sync_time()
iwlwifi: pcie: fix context info memory leak
iwlwifi: mvm: guard against device removal in reprobe
Mark Brown (1):
regulator: Fix lockdep warning resolving supplies
Masami Hiramatsu (1):
tracing/kprobe: Fix to support kretprobe events on unloaded modules
Ming Lei (2):
block: don't hold q->sysfs_lock in elevator_init_mq
blk-mq: don't hold q->sysfs_lock in blk_mq_map_swqueue
Olliver Schinagl (1):
regulator: core: enable power when setting up constraints
Pan Bian (1):
chtls: Fix potential resource leak
Peter Gonda (1):
Fix unsynchronized access to sev members through
svm_register_enc_region
Phillip Lougher (3):
squashfs: add more sanity checks in id lookup
squashfs: add more sanity checks in inode lookup
squashfs: add more sanity checks in xattr id lookup
Sibi Sankar (2):
remoteproc: qcom_q6v5_mss: Validate modem blob firmware size before
load
remoteproc: qcom_q6v5_mss: Validate MBA firmware size before load
Steven Rostedt (VMware) (1):
fgraph: Initialize tracing_graph_pause at task creation
Tobin C. Harding (1):
lib/string: Add strscpy_pad() function
Trond Myklebust (1):
pNFS/NFSv4: Try to return invalid layout in pnfs_layout_process()
Makefile | 2 +-
arch/x86/kvm/svm.c | 18 ++--
block/blk-mq.c | 7 --
block/elevator.c | 14 ++--
drivers/crypto/chelsio/chtls/chtls_cm.c | 7 +-
.../wireless/intel/iwlwifi/mvm/debugfs-vif.c | 3 +
drivers/net/wireless/intel/iwlwifi/mvm/ops.c | 3 +-
.../intel/iwlwifi/pcie/ctxt-info-gen3.c | 11 ++-
drivers/net/wireless/intel/iwlwifi/pcie/tx.c | 5 ++
drivers/regulator/core.c | 84 +++++++++++++------
drivers/remoteproc/qcom_q6v5_pil.c | 11 ++-
fs/nfs/pnfs.c | 8 +-
fs/squashfs/export.c | 41 +++++++--
fs/squashfs/id.c | 40 +++++++--
fs/squashfs/squashfs_fs_sb.h | 1 +
fs/squashfs/super.c | 6 +-
fs/squashfs/xattr.h | 10 ++-
fs/squashfs/xattr_id.c | 66 +++++++++++++--
include/linux/kprobes.h | 2 +-
include/linux/string.h | 4 +
include/linux/sunrpc/xdr.h | 3 +-
init/init_task.c | 3 +-
kernel/kprobes.c | 34 ++++++--
kernel/trace/ftrace.c | 2 -
kernel/trace/trace_kprobe.c | 4 +-
lib/string.c | 47 +++++++++--
net/key/af_key.c | 6 +-
net/sunrpc/auth_gss/auth_gss.c | 30 +------
net/sunrpc/auth_gss/auth_gss_internal.h | 45 ++++++++++
net/sunrpc/auth_gss/gss_krb5_mech.c | 31 +------
30 files changed, 375 insertions(+), 173 deletions(-)
create mode 100644 net/sunrpc/auth_gss/auth_gss_internal.h
--
2.25.1
From: Chenguangli <chenguangli2(a)huawei.com>
driver inclusion
category: feature
bugzilla: NA
-----------------------------------------------------------------------
This module registers the hifc driver's FC capability with the
SCSI layer.
Signed-off-by: Chenguangli <chenguangli2(a)huawei.com>
Reviewed-by: Zengweiliang <zengweiliang.zengweiliang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/scsi/huawei/Kconfig | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/scsi/huawei/Kconfig b/drivers/scsi/huawei/Kconfig
index a9fbdef9b4b38..1fe47e626f568 100644
--- a/drivers/scsi/huawei/Kconfig
+++ b/drivers/scsi/huawei/Kconfig
@@ -3,9 +3,10 @@
#
config SCSI_HUAWEI_FC
- tristate "Huawei devices"
+ tristate "Huawei Fibre Channel Adapter"
depends on PCI && SCSI
depends on SCSI_FC_ATTRS
+ depends on ARM64 || X86_64
default m
---help---
If you have a Fibre Channel PCI card belonging to this class, say Y.
--
2.25.1
1
4
Alexey Dobriyan (1):
Input: i8042 - unbreak Pegatron C15B
Arnd Bergmann (1):
elfcore: fix building with clang
Aurelien Aptel (1):
cifs: report error instead of invalid when revalidating a dentry fails
Benjamin Valentin (1):
Input: xpad - sync supported devices with fork on GitHub
Chenxin Jin (1):
USB: serial: cp210x: add new VID/PID for supporting Teraoka AD2000
Christoph Schemmel (1):
USB: serial: option: Adding support for Cinterion MV31
DENG Qingfang (1):
net: dsa: mv88e6xxx: override existent unicast portvec in port_fdb_add
Dan Carpenter (1):
USB: gadget: legacy: fix an error code in eth_bind()
Dave Hansen (1):
x86/apic: Add extra serialization for non-serializing MSRs
David Howells (1):
rxrpc: Fix deadlock around release of dst cached on udp tunnel
Felix Fietkau (1):
mac80211: fix station rate table updates on assoc
Fengnan Chang (1):
mmc: core: Limit retries when analyse of SDIO tuples fails
Gary Bisson (1):
usb: dwc3: fix clock issue during resume in OTG mode
Greg Kroah-Hartman (1):
Linux 4.19.175
Gustavo A. R. Silva (1):
smb3: Fix out-of-bounds bug in SMB2_negotiate()
Heiko Stuebner (1):
usb: dwc2: Fix endpoint direction check in ep_from_windex
Hugh Dickins (1):
mm: thp: fix MADV_REMOVE deadlock on shmem THP
Jeremy Figgins (1):
USB: usblp: don't call usb_set_interface if there's a single alt
Josh Poimboeuf (1):
x86/build: Disable CET instrumentation in the kernel
Liangyan (1):
ovl: fix dentry leak in ovl_get_redirect
Marc Zyngier (1):
genirq/msi: Activate Multi-MSI early when MSI_FLAG_ACTIVATE_EARLY is
set
Mathias Nyman (1):
xhci: fix bounce buffer usage for non-sg list case
Muchun Song (4):
mm: hugetlbfs: fix cannot migrate the fallocated HugeTLB page
mm: hugetlb: fix a race between freeing and dissolving the page
mm: hugetlb: fix a race between isolating and freeing page
mm: hugetlb: remove VM_BUG_ON_PAGE from page_huge_active
Nadav Amit (1):
iommu/vt-d: Do not use flush-queue when caching-mode is on
Pho Tran (1):
USB: serial: cp210x: add pid/vid for WSDA-200-USB
Roman Gushchin (1):
memblock: do not start bottom-up allocations with kernel_end
Russell King (1):
ARM: footbridge: fix dc21285 PCI configuration accessors
Sean Christopherson (1):
KVM: SVM: Treat SVM as unsupported when running as an SEV guest
Stefan Chulski (1):
net: mvpp2: TCAM entry enable should be written after SRAM data
Thorsten Leemhuis (1):
nvme-pci: avoid the deepest sleep state on Kingston A2000 SSDs
Vadim Fedorenko (1):
net: ip_tunnel: fix mtu calculation
Wang ShaoBo (1):
kretprobe: Avoid re-registration of the same kretprobe earlier
Xiao Ni (1):
md: Set prev_flush_start and flush_bio in an atomic way
Xie He (1):
net: lapb: Copy the skb before sending a packet
Yoshihiro Shimoda (1):
usb: renesas_usbhs: Clear pipe running flag in usbhs_pkt_pop()
Zyta Szpak (1):
arm64: dts: ls1046a: fix dcfg address range
Makefile | 8 +--
arch/arm/mach-footbridge/dc21285.c | 12 ++---
.../arm64/boot/dts/freescale/fsl-ls1046a.dtsi | 2 +-
arch/x86/Makefile | 3 ++
arch/x86/include/asm/apic.h | 10 ----
arch/x86/include/asm/barrier.h | 18 +++++++
arch/x86/kernel/apic/apic.c | 4 ++
arch/x86/kernel/apic/x2apic_cluster.c | 6 ++-
arch/x86/kernel/apic/x2apic_phys.c | 6 ++-
arch/x86/kvm/svm.c | 5 ++
drivers/input/joystick/xpad.c | 17 ++++++-
drivers/input/serio/i8042-x86ia64io.h | 2 +
drivers/iommu/intel-iommu.c | 6 +++
drivers/md/md.c | 2 +
drivers/mmc/core/sdio_cis.c | 6 +++
drivers/net/dsa/mv88e6xxx/chip.c | 6 ++-
.../net/ethernet/marvell/mvpp2/mvpp2_prs.c | 10 ++--
drivers/nvme/host/pci.c | 2 +
drivers/usb/class/usblp.c | 19 ++++---
drivers/usb/dwc2/gadget.c | 8 +--
drivers/usb/dwc3/core.c | 2 +-
drivers/usb/gadget/legacy/ether.c | 4 +-
drivers/usb/host/xhci-ring.c | 31 +++++++-----
drivers/usb/renesas_usbhs/fifo.c | 1 +
drivers/usb/serial/cp210x.c | 2 +
drivers/usb/serial/option.c | 6 +++
fs/afs/main.c | 6 +--
fs/cifs/dir.c | 22 ++++++++-
fs/cifs/smb2pdu.h | 2 +-
fs/hugetlbfs/inode.c | 3 +-
fs/overlayfs/dir.c | 2 +-
include/linux/elfcore.h | 22 +++++++++
include/linux/hugetlb.h | 3 ++
include/linux/msi.h | 6 +++
kernel/Makefile | 1 -
kernel/elfcore.c | 26 ----------
kernel/irq/msi.c | 44 ++++++++---------
kernel/kprobes.c | 4 ++
mm/huge_memory.c | 37 ++++++++------
mm/hugetlb.c | 48 ++++++++++++++++--
mm/memblock.c | 49 +++----------------
net/ipv4/ip_tunnel.c | 16 +++---
net/lapb/lapb_out.c | 3 +-
net/mac80211/driver-ops.c | 5 +-
net/mac80211/rate.c | 3 +-
net/rxrpc/af_rxrpc.c | 6 +--
46 files changed, 307 insertions(+), 199 deletions(-)
delete mode 100644 kernel/elfcore.c
--
2.25.1
1
39
Re: [PATCH] kbuild fix: keep the original function in Makefile.lib for non-RPi
by Zheng Zengkai 08 Mar '21
by Zheng Zengkai 08 Mar '21
08 Mar '21
Hi Yafen,
> ---------------------- [WARNING] checkpatch ----------------------
>
> total: 0 errors, 1 warnings, 27 lines checked
>
> WARNING:COMMIT_LOG_LONG_LINE: Possible unwrapped commit description (prefer a maximum 75 chars per line)
> #84:
> /bin/sh: -c: line 0: `set -e; echo ' DTCO arch/arm64/boot/dts/overlays/act-led.dtbo'; mkdir -p arch/arm64/boot/dts/overlays/ ; gcc -E -Wp,-MMD,arch/arm64/boot/dts/overlays/.act-led.dtbo.d.pre.tmp -nostdinc -I./scripts/dtc/include-prefixes -undef -D__DTS__ -x assembler-with-cpp -o arch/arm64/boot/dts/overlays/.act-led.dtbo.dts.tmp arch/arm64/boot/dts/overlays/act-led-overlay.dts ; ./scripts/dtc/dtc -@ -H epapr -O dtb -o arch/arm64/boot/dts/overlays/act-led.dtbo -b 0 -i arch/arm64/boot/dts/overlays/ -Wno-interrupt_provider -Wno-unit_address_vs_reg -Wno-unit_address_format -Wno-gpios_property -Wno-avoid_unnecessary_addr_size -Wno-alias_paths -Wno-graph_child_address -Wno-simple_bus_reg -Wno-unique_unit_address -Wno-pci_device_reg -Wno-interrupts_property ifeq (y,y) -Wno-label_is_string -Wno-reg_format -Wno-pci_device_bus_num -Wno-i2c_bus_reg -Wno-spi_bus_reg -Wno-avoid_default_addr_size endif -d arch/arm64/boot/dts/overlays/.act-led.dtbo.d.dtc.tmp arch/arm64/boot/dts/overlays/.act-led.dtbo.dts.tmp ; cat ...; rm -f arch/arm64/boot/dts/overlays/.act-led.dtbo.d'
>
> total: 0 errors, 1 warnings, 27 lines checked
>
> NOTE: For some of the reported defects, checkpatch may be able to
> mechanically convert to the typical style using --fix or --fix-inplace.
>
> kbuild-fix-keep-the-original-function-in-Makefile.lib-for-non-RPi.patch has style problems, please review.
This patch has style problems, please review.
Thanks!
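(Editorial aside, not part of the thread: the check above can be
reproduced locally with the in-tree script, e.g.
./scripts/checkpatch.pl kbuild-fix-keep-the-original-function-in-Makefile.lib-for-non-RPi.patch
For defect classes checkpatch knows how to repair mechanically, the
--fix or --fix-inplace options mentioned in the NOTE rewrite the
offending lines; the COMMIT_LOG_LONG_LINE warning quoted here is in the
commit description itself, and the long line is a quoted shell command,
so leaving it unwrapped is reasonable.)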
2
1
[PATCH] kbuild fix: keep the original function in Makefile.lib for non-RPi
by fangyafenqidai@163.com 08 Mar '21
by fangyafenqidai@163.com 08 Mar '21
08 Mar '21
From: Fang Yafen <yafen(a)iscas.ac.cn>
Fix the following compile error when using bcm2711_defconfig (for RPi).
/bin/sh: -c: line 0: syntax error near unexpected token `('
/bin/sh: -c: line 0: `set -e; echo ' DTCO arch/arm64/boot/dts/overlays/act-led.dtbo'; mkdir -p arch/arm64/boot/dts/overlays/ ; gcc -E -Wp,-MMD,arch/arm64/boot/dts/overlays/.act-led.dtbo.d.pre.tmp -nostdinc -I./scripts/dtc/include-prefixes -undef -D__DTS__ -x assembler-with-cpp -o arch/arm64/boot/dts/overlays/.act-led.dtbo.dts.tmp arch/arm64/boot/dts/overlays/act-led-overlay.dts ; ./scripts/dtc/dtc -@ -H epapr -O dtb -o arch/arm64/boot/dts/overlays/act-led.dtbo -b 0 -i arch/arm64/boot/dts/overlays/ -Wno-interrupt_provider -Wno-unit_address_vs_reg -Wno-unit_address_format -Wno-gpios_property -Wno-avoid_unnecessary_addr_size -Wno-alias_paths -Wno-graph_child_address -Wno-simple_bus_reg -Wno-unique_unit_address -Wno-pci_device_reg -Wno-interrupts_property ifeq (y,y) -Wno-label_is_string -Wno-reg_format -Wno-pci_device_bus_num -Wno-i2c_bus_reg -Wno-spi_bus_reg -Wno-avoid_default_addr_size endif -d arch/arm64/boot/dts/overlays/.act-led.dtbo.d.dtc.tmp arch/arm64/boot/dts/overlays/.act-led.dtbo.dts.tmp ; cat ...; rm -f arch/arm64/boot/dts/overlays/.act-led.dtbo.d'
make[2]: *** [scripts/Makefile.lib;363: arch/arm64/boot/dts/overlays/act-led.dtbo] Error 1
make[2]: *** Waiting for unfinished jobs....
Related patches:
ffa2d13ccc3 BCM2708: Add core Device Tree support
4894352ec98 kbuild: Silence unavoidable dtc overlay warnings
a4a4d07f0cf kbuild: keep the original function for non-RPi
Signed-off-by: Fang Yafen <yafen(a)iscas.ac.cn>
---
scripts/Makefile.lib | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index c23e3ae7ef40..a0e0e2543165 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -342,25 +342,25 @@ endef
$(obj)/%.dt.yaml: $(src)/%.dts $(DTC) $(DT_TMP_SCHEMA) FORCE
$(call if_changed_rule,dtc,yaml)
+ifeq ($(CONFIG_OPENEULER_RASPBERRYPI),y)
quiet_cmd_dtco = DTCO $@
cmd_dtco = mkdir -p $(dir ${dtc-tmp}) ; \
$(CPP) $(dtc_cpp_flags) -x assembler-with-cpp -o $(dtc-tmp) $< ; \
$(DTC) -@ -H epapr -O dtb -o $@ -b 0 \
-i $(dir $<) $(DTC_FLAGS) \
-Wno-interrupts_property \
-ifeq ($(CONFIG_OPENEULER_RASPBERRYPI),y) \
-Wno-label_is_string \
-Wno-reg_format \
-Wno-pci_device_bus_num \
-Wno-i2c_bus_reg \
-Wno-spi_bus_reg \
-Wno-avoid_default_addr_size \
-endif \
-d $(depfile).dtc.tmp $(dtc-tmp) ; \
cat $(depfile).pre.tmp $(depfile).dtc.tmp > $(depfile)
$(obj)/%.dtbo: $(src)/%-overlay.dts FORCE
$(call if_changed_dep,dtco)
+endif
dtc-tmp = $(subst $(comma),_,$(dot-target).dts.tmp)
--
2.23.0
1
0
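(Editorial aside, not part of the thread: the shell syntax error fixed
above is a GNU make pitfall. Conditionals such as ifeq are evaluated
when the makefile is parsed; written inside a multi-line command
definition like cmd_dtco they are never evaluated by make but passed
verbatim to /bin/sh, which fails on `ifeq (y,y)'. A minimal sketch,
using a hypothetical CONFIG_FOO rather than the real option:

# Broken shape: the conditional is part of the command text, so the
# shell literally receives the words "ifeq (y,y)" and "endif".
#
#   cmd_demo = echo building \
#              ifeq ($(CONFIG_FOO),y) \
#              --extra-flag \
#              endif \
#              done
#
# Working shape: resolve the conditional at parse time, outside the
# command; the patch above applies the same idea by wrapping the whole
# cmd_dtco definition and its rule in one ifeq/endif.
ifeq ($(CONFIG_FOO),y)
extra-flag := --extra-flag
endif

cmd_demo = echo building $(extra-flag) done

demo:
	$(cmd_demo)

Running `make demo CONFIG_FOO=y' prints "building --extra-flag done";
without CONFIG_FOO it prints "building done".)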
[PATCH openEuler-21.03 v3 1/2] arm64: Add memmap parameter and register pmem
by zhuling8@huawei.com 08 Mar '21
by zhuling8@huawei.com 08 Mar '21
08 Mar '21
From: ZhuLing <zhuling8(a)huawei.com>
hulk inclusion
category: feature
bugzilla: 48159
CVE: NA
Register pmem in arm64:
Use memmap (memmap=nn[KMG]!ss[KMG]) to reserve memory, and the
e820 code (drivers/nvdimm/e820.c) to register persistent
memory, on arm64. When the kernel restarts or is updated, the
data in PMEM will not be lost and can be loaded faster; this is
a general feature.
drivers/nvdimm/e820.c:
This file scans "iomem_resource" and takes advantage of the
nvdimm resource discovery mechanism by registering a resource
named "Persistent Memory (legacy)"; it does not depend on the
architecture.
We will push the feature to the Linux kernel community and
discuss renaming the file there, because people have the
mistaken notion that e820.c depends on x86.
To use this feature, do as follows:
1. Reserve memory: add memmap to the kernel command line in
grub.cfg, memmap=nn[KMG]!ss[KMG], e.g. memmap=100K!0x1a0000000.
2. Load nd_e820.ko: modprobe nd_e820.
3. Check for the pmem device under /dev, e.g. /dev/pmem0.
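(Editorial aside, not part of the patch: once nd_e820 has created
/dev/pmem0, the reserved region behaves like an ordinary
memory-mappable device from userspace. A minimal sketch, assuming the
device name from step 3 and a reserved region of at least one page:

/* illustration only: map the legacy pmem region and write to it */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/dev/pmem0", O_RDWR);
	char *p;

	if (fd < 0) {
		perror("open /dev/pmem0");
		return 1;
	}
	p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		close(fd);
		return 1;
	}
	printf("previous contents: %.15s\n", p);	/* what survived */
	memcpy(p, "survives reboot", 16);		/* 15 chars + NUL */
	munmap(p, 4096);
	close(fd);
	return 0;
}

The write lands in the reserved physical range, which the kernel no
longer uses as ordinary RAM; that is what lets the data survive a
kernel restart or update, as the commit message describes.)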
Signed-off-by: ZhuLing <zhuling8(a)huawei.com>
Signed-off-by: Sang Yan <sangyan(a)huawei.com>
Acked-by: Hanjun Guo <guohanjun(a)huawei.com>
---
arch/arm64/Kconfig | 21 ++++++++++
arch/arm64/kernel/Makefile | 1 +
arch/arm64/kernel/pmem.c | 35 +++++++++++++++++
arch/arm64/kernel/setup.c | 10 +++++
arch/arm64/mm/init.c | 97 ++++++++++++++++++++++++++++++++++++++++++++++
drivers/nvdimm/Kconfig | 5 +++
drivers/nvdimm/Makefile | 2 +-
7 files changed, 170 insertions(+), 1 deletion(-)
create mode 100644 arch/arm64/kernel/pmem.c
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index c451137..326f26d 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -360,6 +360,27 @@ config ARM64_CPU_PARK
config ARCH_HAS_CPU_RELAX
def_bool y
+config ARM64_PMEM_RESERVE
+ bool "Reserve memory for persistent storage"
+ default n
+ help
+ Use memmap=nn[KMG]!ss[KMG](memmap=100K!0x1a0000000) reserve
+ memory for persistent storage.
+
+ Say y here to enable this feature.
+
+config ARM64_PMEM_LEGACY_DEVICE
+ bool "Create persistent storage"
+ depends on BLK_DEV
+ depends on LIBNVDIMM
+ select ARM64_PMEM_RESERVE
+ help
+ Use reserved memory for persistent storage when the kernel
+ restart or update. the data in PMEM will not be lost and
+ can be loaded faster.
+
+ Say y if unsure.
+
source "arch/arm64/Kconfig.platforms"
menu "Kernel Features"
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 967cb3c..be996f3 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -67,6 +67,7 @@ obj-$(CONFIG_ARM64_PTR_AUTH) += pointer_auth.o
obj-$(CONFIG_SHADOW_CALL_STACK) += scs.o
obj-$(CONFIG_ARM64_MTE) += mte.o
obj-$(CONFIG_MPAM) += mpam/
+obj-$(CONFIG_ARM64_PMEM_LEGACY_DEVICE) += pmem.o
obj-y += vdso/ probes/
obj-$(CONFIG_COMPAT_VDSO) += vdso32/
diff --git a/arch/arm64/kernel/pmem.c b/arch/arm64/kernel/pmem.c
new file mode 100644
index 0000000..16eaf70
--- /dev/null
+++ b/arch/arm64/kernel/pmem.c
@@ -0,0 +1,35 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright(c) 2021 Huawei Technologies Co., Ltd
+ *
+ * Derived from x86 and arm64 implement PMEM.
+ */
+#include <linux/platform_device.h>
+#include <linux/init.h>
+#include <linux/ioport.h>
+#include <linux/module.h>
+
+static int found(struct resource *res, void *data)
+{
+ return 1;
+}
+
+static int __init register_e820_pmem(void)
+{
+ struct platform_device *pdev;
+ int rc;
+
+ rc = walk_iomem_res_desc(IORES_DESC_PERSISTENT_MEMORY_LEGACY,
+ IORESOURCE_MEM, 0, -1, NULL, found);
+ if (rc <= 0)
+ return 0;
+
+ /*
+ * See drivers/nvdimm/e820.c for the implementation, this is
+ * simply here to trigger the module to load on demand.
+ */
+ pdev = platform_device_alloc("e820_pmem", -1);
+
+ return platform_device_add(pdev);
+}
+device_initcall(register_e820_pmem);
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 5e282d3..84c71c8 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -57,6 +57,10 @@
static int num_standard_resources;
static struct resource *standard_resources;
+#ifdef CONFIG_ARM64_PMEM_RESERVE
+extern struct resource pmem_res;
+#endif
+
phys_addr_t __fdt_pointer __initdata;
/*
@@ -270,6 +274,12 @@ static void __init request_standard_resources(void)
request_resource(res, &pin_memory_resource);
#endif
}
+
+#ifdef CONFIG_ARM64_PMEM_RESERVE
+ if (pmem_res.end && pmem_res.start)
+ request_resource(&iomem_resource, &pmem_res);
+#endif
+
}
static int __init reserve_memblock_reserved_regions(void)
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index b343744..f22faea 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -66,6 +66,18 @@ EXPORT_SYMBOL(memstart_addr);
phys_addr_t arm64_dma_phys_limit __ro_after_init;
phys_addr_t arm64_dma32_phys_limit __ro_after_init;
+static unsigned long long pmem_size, pmem_start;
+
+#ifdef CONFIG_ARM64_PMEM_RESERVE
+struct resource pmem_res = {
+ .name = "Persistent Memory (legacy)",
+ .start = 0,
+ .end = 0,
+ .flags = IORESOURCE_MEM,
+ .desc = IORES_DESC_PERSISTENT_MEMORY_LEGACY
+};
+#endif
+
#ifndef CONFIG_KEXEC_CORE
static void __init reserve_crashkernel(void)
{
@@ -378,6 +390,87 @@ static int __init reserve_park_mem(void)
}
#endif
+static int __init is_mem_valid(unsigned long long mem_size, unsigned long long mem_start)
+{
+ if (!memblock_is_region_memory(mem_start, mem_size)) {
+ pr_warn("cannot reserve mem: region is not memory!\n");
+ return -EINVAL;
+ }
+
+ if (memblock_is_region_reserved(mem_start, mem_size)) {
+ pr_warn("cannot reserve mem: region overlaps reserved memory!\n");
+ return -EINVAL;
+ }
+
+ if (!IS_ALIGNED(mem_start, SZ_2M)) {
+ pr_warn("cannot reserve mem: base address is not 2MB aligned!\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int __init parse_memmap_one(char *p)
+{
+ char *oldp;
+ phys_addr_t start_at, mem_size;
+
+ if (!p)
+ return -EINVAL;
+
+ oldp = p;
+ mem_size = memparse(p, &p);
+ if (p == oldp)
+ return -EINVAL;
+
+ if (!mem_size)
+ return -EINVAL;
+
+ mem_size = PAGE_ALIGN(mem_size);
+
+ if (*p == '!') {
+ start_at = memparse(p+1, &p);
+
+ if (is_mem_valid(mem_size, start_at) != 0)
+ return -EINVAL;
+
+ pr_info("pmem reserved: 0x%016llx - 0x%016llx (%lld MB)\n",
+ start_at, start_at + mem_size, mem_size >> 20);
+ pmem_start = start_at;
+ pmem_size = mem_size;
+ } else
+ pr_info("Unrecognized memmap option, please check the parameter.\n");
+
+ return *p == '\0' ? 0 : -EINVAL;
+}
+
+static int __init parse_memmap_opt(char *str)
+{
+ while (str) {
+ char *k = strchr(str, ',');
+
+ if (k)
+ *k++ = 0;
+
+ parse_memmap_one(str);
+ str = k;
+ }
+
+ return 0;
+}
+early_param("memmap", parse_memmap_opt);
+
+#ifdef CONFIG_ARM64_PMEM_RESERVE
+static void __init reserve_pmem(void)
+{
+ memblock_remove(pmem_start, pmem_size);
+ pr_info("pmem reserved: 0x%016llx - 0x%016llx (%lld MB)\n",
+ pmem_start, pmem_start + pmem_size, pmem_size >> 20);
+ pmem_res.start = pmem_start;
+ pmem_res.end = pmem_start + pmem_size - 1;
+}
+#endif
+
void __init arm64_memblock_init(void)
{
const s64 linear_region_size = BIT(vabits_actual - 1);
@@ -511,6 +604,10 @@ void __init arm64_memblock_init(void)
reserve_elfcorehdr();
+#ifdef CONFIG_ARM64_PMEM_RESERVE
+ reserve_pmem();
+#endif
+
high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
dma_contiguous_reserve(arm64_dma32_phys_limit);
diff --git a/drivers/nvdimm/Kconfig b/drivers/nvdimm/Kconfig
index b7d1eb3..ce4de75 100644
--- a/drivers/nvdimm/Kconfig
+++ b/drivers/nvdimm/Kconfig
@@ -132,3 +132,8 @@ config NVDIMM_TEST_BUILD
infrastructure.
endif
+
+config PMEM_LEGACY
+ tristate "Pmem_legacy"
+ select X86_PMEM_LEGACY if X86
+ select ARM64_PMEM_LEGACY_DEVICE if ARM64
diff --git a/drivers/nvdimm/Makefile b/drivers/nvdimm/Makefile
index 29203f3..6f8dc92 100644
--- a/drivers/nvdimm/Makefile
+++ b/drivers/nvdimm/Makefile
@@ -3,7 +3,7 @@ obj-$(CONFIG_LIBNVDIMM) += libnvdimm.o
obj-$(CONFIG_BLK_DEV_PMEM) += nd_pmem.o
obj-$(CONFIG_ND_BTT) += nd_btt.o
obj-$(CONFIG_ND_BLK) += nd_blk.o
-obj-$(CONFIG_X86_PMEM_LEGACY) += nd_e820.o
+obj-$(CONFIG_PMEM_LEGACY) += nd_e820.o
obj-$(CONFIG_OF_PMEM) += of_pmem.o
obj-$(CONFIG_VIRTIO_PMEM) += virtio_pmem.o nd_virtio.o
--
2.9.5
1
1
Hello everyone,
Based on earlier discussions, both the OSVs and the driver compatibility
SIG have raised requirements for kernel interface compatibility. Based
on that input and on the collected data about which interfaces the
drivers use, we have produced a draft KABI compatibility whitelist for
20.03 LTS SP1/SP2; please review it.
KABI (Kernel Application Binary Interface) compatibility means binary
compatibility between the kernel and drivers: a driver can be installed
and used on a new kernel without being recompiled. If every interface a
driver uses is kept compatible, the driver needs no rebuild to run on
the new kernel version. The upstream community deliberately does not
maintain KABI compatibility, both for ease of development and to avoid
architectural decay; in the industry it is the Linux distributions that
provide it, and the more interfaces are kept compatible and the longer
they are maintained, the higher the maintenance cost.
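(Editorial aside, not part of the original mail: whether a prebuilt
module loads on a given kernel is governed by the CRCs of the exported
symbols it imports, recorded in the module's __versions section and in
the kernel build's Module.symvers. A minimal sketch of a compatibility
check, assuming kmod's modprobe and a Module.symvers from the target
kernel build (the path varies by distribution):

# Dump the symbol CRCs the prebuilt module was linked against ...
modprobe --dump-modversions hifc.ko | sort -k2 > mod.crcs

# ... and flag every symbol whose CRC differs in the target kernel.
# A changed CRC means the KABI of that symbol changed, and the module
# will not load without a rebuild.
awk 'NR==FNR { crc[$2] = $1; next }
     ($2 in crc) && crc[$2] != $1 { print "KABI break:", $2 }' \
    mod.crcs /path/to/kernel-build/Module.symvers

A symbol on the whitelist below is thus a commitment that its CRC, i.e.
its binary signature, stays stable across the covered releases.)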
Through repeated exchanges with downstream OSVs and driver teams, and
based on the requirements they reported, the openEuler 20.03 LTS SP1 and
SP2 releases will provide KABI compatibility over a defined scope. The
goal is that the commonly used board and adapter drivers remain
compatible across SP1, subsequent SP1 update releases, SP2, and SP2
update releases.
The main input to the KABI whitelist is the drivers (.ko files). Based
on downstream feedback we collected a set of commonly used drivers as
the compatibility targets; the lists are as follows.
Drivers (.ko) planned to be kept compatible on ARM64:
amdgpu.ko bnx2.ko bnx2x.ko bnxt_en.ko bnxt_re.ko hclge.ko hclgevf.ko hifc.ko hinic.ko hnae3.ko hns3.ko i40e.ko ice.ko igb.ko ixgbe.ko ixgbevf.ko lpfc.ko megaraid_sas.ko mlx4_core.ko mlx4_ib.ko mlx5_core.ko mlx5_ib.ko mpt3sas.ko nouveau.ko nvme.ko qed.ko qede.ko qla2xxx.ko smartpqi.ko tg3.ko txgbe.ko
Drivers (.ko) planned to be kept compatible on x86:
amdgpu.ko bnx2.ko bnx2x.ko bnxt_en.ko bnxt_re.ko hifc.ko hinic.ko i40e.ko ice.ko igb.ko ixgbe.ko ixgbevf.ko lpfc.ko megaraid_sas.ko mlx4_core.ko mlx4_ib.ko mlx5_core.ko mlx5_ib.ko mpt3sas.ko nouveau.ko nvme.ko qed.ko qede.ko qla2xxx.ko smartpqi.ko tg3.ko txgbe.ko
We collected the interfaces used by the drivers above and formed a
draft KABI whitelist; please review it.
Important notes:
1. Most of the drivers above do not yet have official binary builds for
openEuler SP1, so we collected the KABI lists from the open-source
versions or close equivalents; they may differ slightly from the
vendors' final releases. If a downstream OSV or driver team finds an
interface missing from the lists, please raise it during the review.
2. For any new KABI compatibility request, please provide the names of
the KABI interfaces to be kept compatible and of the drivers that use
them, so the request can be evaluated.
3. Review feedback can be given in the issue
https://gitee.com/openeuler/kernel/issues/I3ABVJ
or by replying to this mail.
4. The review feedback window is one week, ending next Friday
(March 12) at 17:00.
---
openEuler kernel SIG, 2021-3-6
---
Appendix 1: draft KABI whitelist for the ARM64 platform (1905 entries)
acpi_bus_get_device
acpi_check_dsm
acpi_dev_found
acpi_disabled
acpi_dma_configure
acpi_evaluate_dsm
acpi_evaluate_object
acpi_format_exception
acpi_gbl_FADT
acpi_get_devices
acpi_get_handle
acpi_get_name
acpi_get_table
acpi_gsi_to_irq
acpi_handle_printk
acpi_has_method
acpi_lid_open
acpi_os_map_memory
acpi_os_unmap_memory
acpi_register_gsi
acpi_unregister_gsi
add_timer
add_wait_queue
add_wait_queue_exclusive
admin_timeout
alloc_chrdev_region
alloc_cpu_rmap
__alloc_disk_node
alloc_etherdev_mqs
alloc_netdev_mqs
alloc_pages_current
__alloc_pages_nodemask
__alloc_percpu
__alloc_skb
__alloc_workqueue_key
__arch_clear_user
__arch_copy_from_user
__arch_copy_in_user
__arch_copy_to_user
arch_timer_read_counter
arch_wb_cache_pmem
arm64_const_caps_ready
arp_tbl
async_schedule
_atomic_dec_and_lock
atomic_notifier_call_chain
atomic_notifier_chain_register
atomic_notifier_chain_unregister
attribute_container_find_class_device
autoremove_wake_function
backlight_device_register
backlight_device_unregister
backlight_force_update
bdevname
bdev_read_only
bdget_disk
_bin2bcd
bio_add_page
bio_alloc_bioset
bio_clone_fast
bio_endio
bio_free_pages
bio_init
bio_put
bioset_exit
bioset_init
__bitmap_and
__bitmap_andnot
__bitmap_clear
__bitmap_complement
__bitmap_equal
bitmap_find_free_region
bitmap_find_next_zero_area_off
bitmap_free
__bitmap_intersects
__bitmap_or
__bitmap_parse
bitmap_parselist
bitmap_print_to_pagebuf
bitmap_release_region
__bitmap_set
__bitmap_weight
__bitmap_xor
bitmap_zalloc
bit_wait
blk_alloc_queue
blk_check_plugged
blk_cleanup_queue
blkdev_get_by_path
blkdev_issue_discard
blkdev_issue_write_same
blkdev_issue_zeroout
blkdev_put
blk_execute_rq
blk_execute_rq_nowait
blk_finish_plug
blk_get_queue
blk_get_request
blk_mq_alloc_tag_set
blk_mq_complete_request
blk_mq_end_request
blk_mq_free_request
blk_mq_free_tag_set
blk_mq_init_queue
blk_mq_map_queues
blk_mq_pci_map_queues
blk_mq_quiesce_queue
blk_mq_run_hw_queues
blk_mq_start_request
blk_mq_tagset_busy_iter
blk_mq_tag_to_rq
blk_mq_unique_tag
blk_mq_unquiesce_queue
blk_mq_update_nr_hw_queues
blk_put_queue
blk_put_request
blk_queue_bounce_limit
blk_queue_dma_alignment
blk_queue_flag_clear
blk_queue_flag_set
blk_queue_io_min
blk_queue_io_opt
blk_queue_logical_block_size
blk_queue_make_request
blk_queue_max_discard_sectors
blk_queue_max_hw_sectors
blk_queue_max_segments
blk_queue_max_segment_size
blk_queue_max_write_same_sectors
blk_queue_physical_block_size
blk_queue_rq_timeout
blk_queue_segment_boundary
blk_queue_split
blk_queue_stack_limits
blk_queue_update_dma_alignment
blk_queue_virt_boundary
blk_queue_write_cache
blk_rq_append_bio
blk_rq_count_integrity_sg
blk_rq_map_integrity_sg
blk_rq_map_kern
blk_rq_map_sg
blk_rq_map_user_iov
blk_rq_unmap_user
blk_set_stacking_limits
blk_start_plug
blk_status_to_errno
blk_verify_command
blocking_notifier_call_chain
blocking_notifier_chain_register
blocking_notifier_chain_unregister
bpf_prog_add
bpf_prog_inc
bpf_prog_put
bpf_trace_run1
bpf_trace_run2
bpf_trace_run3
bpf_trace_run5
bpf_warn_invalid_xdp_action
bsg_job_done
btree_destroy
btree_geo32
btree_geo64
btree_get_prev
btree_init
btree_insert
btree_last
btree_lookup
btree_remove
btree_update
build_skb
cache_line_size
call_netdevice_notifiers
call_rcu_sched
call_srcu
call_usermodehelper
cancel_delayed_work
cancel_delayed_work_sync
cancel_work_sync
capable
cdev_add
cdev_del
cdev_init
__chash_table_copy_in
__chash_table_copy_out
__check_object_size
__class_create
class_destroy
__class_register
class_unregister
_cleanup_srcu_struct
clk_get_rate
commit_creds
compat_alloc_user_space
complete
complete_all
complete_and_exit
completion_done
component_add
component_del
_cond_resched
console_lock
console_unlock
__const_udelay
consume_skb
cpu_bit_bitmap
cpufreq_quick_get
__cpuhp_remove_state
__cpuhp_setup_state
cpu_hwcap_keys
cpu_hwcaps
cpumask_local_spread
cpumask_next
cpumask_next_and
cpu_number
__cpu_online_mask
__cpu_possible_mask
__cpu_present_mask
cpus_read_lock
cpus_read_unlock
crc32c
crc32_le
crc8
crc8_populate_msb
crc_t10dif
crypto_ahash_digest
crypto_ahash_final
crypto_ahash_setkey
crypto_alloc_ahash
crypto_destroy_tfm
crypto_register_shash
crypto_unregister_shash
csum_ipv6_magic
csum_partial
csum_tcpudp_nofold
_ctype
dcb_getapp
dcb_ieee_delapp
dcb_ieee_getapp_mask
dcb_ieee_setapp
dcbnl_cee_notify
dcb_setapp
debugfs_create_atomic_t
debugfs_create_dir
debugfs_create_file
debugfs_create_symlink
debugfs_create_u16
debugfs_create_u32
debugfs_create_u64
debugfs_create_u8
debugfs_lookup
debugfs_remove
default_llseek
default_wake_function
__delay
delayed_work_timer_fn
del_gendisk
del_timer
del_timer_sync
destroy_workqueue
dev_add_pack
dev_addr_add
dev_addr_del
dev_base_lock
dev_close
dev_driver_string
_dev_err
__dev_get_by_index
dev_get_by_index
dev_get_by_index_rcu
dev_get_by_name
dev_get_iflink
dev_get_stats
device_add_disk
device_create
device_create_file
device_destroy
device_link_add
device_release_driver
device_remove_file
device_reprobe
device_set_wakeup_capable
device_set_wakeup_enable
_dev_info
__dev_kfree_skb_any
__dev_kfree_skb_irq
devlink_alloc
devlink_free
devlink_param_driverinit_value_get
devlink_param_driverinit_value_set
devlink_params_register
devlink_params_unregister
devlink_param_value_changed
devlink_port_attrs_set
devlink_port_register
devlink_port_type_clear
devlink_port_type_eth_set
devlink_port_type_ib_set
devlink_port_unregister
devlink_region_create
devlink_region_destroy
devlink_region_shapshot_id_get
devlink_region_snapshot_create
devlink_register
devlink_unregister
dev_mc_add
dev_mc_add_excl
dev_mc_del
devm_free_irq
devm_hwmon_device_register_with_groups
devm_ioremap
devm_iounmap
devm_kfree
devm_kmalloc
devm_kmemdup
devm_mdiobus_alloc_size
devm_request_threaded_irq
_dev_notice
dev_open
dev_printk
dev_queue_xmit
dev_remove_pack
dev_set_mac_address
dev_set_mtu
dev_set_promiscuity
dev_trans_start
dev_uc_add
dev_uc_add_excl
dev_uc_del
_dev_warn
disable_irq
disable_irq_nosync
dma_alloc_from_dev_coherent
dma_common_get_sgtable
dma_common_mmap
dma_fence_add_callback
dma_fence_array_create
dma_fence_context_alloc
dma_fence_free
dma_fence_get_status
dma_fence_init
dma_fence_release
dma_fence_signal
dma_fence_signal_locked
dma_fence_wait_any_timeout
dma_fence_wait_timeout
dma_get_required_mask
dmam_alloc_coherent
dmam_free_coherent
dma_pool_alloc
dma_pool_create
dma_pool_destroy
dma_pool_free
dma_release_from_dev_coherent
dmi_check_system
dmi_get_system_info
dmi_match
__do_once_done
__do_once_start
do_wait_intr
down
downgrade_write
down_interruptible
down_read
down_read_trylock
down_timeout
down_trylock
down_write
down_write_killable
down_write_trylock
dput
dql_completed
dql_reset
drain_workqueue
driver_create_file
driver_for_each_device
driver_remove_file
drm_add_edid_modes
drm_add_modes_noedid
drm_atomic_add_affected_connectors
drm_atomic_add_affected_planes
drm_atomic_commit
drm_atomic_get_connector_state
drm_atomic_get_crtc_state
drm_atomic_get_plane_state
drm_atomic_helper_check
drm_atomic_helper_check_modeset
drm_atomic_helper_check_planes
drm_atomic_helper_check_plane_state
drm_atomic_helper_cleanup_planes
drm_atomic_helper_commit
drm_atomic_helper_commit_cleanup_done
drm_atomic_helper_commit_hw_done
__drm_atomic_helper_connector_destroy_state
drm_atomic_helper_connector_destroy_state
__drm_atomic_helper_connector_duplicate_state
__drm_atomic_helper_connector_reset
__drm_atomic_helper_crtc_destroy_state
__drm_atomic_helper_crtc_duplicate_state
drm_atomic_helper_disable_plane
drm_atomic_helper_legacy_gamma_set
drm_atomic_helper_page_flip
__drm_atomic_helper_plane_destroy_state
drm_atomic_helper_plane_destroy_state
__drm_atomic_helper_plane_duplicate_state
drm_atomic_helper_prepare_planes
drm_atomic_helper_resume
drm_atomic_helper_set_config
drm_atomic_helper_setup_commit
drm_atomic_helper_shutdown
drm_atomic_helper_suspend
drm_atomic_helper_swap_state
drm_atomic_helper_update_legacy_modeset_state
drm_atomic_helper_update_plane
drm_atomic_helper_wait_for_dependencies
drm_atomic_helper_wait_for_fences
drm_atomic_helper_wait_for_flip_done
drm_atomic_state_alloc
drm_atomic_state_default_clear
drm_atomic_state_default_release
__drm_atomic_state_free
drm_atomic_state_init
drm_calc_vbltimestamp_from_scanoutpos
drm_color_lut_extract
drm_compat_ioctl
drm_connector_attach_encoder
drm_connector_cleanup
drm_connector_init
drm_connector_list_iter_begin
drm_connector_list_iter_end
drm_connector_list_iter_next
drm_connector_register
drm_connector_set_path_property
drm_connector_unregister
drm_connector_update_edid_property
drm_crtc_accurate_vblank_count
drm_crtc_add_crc_entry
drm_crtc_arm_vblank_event
drm_crtc_cleanup
__drm_crtc_commit_free
drm_crtc_enable_color_mgmt
drm_crtc_force_disable_all
drm_crtc_from_index
drm_crtc_handle_vblank
drm_crtc_helper_set_config
drm_crtc_helper_set_mode
drm_crtc_init
drm_crtc_init_with_planes
drm_crtc_send_vblank_event
drm_crtc_vblank_count
drm_crtc_vblank_get
drm_crtc_vblank_off
drm_crtc_vblank_on
drm_crtc_vblank_put
drm_cvt_mode
drm_dbg
drm_debugfs_create_files
drm_detect_hdmi_monitor
drm_detect_monitor_audio
drm_dev_alloc
drm_dev_put
drm_dev_register
drm_dev_unregister
drm_dp_atomic_find_vcpi_slots
drm_dp_atomic_release_vcpi_slots
drm_dp_aux_register
drm_dp_aux_unregister
drm_dp_bw_code_to_link_rate
drm_dp_calc_pbn_mode
drm_dp_channel_eq_ok
drm_dp_check_act_status
drm_dp_clock_recovery_ok
drm_dp_dpcd_read
drm_dp_dpcd_read_link_status
drm_dp_dpcd_write
drm_dp_find_vcpi_slots
drm_dp_get_adjust_request_pre_emphasis
drm_dp_get_adjust_request_voltage
drm_dp_link_rate_to_bw_code
drm_dp_link_train_channel_eq_delay
drm_dp_link_train_clock_recovery_delay
drm_dp_mst_allocate_vcpi
drm_dp_mst_deallocate_vcpi
drm_dp_mst_detect_port
drm_dp_mst_get_edid
drm_dp_mst_hpd_irq
drm_dp_mst_reset_vcpi_slots
drm_dp_mst_topology_mgr_destroy
drm_dp_mst_topology_mgr_init
drm_dp_mst_topology_mgr_resume
drm_dp_mst_topology_mgr_set_mst
drm_dp_mst_topology_mgr_suspend
drm_dp_update_payload_part1
drm_dp_update_payload_part2
drm_edid_header_is_valid
drm_edid_is_valid
drm_edid_to_sad
drm_edid_to_speaker_allocation
drm_encoder_cleanup
drm_encoder_init
drm_err
drm_fb_helper_add_one_connector
drm_fb_helper_alloc_fbi
drm_fb_helper_blank
drm_fb_helper_cfb_copyarea
drm_fb_helper_cfb_fillrect
drm_fb_helper_cfb_imageblit
drm_fb_helper_check_var
drm_fb_helper_debug_enter
drm_fb_helper_debug_leave
drm_fb_helper_fill_fix
drm_fb_helper_fill_var
drm_fb_helper_fini
drm_fb_helper_hotplug_event
drm_fb_helper_init
drm_fb_helper_initial_config
drm_fb_helper_ioctl
drm_fb_helper_lastclose
drm_fb_helper_output_poll_changed
drm_fb_helper_pan_display
drm_fb_helper_prepare
drm_fb_helper_remove_one_connector
drm_fb_helper_setcmap
drm_fb_helper_set_par
drm_fb_helper_set_suspend
drm_fb_helper_set_suspend_unlocked
drm_fb_helper_single_add_all_connectors
drm_fb_helper_unregister_fbi
drm_format_plane_cpp
drm_framebuffer_cleanup
drm_framebuffer_init
drm_framebuffer_unregister_private
drm_gem_dmabuf_kmap
drm_gem_dmabuf_kunmap
drm_gem_dmabuf_mmap
drm_gem_dmabuf_release
drm_gem_dmabuf_vmap
drm_gem_dmabuf_vunmap
drm_gem_fb_create_handle
drm_gem_fb_destroy
drm_gem_handle_create
drm_gem_handle_delete
drm_gem_map_attach
drm_gem_map_detach
drm_gem_map_dma_buf
drm_gem_object_init
drm_gem_object_lookup
drm_gem_object_put_unlocked
drm_gem_object_release
drm_gem_prime_export
drm_gem_prime_fd_to_handle
drm_gem_prime_handle_to_fd
drm_gem_prime_import
drm_gem_private_object_init
drm_gem_unmap_dma_buf
drm_get_edid
drm_get_edid_switcheroo
drm_get_format_name
drm_get_max_iomem
drm_global_item_ref
drm_global_item_unref
drm_handle_vblank
drm_hdmi_avi_infoframe_from_display_mode
drm_hdmi_vendor_infoframe_from_display_mode
drm_helper_connector_dpms
drm_helper_disable_unused_functions
drm_helper_hpd_irq_event
drm_helper_mode_fill_fb_struct
drm_helper_probe_single_connector_modes
drm_helper_resume_force_mode
drm_i2c_encoder_detect
drm_i2c_encoder_init
drm_i2c_encoder_mode_fixup
drm_i2c_encoder_restore
drm_i2c_encoder_save
drm_invalid_op
drm_ioctl
drm_irq_install
drm_irq_uninstall
drm_is_current_master
drm_kms_helper_hotplug_event
drm_kms_helper_is_poll_worker
drm_kms_helper_poll_disable
drm_kms_helper_poll_enable
drm_kms_helper_poll_fini
drm_kms_helper_poll_init
drm_match_cea_mode
drm_mm_init
drm_mm_insert_node_in_range
drm_mm_print
drm_mm_remove_node
drm_mm_takedown
drm_mode_config_cleanup
drm_mode_config_init
drm_mode_config_reset
drm_mode_copy
drm_mode_create_dvi_i_properties
drm_mode_create_scaling_mode_property
drm_mode_create_tv_properties
drm_mode_crtc_set_gamma_size
drm_mode_debug_printmodeline
drm_mode_destroy
drm_mode_duplicate
drm_mode_equal
drm_mode_get_hv_timing
drm_mode_is_420_only
drm_mode_legacy_fb_format
drm_mode_object_find
drm_mode_object_put
drm_mode_probed_add
drm_mode_set_crtcinfo
drm_modeset_lock
drm_modeset_lock_all
drm_modeset_lock_all_ctx
drm_mode_set_name
drm_modeset_unlock
drm_modeset_unlock_all
drm_mode_vrefresh
drm_object_attach_property
drm_object_property_set_value
drm_open
drm_plane_cleanup
drm_plane_create_alpha_property
drm_plane_create_color_properties
drm_plane_create_zpos_immutable_property
drm_plane_create_zpos_property
drm_plane_force_disable
drm_plane_init
drm_poll
drm_primary_helper_destroy
drm_primary_helper_funcs
drm_prime_gem_destroy
drm_prime_pages_to_sg
drm_prime_sg_to_page_addr_arrays
drm_printf
__drm_printfn_seq_file
drm_property_add_enum
drm_property_create
drm_property_create_enum
drm_property_create_range
__drm_puts_seq_file
drm_read
drm_release
drm_scdc_read
drm_scdc_write
drm_sched_dependency_optimized
drm_sched_entity_destroy
drm_sched_entity_fini
drm_sched_entity_flush
drm_sched_entity_init
drm_sched_entity_push_job
drm_sched_entity_set_rq
drm_sched_fini
drm_sched_hw_job_reset
drm_sched_init
drm_sched_job_init
drm_sched_job_recovery
drm_send_event_locked
drm_syncobj_create
drm_syncobj_find
drm_syncobj_find_fence
drm_syncobj_free
drm_syncobj_get_fd
drm_syncobj_get_handle
drm_syncobj_replace_fence
drm_universal_plane_init
drm_vblank_init
drm_vma_node_allow
drm_vma_node_is_allowed
drm_vma_node_revoke
dst_release
dummy_dma_ops
dump_stack
__dynamic_dev_dbg
__dynamic_netdev_dbg
__dynamic_pr_debug
elfcorehdr_addr
emergency_restart
enable_irq
errno_to_blk_status
ether_setup
eth_get_headlen
eth_platform_get_mac_address
ethtool_convert_legacy_u32_to_link_mode
ethtool_convert_link_mode_to_legacy_u32
ethtool_intersect_link_masks
ethtool_op_get_link
ethtool_op_get_ts_info
eth_type_trans
eth_validate_addr
eventfd_ctx_fdget
eventfd_ctx_put
eventfd_signal
event_triggers_call
fasync_helper
fc_attach_transport
fc_block_scsi_eh
fc_eh_timed_out
fc_get_event_number
fc_host_post_event
fc_host_post_vendor_event
fc_release_transport
fc_remote_port_add
fc_remote_port_delete
fc_remote_port_rolechg
fc_remove_host
fc_vport_create
fc_vport_terminate
fd_install
fget
filemap_fault
filp_close
filp_open
find_get_pid
find_last_bit
find_next_bit
find_next_zero_bit
find_pid_ns
find_vma
finish_wait
firmware_request_nowarn
fixed_size_llseek
flow_keys_dissector
flush_delayed_work
flush_signals
flush_work
flush_workqueue
force_sig
fortify_panic
fput
free_fib_info
free_irq
free_irq_cpu_rmap
free_netdev
__free_pages
free_pages
free_percpu
from_kgid
from_kuid
fs_bio_set
gcd
generate_random_uuid
generic_end_io_acct
generic_file_llseek
generic_handle_irq
generic_make_request
generic_start_io_acct
genlmsg_put
genl_register_family
genl_unregister_family
genphy_read_status
genphy_restart_aneg
get_device
__get_free_pages
get_gendisk
get_pid_task
get_random_bytes
__get_task_comm
get_task_mm
get_unused_fd_flags
get_user_pages
get_user_pages_remote
get_zeroed_page
gic_pmr_sync
groups_alloc
groups_free
handle_simple_irq
hdmi_avi_infoframe_pack
hdmi_infoframe_pack
hnae3_register_ae_algo
hnae3_register_ae_dev
hnae3_register_client
hnae3_set_client_init_flag
hnae3_unregister_ae_algo
hnae3_unregister_ae_dev
hnae3_unregister_client
hrtimer_cancel
hrtimer_forward
hrtimer_init
hrtimer_start_range_ns
hrtimer_try_to_cancel
__hw_addr_sync_dev
__hw_addr_unsync_dev
hwmon_device_register
hwmon_device_register_with_groups
hwmon_device_register_with_info
hwmon_device_unregister
i2c_add_adapter
i2c_bit_add_bus
i2c_bit_algo
i2c_del_adapter
i2c_new_device
i2c_smbus_read_byte_data
i2c_smbus_write_byte_data
i2c_transfer
i2c_unregister_device
__ib_alloc_cq
ib_alloc_device
ib_alloc_odp_umem
__ib_alloc_pd
__ib_alloc_xrcd
__ib_create_cq
ib_create_qp
ib_create_send_mad
ib_dealloc_device
ib_dealloc_pd
ib_dereg_mr
ib_destroy_cq
ib_destroy_qp
ib_dispatch_event
ib_find_cached_pkey
ib_free_cq
ib_free_send_mad
ib_get_cached_pkey
ib_get_eth_speed
ib_get_gids_from_rdma_hdr
ib_get_rdma_header_version
ib_modify_qp
ib_modify_qp_is_ok
ib_post_send_mad
ib_process_cq_direct
ib_query_pkey
ib_query_port
ib_query_qp
ib_register_device
ib_register_mad_agent
ib_sa_cancel_query
ib_sa_guid_info_rec_query
ib_sa_register_client
ib_sa_unregister_client
ib_sg_to_pages
ib_ud_header_init
ib_ud_header_pack
ib_ud_ip4_csum
ib_umem_copy_from
ib_umem_get
ib_umem_odp_map_dma_pages
ib_umem_odp_unmap_dma_pages
ib_umem_page_count
ib_umem_release
ib_unregister_device
ib_unregister_mad_agent
ib_uverbs_get_ucontext
ida_alloc_range
ida_destroy
ida_free
idr_alloc
idr_alloc_cyclic
idr_alloc_u32
idr_destroy
idr_find
idr_for_each
idr_get_next
idr_get_next_ul
idr_preload
idr_remove
idr_replace
in4_pton
in6_pton
in_egroup_p
__inet6_lookup_established
__inet_lookup_established
in_group_p
init_net
__init_rwsem
init_srcu_struct
init_task
init_timer_key
init_uts_ns
init_wait_entry
__init_waitqueue_head
interval_tree_insert
interval_tree_iter_first
interval_tree_iter_next
interval_tree_remove
int_to_scsilun
invalidate_partition
iomem_resource
iommu_get_domain_for_dev
iommu_iova_to_phys
iommu_map
iommu_unmap
__ioremap
ioremap_cache
io_schedule
io_schedule_timeout
__iounmap
__iowrite32_copy
__iowrite64_copy
ip6_dst_hoplimit
ip_compute_csum
ipmi_create_user
ipmi_destroy_user
ipmi_free_recv_msg
ipmi_poll_interface
ipmi_request_settime
ipmi_set_gets_events
ipmi_set_my_address
ipmi_smi_msg_received
ipmi_unregister_smi
ipmi_validate_addr
ip_route_output_flow
ip_send_check
__ipv6_addr_type
ipv6_ext_hdr
ipv6_find_hdr
ipv6_skip_exthdr
ipv6_stub
irq_cpu_rmap_add
irq_create_mapping
__irq_domain_add
irq_domain_remove
irq_find_mapping
irq_get_irq_data
irq_poll_complete
irq_poll_disable
irq_poll_enable
irq_poll_init
irq_poll_sched
irq_set_affinity_hint
irq_set_affinity_notifier
irq_set_chip_and_handler_name
irq_to_desc
is_acpi_device_node
jiffies
jiffies_64
jiffies_to_msecs
jiffies_to_timespec64
jiffies_to_usecs
kallsyms_lookup_name
kasprintf
kernel_recvmsg
kernel_sendmsg
kernel_setsockopt
kfree
kfree_call_rcu
kfree_const
kfree_skb
kgdb_active
kgdb_breakpoint
kill_fasync
kimage_voffset
__kmalloc
kmalloc_caches
__kmalloc_node
kmalloc_order_trace
kmem_cache_alloc
kmem_cache_alloc_node
kmem_cache_alloc_node_trace
kmem_cache_alloc_trace
kmem_cache_create
kmem_cache_create_usercopy
kmem_cache_destroy
kmem_cache_free
kmem_cache_shrink
kmemdup
kobject_add
kobject_create_and_add
kobject_del
kobject_get
kobject_init
kobject_init_and_add
kobject_put
kobject_set_name
kobject_uevent
kobject_uevent_env
krealloc
kset_create_and_add
kset_find_obj
kset_register
kset_unregister
ksize
kstrdup
kstrdup_const
kstrndup
kstrtobool
kstrtobool_from_user
kstrtoint
kstrtoint_from_user
kstrtoll
kstrtoll_from_user
kstrtou16
kstrtouint
kstrtouint_from_user
kstrtoul_from_user
kstrtoull
kstrtoull_from_user
kthread_bind
kthread_create_on_node
kthread_park
kthread_should_stop
kthread_stop
kthread_unpark
ktime_get
ktime_get_coarse_real_ts64
ktime_get_raw
ktime_get_raw_ts64
ktime_get_real_seconds
ktime_get_real_ts64
ktime_get_seconds
ktime_get_ts64
ktime_get_with_offset
kvasprintf
kvfree
kvmalloc_node
kzfree
led_classdev_resume
led_classdev_suspend
led_classdev_unregister
__list_add_valid
__list_del_entry_valid
llist_add_batch
__ll_sc_atomic64_add
__ll_sc_atomic64_add_return
__ll_sc_atomic64_andnot
__ll_sc_atomic64_fetch_add
__ll_sc_atomic64_fetch_andnot
__ll_sc_atomic64_fetch_andnot_release
__ll_sc_atomic64_fetch_or
__ll_sc_atomic64_fetch_or_acquire
__ll_sc_atomic64_fetch_xor
__ll_sc_atomic64_or
__ll_sc_atomic64_sub
__ll_sc_atomic64_sub_return
__ll_sc_atomic_add
__ll_sc_atomic_add_return
__ll_sc_atomic_add_return_acquire
__ll_sc_atomic_sub
__ll_sc_atomic_sub_return
__ll_sc_atomic_sub_return_release
__ll_sc___cmpxchg_case_acq_4
__ll_sc___cmpxchg_case_mb_4
__ll_sc___cmpxchg_case_mb_8
__local_bh_enable_ip
__lock_page
lock_page_memcg
lockref_get
lock_sock_nested
logic_inw
logic_outw
make_kgid
make_kuid
mark_page_accessed
match_strdup
match_string
match_token
_mcount
mdev_dev
mdev_from_dev
mdev_get_drvdata
mdev_parent_dev
mdev_register_device
mdev_register_driver
mdev_set_drvdata
mdev_unregister_device
mdev_unregister_driver
mdio45_probe
mdiobus_alloc_size
mdiobus_free
mdiobus_get_phy
mdiobus_read
__mdiobus_register
mdiobus_unregister
mdiobus_write
mdio_mii_ioctl
memchr
memchr_inv
memcmp
memcpy
__memcpy_fromio
__memcpy_toio
memdup_user
memdup_user_nul
memmove
memory_read_from_buffer
memparse
mempool_alloc
mempool_alloc_slab
mempool_create
mempool_create_node
mempool_destroy
mempool_free
mempool_free_slab
mempool_kfree
mempool_kmalloc
memscan
mem_section
memset
__memset_io
memstart_addr
memzero_explicit
metadata_dst_alloc
misc_deregister
misc_register
mlxfw_firmware_flash
mmput
__mmu_notifier_register
mmu_notifier_register
mmu_notifier_unregister
mmu_notifier_unregister_no_release
mod_delayed_work_on
mod_timer
mod_timer_pending
__module_get
module_layout
module_put
module_refcount
__msecs_to_jiffies
msleep
msleep_interruptible
__mutex_init
mutex_lock
mutex_lock_interruptible
mutex_lock_killable
mutex_trylock
mutex_unlock
__napi_alloc_skb
napi_complete_done
napi_consume_skb
napi_disable
napi_gro_flush
napi_gro_receive
napi_hash_del
__napi_schedule
__napi_schedule_irqoff
napi_schedule_prep
__ndelay
ndo_dflt_bridge_getlink
ndo_dflt_fdb_add
neigh_destroy
__neigh_event_send
neigh_lookup
netdev_alloc_frag
__netdev_alloc_skb
netdev_bind_sb_channel_queue
netdev_crit
netdev_err
netdev_features_change
netdev_info
netdev_lower_get_next
netdev_master_upper_dev_get
netdev_master_upper_dev_get_rcu
netdev_notice
netdev_printk
netdev_reset_tc
netdev_rss_key_fill
netdev_rx_handler_register
netdev_rx_handler_unregister
netdev_set_num_tc
netdev_set_sb_channel
netdev_set_tc_queue
netdev_unbind_sb_channel
netdev_update_features
netdev_walk_all_upper_dev_rcu
netdev_warn
netif_carrier_off
netif_carrier_on
netif_device_attach
netif_device_detach
netif_get_num_default_rss_queues
netif_napi_add
netif_napi_del
netif_receive_skb
netif_rx
netif_schedule_queue
netif_set_real_num_rx_queues
netif_set_real_num_tx_queues
netif_set_xps_queue
netif_tx_stop_all_queues
netif_tx_wake_queue
netlink_ack
netlink_broadcast
__netlink_kernel_create
netlink_kernel_release
netlink_unicast
net_ratelimit
nla_find
nla_parse
nla_put
__nlmsg_put
node_data
__node_distance
node_states
node_to_cpumask_map
no_llseek
noop_llseek
nr_cpu_ids
nr_node_ids
nsecs_to_jiffies
ns_to_timespec
ns_to_timespec64
ns_to_timeval
numa_node
nvme_alloc_request
nvme_cancel_request
nvme_change_ctrl_state
nvme_cleanup_cmd
nvme_complete_async_event
nvme_complete_rq
nvme_disable_ctrl
nvme_enable_ctrl
nvme_fc_register_localport
nvme_fc_register_remoteport
nvme_fc_set_remoteport_devloss
nvme_fc_unregister_localport
nvme_fc_unregister_remoteport
nvme_init_ctrl
nvme_init_identify
nvme_io_timeout
nvme_kill_queues
nvme_remove_namespaces
nvme_reset_ctrl
nvme_reset_ctrl_sync
nvme_set_queue_count
nvme_setup_cmd
nvme_shutdown_ctrl
nvme_start_ctrl
nvme_start_freeze
nvme_start_queues
nvme_stop_ctrl
nvme_stop_queues
nvme_submit_sync_cmd
nvmet_fc_rcv_fcp_abort
nvmet_fc_rcv_fcp_req
nvmet_fc_rcv_ls_req
nvmet_fc_register_targetport
nvmet_fc_unregister_targetport
nvme_unfreeze
nvme_uninit_ctrl
nvme_wait_freeze
nvme_wait_freeze_timeout
nvme_wq
of_led_classdev_register
on_each_cpu
orderly_poweroff
out_of_line_wait_on_bit
override_creds
__page_file_index
__page_frag_cache_drain
page_frag_free
__page_mapcount
page_mapped
page_pool_alloc_pages
page_pool_create
page_pool_destroy
__page_pool_put_page
pagevec_lookup_range
pagevec_lookup_range_tag
__pagevec_release
panic
panic_notifier_list
param_array_ops
param_get_int
param_ops_bool
param_ops_byte
param_ops_charp
param_ops_int
param_ops_long
param_ops_short
param_ops_string
param_ops_uint
param_ops_ullong
param_ops_ulong
param_ops_ushort
param_set_bool
param_set_int
pci_alloc_irq_vectors_affinity
pci_assign_unassigned_bus_resources
pcibios_resource_to_bus
pci_bus_read_config_dword
pci_bus_resource_n
pci_bus_type
pci_cfg_access_lock
pci_cfg_access_unlock
pci_check_and_mask_intx
pci_choose_state
pci_cleanup_aer_uncorrect_error_status
pci_clear_master
pci_clear_mwi
pci_d3cold_disable
pci_dev_driver
pci_dev_get
pci_device_is_present
pci_dev_present
pci_dev_put
pci_disable_device
pci_disable_link_state
pci_disable_msi
pci_disable_msix
pci_disable_pcie_error_reporting
pci_disable_rom
pci_disable_sriov
pcie_bandwidth_available
pcie_capability_clear_and_set_word
pcie_capability_read_dword
pcie_capability_read_word
pcie_capability_write_word
pcie_flr
pcie_get_speed_cap
pcie_get_width_cap
pci_enable_atomic_ops_to_root
pci_enable_device
pci_enable_device_mem
pci_enable_msi
pci_enable_msix_range
pci_enable_pcie_error_reporting
pci_enable_rom
pci_enable_sriov
pci_enable_wake
pcie_print_link_status
pcie_relaxed_ordering_enabled
pcie_set_readrq
pci_find_capability
pci_find_ext_capability
pci_free_irq
pci_free_irq_vectors
pci_get_class
pci_get_device
pci_get_domain_bus_and_slot
pci_get_slot
pci_ignore_hotplug
pci_intx
pci_iomap
pci_ioremap_bar
pci_irq_get_affinity
pci_irq_vector
pci_map_rom
pci_match_id
pcim_enable_device
pcim_iomap
pcim_iomap_regions
pcim_iomap_table
pcim_iounmap
pci_msi_mask_irq
pci_msi_unmask_irq
pci_num_vf
pci_platform_rom
pci_prepare_to_sleep
pci_read_config_byte
pci_read_config_dword
pci_read_config_word
pci_read_vpd
__pci_register_driver
pci_release_regions
pci_release_resource
pci_release_selected_regions
pci_request_irq
pci_request_regions
pci_request_selected_regions
pci_rescan_bus
pci_resize_resource
pci_restore_state
pci_save_state
pci_select_bars
pci_set_master
pci_set_mwi
pci_set_power_state
pci_sriov_configure_simple
pci_sriov_get_totalvfs
pci_sriov_set_totalvfs
pci_stop_and_remove_bus_device
pci_stop_and_remove_bus_device_locked
pci_try_set_mwi
pci_unmap_rom
pci_unregister_driver
pci_vfs_assigned
pci_vpd_find_info_keyword
pci_vpd_find_tag
pci_wait_for_pending_transaction
pci_wake_from_d3
pci_write_config_byte
pci_write_config_dword
pci_write_config_word
pcix_set_mmrbc
PDE_DATA
__per_cpu_offset
perf_trace_buf_alloc
perf_trace_run_bpf_submit
pfn_valid
phy_attached_info
phy_connect
phy_connect_direct
phy_disconnect
phy_ethtool_ksettings_get
phy_ethtool_ksettings_set
phy_loopback
phy_mii_ioctl
phy_resume
phy_start
phy_start_aneg
phy_stop
phy_suspend
pid_task
pid_vnr
pm_power_off
pm_runtime_allow
__pm_runtime_disable
pm_runtime_enable
pm_runtime_forbid
__pm_runtime_idle
__pm_runtime_resume
pm_runtime_set_autosuspend_delay
__pm_runtime_set_status
__pm_runtime_suspend
__pm_runtime_use_autosuspend
pm_schedule_suspend
power_supply_is_system_supplied
prandom_bytes
prandom_u32
prepare_creds
prepare_to_wait
prepare_to_wait_event
prepare_to_wait_exclusive
print_hex_dump
printk
__printk_ratelimit
print_stack_trace
proc_create_data
proc_mkdir
__pskb_copy_fclone
pskb_expand_head
__pskb_pull_tail
___pskb_trim
ptp_clock_event
ptp_clock_index
ptp_clock_register
ptp_clock_unregister
ptp_find_pin
__put_cred
put_device
put_disk
__put_page
__put_task_struct
put_unused_fd
qed_get_eth_ops
qed_put_eth_ops
queue_delayed_work_on
queued_read_lock_slowpath
queued_spin_lock_slowpath
queued_write_lock_slowpath
queue_work_on
radix_tree_delete
radix_tree_gang_lookup
__radix_tree_insert
radix_tree_lookup
radix_tree_tagged
raid_class_attach
raid_class_release
___ratelimit
raw_notifier_call_chain
raw_notifier_chain_register
raw_notifier_chain_unregister
rb_erase
__rb_erase_color
rb_first
rb_first_postorder
__rb_insert_augmented
rb_insert_color
rb_next
rb_next_postorder
rb_replace_node
rbt_ib_umem_for_each_in_range
rbt_ib_umem_lookup
rcu_barrier
rdma_create_ah
rdma_destroy_ah
rdma_is_zero_gid
rdma_port_get_link_layer
rdma_query_ah
rdma_query_gid
rdma_restrack_get
rdma_restrack_put
rdma_roce_rescan_device
read_cache_pages
recalc_sigpending
refcount_dec_and_mutex_lock
refcount_dec_and_test_checked
refcount_dec_checked
refcount_inc_checked
refcount_inc_not_zero_checked
register_acpi_notifier
register_blkdev
__register_chrdev
register_chrdev_region
register_fib_notifier
register_inet6addr_notifier
register_inetaddr_notifier
register_netdev
register_netdevice_notifier
register_netevent_notifier
register_reboot_notifier
release_firmware
release_pages
__release_region
release_sock
remap_pfn_range
remove_conflicting_framebuffers
remove_proc_entry
remove_wait_queue
request_firmware
request_firmware_direct
request_firmware_nowait
__request_module
__request_region
request_threaded_irq
reservation_object_add_excl_fence
reservation_object_add_shared_fence
reservation_object_get_fences_rcu
reservation_object_reserve_shared
reservation_object_wait_timeout_rcu
reservation_ww_class
reset_devices
revert_creds
rhashtable_destroy
rhashtable_free_and_destroy
rhashtable_init
rhashtable_insert_slow
rhashtable_walk_enter
rhashtable_walk_exit
rhashtable_walk_next
rhashtable_walk_start_check
rhashtable_walk_stop
rhltable_init
rht_bucket_nested
rht_bucket_nested_insert
round_jiffies
round_jiffies_relative
rps_may_expire_flow
rtc_time64_to_tm
rtnl_is_locked
rtnl_lock
rtnl_trylock
rtnl_unlock
sas_attach_transport
sas_disable_tlr
sas_enable_tlr
sas_end_device_alloc
sas_expander_alloc
sas_is_tlr_enabled
sas_phy_add
sas_phy_alloc
sas_phy_free
sas_port_add
sas_port_add_phy
sas_port_alloc_num
sas_port_delete
sas_port_delete_phy
sas_port_free
sas_read_port_mode_page
sas_release_transport
sas_remove_host
sas_rphy_add
save_stack_trace
save_stack_trace_tsk
sbitmap_queue_clear
__sbitmap_queue_get
sched_clock
sched_setscheduler
schedule
schedule_hrtimeout
schedule_hrtimeout_range
schedule_timeout
schedule_timeout_interruptible
schedule_timeout_uninterruptible
scmd_printk
scnprintf
scsi_add_device
scsi_add_host_with_dma
scsi_block_requests
scsi_build_sense_buffer
scsi_change_queue_depth
scsi_device_get
scsi_device_lookup
scsi_device_put
scsi_device_set_state
scsi_device_type
scsi_dma_map
scsi_dma_unmap
__scsi_execute
scsi_get_vpd_page
scsi_host_alloc
scsi_host_busy
scsi_host_get
scsi_host_lookup
scsi_host_put
scsi_internal_device_block_nowait
scsi_internal_device_unblock_nowait
scsi_is_fc_rport
scsi_is_host_device
scsi_is_sdev_device
__scsi_iterate_devices
scsilun_to_int
scsi_normalize_sense
scsi_print_command
scsi_register_driver
scsi_remove_device
scsi_remove_host
scsi_remove_target
scsi_sanitize_inquiry_string
scsi_scan_host
scsi_unblock_requests
sdev_prefix_printk
secpath_dup
security_d_instantiate
send_sig
seq_lseek
seq_open
seq_printf
seq_putc
seq_puts
seq_read
seq_release
seq_write
set_cpus_allowed_ptr
set_current_groups
set_device_ro
set_disk_ro
set_freezable
set_normalized_timespec64
set_page_dirty
set_page_dirty_lock
set_user_nice
sg_alloc_table_from_pages
sg_copy_from_buffer
sg_copy_to_buffer
sg_free_table
sg_init_table
sg_miter_next
sg_miter_start
sg_miter_stop
sg_next
sigprocmask
si_meminfo
simple_attr_open
simple_attr_read
simple_attr_release
simple_attr_write
simple_open
simple_read_from_buffer
simple_strtol
simple_strtoul
simple_strtoull
simple_write_to_buffer
single_open
single_release
skb_add_rx_frag
skb_checksum
skb_checksum_help
skb_clone
skb_clone_tx_timestamp
skb_copy
skb_copy_bits
skb_copy_expand
skb_dequeue
__skb_flow_dissect
__skb_get_hash
__skb_gso_segment
skb_gso_validate_mac_len
__skb_pad
skb_pull
skb_push
skb_put
skb_queue_purge
skb_queue_tail
skb_realloc_headroom
skb_store_bits
skb_trim
skb_tstamp_tx
skb_vlan_pop
sk_free
smp_call_function_many
smp_call_function_single
snprintf
sock_create_kern
sock_edemux
sock_queue_err_skb
sock_release
softnet_data
sort
sprintf
srcu_barrier
__srcu_read_lock
__srcu_read_unlock
sscanf
__stack_chk_fail
__stack_chk_guard
starget_for_each_device
strcasecmp
strcat
strchr
strcmp
strcpy
strcspn
strim
strlcat
strlcpy
strlen
strncasecmp
strncat
strncmp
strncpy
strncpy_from_user
strnlen
strnstr
strpbrk
strrchr
strsep
strspn
strstr
submit_bio
__sw_hweight32
__sw_hweight64
__sw_hweight8
swiotlb_nr_tbl
switchdev_port_same_parent_id
__symbol_put
sync_file_create
synchronize_irq
synchronize_net
synchronize_sched
synchronize_srcu
sysfs_add_file_to_group
sysfs_create_bin_file
sysfs_create_file_ns
sysfs_create_group
sysfs_remove_bin_file
sysfs_remove_file_from_group
sysfs_remove_file_ns
sysfs_remove_group
sysfs_streq
system_state
system_unbound_wq
system_wq
sys_tz
task_active_pid_ns
tasklet_init
tasklet_kill
__tasklet_schedule
__task_pid_nr_ns
tcf_block_cb_register
tcf_block_cb_unregister
tcp_gro_complete
tcp_hashinfo
tc_setup_cb_egdev_register
tc_setup_cb_egdev_unregister
time64_to_tm
timecounter_cyc2time
timecounter_init
timecounter_read
tls_get_record
tls_validate_xmit_skb
to_drm_sched_fence
trace_define_field
trace_event_buffer_commit
trace_event_buffer_reserve
trace_event_ignore_this_pid
trace_event_raw_init
trace_event_reg
trace_handle_return
__tracepoint_dma_fence_emit
__tracepoint_xdp_exception
trace_print_array_seq
trace_print_flags_seq
trace_raw_output_prep
trace_seq_printf
trace_seq_putc
try_module_get
try_wait_for_completion
ttm_bo_add_to_lru
ttm_bo_clean_mm
ttm_bo_del_sub_from_lru
ttm_bo_device_init
ttm_bo_device_release
ttm_bo_dma_acc_size
ttm_bo_eviction_valuable
ttm_bo_evict_mm
ttm_bo_global_init
ttm_bo_global_release
ttm_bo_init
ttm_bo_init_mm
ttm_bo_init_reserved
ttm_bo_kmap
ttm_bo_kunmap
ttm_bo_lock_delayed_workqueue
ttm_bo_manager_func
ttm_bo_mem_put
ttm_bo_mem_space
ttm_bo_mmap
ttm_bo_move_accel_cleanup
ttm_bo_move_memcpy
ttm_bo_move_to_lru_tail
ttm_bo_move_ttm
ttm_bo_pipeline_move
ttm_bo_put
ttm_bo_unlock_delayed_workqueue
ttm_bo_validate
ttm_bo_wait
ttm_dma_page_alloc_debugfs
ttm_dma_populate
ttm_dma_tt_fini
ttm_dma_tt_init
ttm_dma_unpopulate
ttm_eu_backoff_reservation
ttm_eu_fence_buffer_objects
ttm_eu_reserve_buffers
ttm_fbdev_mmap
ttm_mem_global_init
ttm_mem_global_release
ttm_page_alloc_debugfs
ttm_pool_populate
ttm_pool_unpopulate
ttm_populate_and_map_pages
ttm_sg_tt_init
ttm_tt_bind
ttm_tt_set_placement_caching
ttm_unmap_and_unpopulate_pages
__udelay
udp4_hwcsum
uio_event_notify
__uio_register_device
uio_unregister_device
unlock_page
unlock_page_memcg
unmap_mapping_range
unregister_acpi_notifier
unregister_blkdev
__unregister_chrdev
unregister_chrdev_region
unregister_fib_notifier
unregister_inet6addr_notifier
unregister_inetaddr_notifier
unregister_netdev
unregister_netdevice_notifier
unregister_netevent_notifier
unregister_reboot_notifier
unuse_mm
up
up_read
up_write
__usecs_to_jiffies
use_mm
usleep_range
_uverbs_alloc
uverbs_copy_to
uverbs_destroy_def_handler
uverbs_fd_class
uverbs_get_flags32
uverbs_get_flags64
uverbs_idr_class
vfree
vga_client_register
vlan_dev_real_dev
vlan_dev_vlan_id
vlan_dev_vlan_proto
vmalloc
__vmalloc
vmalloc_node
vmalloc_to_page
vmap
vm_insert_page
vm_mmap
vm_munmap
vprintk
vscnprintf
vsnprintf
vsprintf
vunmap
vzalloc
vzalloc_node
wait_for_completion
wait_for_completion_interruptible
wait_for_completion_interruptible_timeout
wait_for_completion_io_timeout
wait_for_completion_killable
wait_for_completion_timeout
wait_on_page_bit
__wake_up
wake_up_bit
__wake_up_locked
wake_up_process
__warn_printk
work_busy
write_cache_pages
ww_mutex_lock
ww_mutex_lock_interruptible
ww_mutex_unlock
xdp_do_flush_map
xdp_do_redirect
xdp_return_frame
xdp_return_frame_rx_napi
xdp_rxq_info_is_reg
xdp_rxq_info_reg
xdp_rxq_info_reg_mem_model
xdp_rxq_info_unreg
xdp_rxq_info_unused
xfrm_replay_seqhi
xz_dec_end
xz_dec_init
xz_dec_run
yield
zap_vma_ptes
zlib_inflate
zlib_inflateEnd
zlib_inflateInit2
zlib_inflate_workspacesize
Appendix 2: Initial draft of the x86 platform KABI whitelist (2228 symbols)
acpi_bus_get_device
acpi_bus_register_driver
acpi_bus_unregister_driver
acpi_check_dsm
acpi_dev_found
acpi_disabled
acpi_dma_configure
acpi_evaluate_dsm
acpi_evaluate_integer
acpi_evaluate_object
acpi_format_exception
acpi_gbl_FADT
acpi_get_devices
acpi_get_handle
acpi_get_name
acpi_get_table
acpi_gsi_to_irq
acpi_handle_printk
acpi_has_method
acpi_install_notify_handler
acpi_lid_open
acpi_os_map_memory
acpi_os_unmap_memory
acpi_register_gsi
acpi_remove_notify_handler
acpi_unregister_gsi
acpi_video_get_edid
acpi_walk_namespace
address_space_init_once
add_timer
add_wait_queue
add_wait_queue_exclusive
admin_timeout
alloc_chrdev_region
alloc_cpumask_var
alloc_cpu_rmap
__alloc_disk_node
alloc_etherdev_mqs
alloc_netdev_mqs
alloc_pages_current
__alloc_pages_nodemask
__alloc_percpu
__alloc_skb
__alloc_workqueue_key
anon_inode_getfile
apic
arch_dma_alloc_attrs
arch_io_free_memtype_wc
arch_io_reserve_memtype_wc
arch_phys_wc_add
arch_phys_wc_del
arch_wb_cache_pmem
arp_tbl
async_schedule
_atomic_dec_and_lock
atomic_notifier_call_chain
atomic_notifier_chain_register
atomic_notifier_chain_unregister
attribute_container_find_class_device
autoremove_wake_function
backlight_device_register
backlight_device_unregister
backlight_force_update
bdevname
bdev_read_only
bdget_disk
_bin2bcd
bio_add_page
bio_alloc_bioset
bio_clone_fast
bio_endio
bio_free_pages
bio_init
bio_put
bioset_exit
bioset_init
__bitmap_and
__bitmap_andnot
__bitmap_clear
__bitmap_complement
__bitmap_equal
bitmap_find_free_region
bitmap_find_next_zero_area_off
__bitmap_intersects
__bitmap_or
__bitmap_parse
bitmap_parselist
bitmap_print_to_pagebuf
bitmap_release_region
__bitmap_set
__bitmap_shift_left
__bitmap_shift_right
__bitmap_subset
__bitmap_weight
__bitmap_xor
bitmap_zalloc
bit_wait
blk_alloc_queue
blk_check_plugged
blk_cleanup_queue
blkdev_get_by_path
blkdev_issue_discard
blkdev_issue_write_same
blkdev_issue_zeroout
blkdev_put
blk_execute_rq
blk_execute_rq_nowait
blk_finish_plug
blk_get_queue
blk_get_request
blk_init_tags
blk_mq_alloc_tag_set
blk_mq_complete_request
blk_mq_end_request
blk_mq_free_request
blk_mq_free_tag_set
blk_mq_init_queue
blk_mq_map_queues
blk_mq_pci_map_queues
blk_mq_quiesce_queue
blk_mq_run_hw_queues
blk_mq_start_request
blk_mq_tagset_busy_iter
blk_mq_tag_to_rq
blk_mq_unique_tag
blk_mq_unquiesce_queue
blk_mq_update_nr_hw_queues
blk_put_queue
blk_put_request
blk_queue_bounce_limit
blk_queue_dma_alignment
blk_queue_flag_clear
blk_queue_flag_set
blk_queue_free_tags
blk_queue_init_tags
blk_queue_io_min
blk_queue_io_opt
blk_queue_logical_block_size
blk_queue_make_request
blk_queue_max_discard_sectors
blk_queue_max_hw_sectors
blk_queue_max_segments
blk_queue_max_segment_size
blk_queue_max_write_same_sectors
blk_queue_physical_block_size
blk_queue_rq_timeout
blk_queue_segment_boundary
blk_queue_split
blk_queue_stack_limits
blk_queue_update_dma_alignment
blk_queue_virt_boundary
blk_queue_write_cache
blk_rq_append_bio
blk_rq_count_integrity_sg
blk_rq_map_integrity_sg
blk_rq_map_kern
blk_rq_map_sg
blk_rq_map_user_iov
blk_rq_unmap_user
blk_set_stacking_limits
blk_start_plug
blk_status_to_errno
blk_verify_command
blocking_notifier_call_chain
blocking_notifier_chain_register
blocking_notifier_chain_unregister
boot_cpu_data
bpf_prog_add
bpf_prog_inc
bpf_prog_put
bpf_trace_run1
bpf_trace_run2
bpf_trace_run3
bpf_trace_run5
bpf_warn_invalid_xdp_action
bsg_job_done
btree_destroy
btree_geo32
btree_geo64
btree_get_prev
btree_init
btree_insert
btree_last
btree_lookup
btree_remove
btree_update
build_skb
bus_find_device_by_name
__cachemode2pte_tbl
call_netdevice_notifiers
call_rcu_sched
call_usermodehelper
cancel_delayed_work
cancel_delayed_work_sync
cancel_work_sync
capable
cdev_add
cdev_del
cdev_device_add
cdev_device_del
cdev_init
__chash_table_copy_in
__chash_table_copy_out
__check_object_size
__class_create
class_create_file_ns
class_destroy
__class_register
class_remove_file_ns
class_unregister
_cleanup_srcu_struct
clear_user
clk_get_rate
cm_class
cnic_register_driver
cnic_unregister_driver
commit_creds
compat_alloc_user_space
complete
complete_all
complete_and_exit
completion_done
component_add
component_del
_cond_resched
configfs_register_subsystem
configfs_remove_default_groups
configfs_unregister_subsystem
config_group_init
config_group_init_type_name
config_item_put
console_lock
console_unlock
__const_udelay
consume_skb
_copy_from_user
_copy_to_user
copy_user_enhanced_fast_string
copy_user_generic_string
copy_user_generic_unrolled
cpu_bit_bitmap
cpu_core_map
cpufreq_get
cpufreq_quick_get
__cpuhp_remove_state
__cpuhp_setup_state
cpu_info
cpu_khz
cpumask_local_spread
cpumask_next
cpumask_next_and
cpu_number
__cpu_online_mask
__cpu_possible_mask
__cpu_present_mask
cpu_sibling_map
cpus_read_lock
cpus_read_unlock
crc32c
crc32_le
crc8
crc8_populate_msb
crc_t10dif
crypto_ahash_digest
crypto_ahash_final
crypto_ahash_setkey
crypto_alloc_ahash
crypto_destroy_tfm
crypto_register_shash
crypto_unregister_shash
csum_ipv6_magic
csum_partial
_ctype
current_task
dca3_get_tag
dca_add_requester
dca_register_notify
dca_remove_requester
dca_unregister_notify
dcb_getapp
dcb_ieee_delapp
dcb_ieee_getapp_mask
dcb_ieee_setapp
dcbnl_cee_notify
dcbnl_ieee_notify
dcb_setapp
debugfs_create_atomic_t
debugfs_create_dir
debugfs_create_file
debugfs_create_u32
debugfs_create_u64
debugfs_create_u8
debugfs_lookup
debugfs_remove
__default_kernel_pte_mask
default_llseek
default_wake_function
__delay
delayed_work_timer_fn
del_gendisk
del_timer
del_timer_sync
destroy_workqueue
dev_add_pack
dev_addr_add
dev_addr_del
dev_base_lock
dev_close
dev_driver_string
_dev_err
__dev_get_by_index
dev_get_by_index
dev_get_by_index_rcu
dev_get_by_name
dev_get_iflink
dev_get_stats
device_add_disk
device_create
device_create_file
device_destroy
device_initialize
device_link_add
device_release_driver
device_remove_file
device_reprobe
device_set_wakeup_capable
device_set_wakeup_enable
_dev_info
__dev_kfree_skb_any
__dev_kfree_skb_irq
devlink_alloc
devlink_free
devlink_param_driverinit_value_get
devlink_param_driverinit_value_set
devlink_params_register
devlink_params_unregister
devlink_param_value_changed
devlink_port_attrs_set
devlink_port_register
devlink_port_type_clear
devlink_port_type_ib_set
devlink_port_unregister
devlink_region_create
devlink_region_destroy
devlink_region_shapshot_id_get
devlink_region_snapshot_create
devlink_register
devlink_unregister
devmap_managed_key
dev_mc_add
dev_mc_add_excl
dev_mc_del
devm_free_irq
devm_hwmon_device_register_with_groups
devm_ioremap
devm_iounmap
devm_kfree
devm_kmalloc
devm_kmemdup
devm_request_threaded_irq
_dev_notice
dev_printk
dev_queue_xmit
__dev_remove_pack
dev_remove_pack
dev_set_mac_address
dev_set_mtu
dev_set_name
dev_set_promiscuity
dev_trans_start
dev_uc_add
dev_uc_add_excl
dev_uc_del
_dev_warn
disable_irq
disable_irq_nosync
dma_fence_add_callback
dma_fence_array_create
dma_fence_context_alloc
dma_fence_free
dma_fence_get_status
dma_fence_init
dma_fence_release
dma_fence_signal
dma_fence_signal_locked
dma_fence_wait_any_timeout
dma_fence_wait_timeout
dma_get_required_mask
dmam_alloc_coherent
dmam_free_coherent
dma_ops
dma_pool_alloc
dma_pool_create
dma_pool_destroy
dma_pool_free
dmi_check_system
dmi_get_system_info
dmi_match
__do_once_done
__do_once_start
do_wait_intr
down
downgrade_write
down_interruptible
down_read
down_read_trylock
down_timeout
down_trylock
down_write
down_write_killable
down_write_trylock
dput
dql_completed
dql_reset
drain_workqueue
driver_create_file
driver_for_each_device
driver_remove_file
drm_add_edid_modes
drm_add_modes_noedid
drm_atomic_add_affected_connectors
drm_atomic_add_affected_planes
drm_atomic_commit
drm_atomic_get_connector_state
drm_atomic_get_crtc_state
drm_atomic_get_plane_state
drm_atomic_helper_check
drm_atomic_helper_check_modeset
drm_atomic_helper_check_planes
drm_atomic_helper_check_plane_state
drm_atomic_helper_cleanup_planes
drm_atomic_helper_commit
drm_atomic_helper_commit_cleanup_done
drm_atomic_helper_commit_hw_done
__drm_atomic_helper_connector_destroy_state
drm_atomic_helper_connector_destroy_state
__drm_atomic_helper_connector_duplicate_state
__drm_atomic_helper_connector_reset
__drm_atomic_helper_crtc_destroy_state
__drm_atomic_helper_crtc_duplicate_state
drm_atomic_helper_disable_plane
drm_atomic_helper_legacy_gamma_set
drm_atomic_helper_page_flip
__drm_atomic_helper_plane_destroy_state
drm_atomic_helper_plane_destroy_state
__drm_atomic_helper_plane_duplicate_state
drm_atomic_helper_prepare_planes
drm_atomic_helper_resume
drm_atomic_helper_set_config
drm_atomic_helper_setup_commit
drm_atomic_helper_shutdown
drm_atomic_helper_suspend
drm_atomic_helper_swap_state
drm_atomic_helper_update_legacy_modeset_state
drm_atomic_helper_update_plane
drm_atomic_helper_wait_for_dependencies
drm_atomic_helper_wait_for_fences
drm_atomic_helper_wait_for_flip_done
drm_atomic_state_alloc
drm_atomic_state_default_clear
drm_atomic_state_default_release
__drm_atomic_state_free
drm_atomic_state_init
drm_calc_vbltimestamp_from_scanoutpos
drm_color_lut_extract
drm_compat_ioctl
drm_connector_attach_encoder
drm_connector_cleanup
drm_connector_init
drm_connector_list_iter_begin
drm_connector_list_iter_end
drm_connector_list_iter_next
drm_connector_register
drm_connector_set_path_property
drm_connector_unregister
drm_connector_update_edid_property
drm_crtc_accurate_vblank_count
drm_crtc_add_crc_entry
drm_crtc_arm_vblank_event
drm_crtc_cleanup
__drm_crtc_commit_free
drm_crtc_enable_color_mgmt
drm_crtc_force_disable_all
drm_crtc_from_index
drm_crtc_handle_vblank
drm_crtc_helper_set_config
drm_crtc_helper_set_mode
drm_crtc_init
drm_crtc_init_with_planes
drm_crtc_send_vblank_event
drm_crtc_vblank_count
drm_crtc_vblank_get
drm_crtc_vblank_off
drm_crtc_vblank_on
drm_crtc_vblank_put
drm_cvt_mode
drm_dbg
drm_debug
drm_debugfs_create_files
drm_detect_hdmi_monitor
drm_detect_monitor_audio
drm_dev_alloc
drm_dev_put
drm_dev_register
drm_dev_unref
drm_dev_unregister
drm_dp_atomic_find_vcpi_slots
drm_dp_atomic_release_vcpi_slots
drm_dp_aux_register
drm_dp_aux_unregister
drm_dp_bw_code_to_link_rate
drm_dp_calc_pbn_mode
drm_dp_channel_eq_ok
drm_dp_check_act_status
drm_dp_clock_recovery_ok
drm_dp_dpcd_read
drm_dp_dpcd_read_link_status
drm_dp_dpcd_write
drm_dp_find_vcpi_slots
drm_dp_get_adjust_request_pre_emphasis
drm_dp_get_adjust_request_voltage
drm_dp_link_rate_to_bw_code
drm_dp_link_train_channel_eq_delay
drm_dp_link_train_clock_recovery_delay
drm_dp_mst_allocate_vcpi
drm_dp_mst_deallocate_vcpi
drm_dp_mst_detect_port
drm_dp_mst_get_edid
drm_dp_mst_hpd_irq
drm_dp_mst_reset_vcpi_slots
drm_dp_mst_topology_mgr_destroy
drm_dp_mst_topology_mgr_init
drm_dp_mst_topology_mgr_resume
drm_dp_mst_topology_mgr_set_mst
drm_dp_mst_topology_mgr_suspend
drm_dp_update_payload_part1
drm_dp_update_payload_part2
drm_edid_header_is_valid
drm_edid_is_valid
drm_edid_to_sad
drm_edid_to_speaker_allocation
drm_encoder_cleanup
drm_encoder_init
drm_err
drm_fb_helper_add_one_connector
drm_fb_helper_alloc_fbi
drm_fb_helper_blank
drm_fb_helper_cfb_copyarea
drm_fb_helper_cfb_fillrect
drm_fb_helper_cfb_imageblit
drm_fb_helper_check_var
drm_fb_helper_debug_enter
drm_fb_helper_debug_leave
drm_fb_helper_fill_fix
drm_fb_helper_fill_var
drm_fb_helper_fini
drm_fb_helper_hotplug_event
drm_fb_helper_init
drm_fb_helper_initial_config
drm_fb_helper_ioctl
drm_fb_helper_lastclose
drm_fb_helper_output_poll_changed
drm_fb_helper_pan_display
drm_fb_helper_prepare
drm_fb_helper_remove_one_connector
drm_fb_helper_setcmap
drm_fb_helper_set_par
drm_fb_helper_set_suspend
drm_fb_helper_set_suspend_unlocked
drm_fb_helper_single_add_all_connectors
drm_fb_helper_unregister_fbi
drm_format_plane_cpp
drm_framebuffer_cleanup
drm_framebuffer_init
drm_framebuffer_unregister_private
drm_gem_dmabuf_kmap
drm_gem_dmabuf_kunmap
drm_gem_dmabuf_mmap
drm_gem_dmabuf_release
drm_gem_dmabuf_vmap
drm_gem_dmabuf_vunmap
drm_gem_fb_create_handle
drm_gem_fb_destroy
drm_gem_handle_create
drm_gem_handle_delete
drm_gem_map_attach
drm_gem_map_detach
drm_gem_map_dma_buf
drm_gem_object_free
drm_gem_object_init
drm_gem_object_lookup
drm_gem_object_put_unlocked
drm_gem_object_release
drm_gem_prime_export
drm_gem_prime_fd_to_handle
drm_gem_prime_handle_to_fd
drm_gem_prime_import
drm_gem_private_object_init
drm_gem_unmap_dma_buf
drm_get_edid
drm_get_edid_switcheroo
drm_get_format_name
drm_get_max_iomem
drm_global_item_ref
drm_global_item_unref
drm_handle_vblank
drm_hdmi_avi_infoframe_from_display_mode
drm_hdmi_vendor_infoframe_from_display_mode
drm_helper_connector_dpms
drm_helper_disable_unused_functions
drm_helper_hpd_irq_event
drm_helper_mode_fill_fb_struct
drm_helper_probe_single_connector_modes
drm_helper_resume_force_mode
drm_i2c_encoder_detect
drm_i2c_encoder_init
drm_i2c_encoder_mode_fixup
drm_i2c_encoder_restore
drm_i2c_encoder_save
drm_invalid_op
drm_ioctl
drm_irq_install
drm_irq_uninstall
drm_is_current_master
drm_kms_helper_hotplug_event
drm_kms_helper_is_poll_worker
drm_kms_helper_poll_disable
drm_kms_helper_poll_enable
drm_kms_helper_poll_fini
drm_kms_helper_poll_init
drm_match_cea_mode
drm_mm_init
drm_mm_insert_node_in_range
drm_mm_print
drm_mm_remove_node
drm_mm_takedown
drm_mode_config_cleanup
drm_mode_config_init
drm_mode_config_reset
drm_mode_copy
drm_mode_create_dvi_i_properties
drm_mode_create_scaling_mode_property
drm_mode_create_tv_properties
drm_mode_crtc_set_gamma_size
drm_mode_debug_printmodeline
drm_mode_destroy
drm_mode_duplicate
drm_mode_equal
drm_mode_get_hv_timing
drm_mode_is_420_only
drm_mode_legacy_fb_format
drm_mode_object_find
drm_mode_object_put
drm_mode_probed_add
drm_mode_set_crtcinfo
drm_modeset_lock
drm_modeset_lock_all
drm_modeset_lock_all_ctx
drm_mode_set_name
drm_modeset_unlock
drm_modeset_unlock_all
drm_mode_vrefresh
drm_object_attach_property
drm_object_property_set_value
drm_open
drm_plane_cleanup
drm_plane_create_alpha_property
drm_plane_create_color_properties
drm_plane_create_zpos_immutable_property
drm_plane_create_zpos_property
drm_plane_force_disable
drm_plane_init
drm_poll
drm_primary_helper_destroy
drm_primary_helper_funcs
drm_prime_gem_destroy
drm_prime_pages_to_sg
drm_prime_sg_to_page_addr_arrays
drm_printf
__drm_printfn_seq_file
drm_property_add_enum
drm_property_create
drm_property_create_enum
drm_property_create_range
__drm_puts_seq_file
drm_read
drm_release
drm_scdc_read
drm_scdc_write
drm_sched_dependency_optimized
drm_sched_entity_destroy
drm_sched_entity_fini
drm_sched_entity_flush
drm_sched_entity_init
drm_sched_entity_push_job
drm_sched_entity_set_rq
drm_sched_fini
drm_sched_hw_job_reset
drm_sched_init
drm_sched_job_init
drm_sched_job_recovery
drm_send_event_locked
drm_syncobj_create
drm_syncobj_find
drm_syncobj_find_fence
drm_syncobj_free
drm_syncobj_get_fd
drm_syncobj_get_handle
drm_syncobj_replace_fence
drm_universal_plane_init
drm_vblank_init
drm_vma_node_allow
drm_vma_node_is_allowed
drm_vma_node_revoke
dst_release
dump_stack
__dynamic_dev_dbg
__dynamic_netdev_dbg
__dynamic_pr_debug
efi
elfcorehdr_addr
emergency_restart
empty_zero_page
enable_irq
errno_to_blk_status
ether_setup
eth_get_headlen
eth_platform_get_mac_address
ethtool_convert_legacy_u32_to_link_mode
ethtool_convert_link_mode_to_legacy_u32
__ethtool_get_link_ksettings
ethtool_intersect_link_masks
ethtool_op_get_link
ethtool_op_get_ts_info
eth_type_trans
eth_validate_addr
event_triggers_call
ex_handler_default
ex_handler_refcount
fasync_helper
fc_attach_transport
fc_block_scsi_eh
fc_disc_config
fc_disc_init
fc_eh_host_reset
fc_eh_timed_out
fc_elsct_init
fc_elsct_send
fc_exch_init
fc_exch_mgr_alloc
fc_exch_mgr_free
fc_exch_mgr_list_clone
fc_exch_recv
fc_fabric_login
fc_fabric_logoff
_fc_frame_alloc
fc_frame_alloc_fill
fc_get_event_number
fc_get_host_port_state
fc_get_host_speed
fc_get_host_stats
fc_host_post_event
fc_host_post_vendor_event
fc_lport_bsg_request
fc_lport_config
fc_lport_destroy
fc_lport_flogi_resp
fc_lport_init
fc_lport_logo_resp
fc_lport_reset
fcoe_check_wait_queue
fcoe_clean_pending_queue
fcoe_ctlr_destroy
fcoe_ctlr_device_add
fcoe_ctlr_device_delete
fcoe_ctlr_els_send
fcoe_ctlr_get_lesb
fcoe_ctlr_init
fcoe_ctlr_link_down
fcoe_ctlr_link_up
fcoe_ctlr_recv
fcoe_ctlr_recv_flogi
fcoe_fc_crc
fcoe_fcf_get_selected
fcoe_get_lesb
fcoe_get_paged_crc_eof
fcoe_get_wwn
fcoe_link_speed_update
fcoe_queue_timer
fcoe_start_io
fcoe_transport_attach
fcoe_transport_detach
fcoe_validate_vport_create
fcoe_wwn_from_mac
fcoe_wwn_to_str
fc_release_transport
fc_remote_port_add
fc_remote_port_delete
fc_remote_port_rolechg
fc_remove_host
fc_rport_terminate_io
fc_set_mfs
fc_set_rport_loss_tmo
fc_slave_alloc
fc_vport_create
fc_vport_id_lookup
fc_vport_setlink
fc_vport_terminate
__fdget
fd_install
__fentry__
fget
__fib_lookup
fib_table_lookup
filemap_fault
filp_close
filp_open
find_first_bit
find_first_zero_bit
find_get_pid
find_last_bit
find_next_bit
find_next_zero_bit
find_pid_ns
find_vma
finish_wait
firmware_request_nowarn
fixed_size_llseek
flow_keys_dissector
flush_delayed_work
flush_signals
flush_work
flush_workqueue
follow_pfn
force_sig
fortify_panic
fput
free_cpumask_var
free_fib_info
free_irq
free_irq_cpu_rmap
free_netdev
__free_pages
free_pages
free_percpu
from_kgid
from_kuid
fs_bio_set
gcd
generate_random_uuid
generic_end_io_acct
generic_handle_irq
generic_make_request
generic_start_io_acct
genlmsg_put
genl_register_family
genl_unregister_family
get_device
__get_free_pages
get_gendisk
get_pid_task
get_random_bytes
__get_task_comm
get_task_mm
get_task_pid
get_unused_fd_flags
__get_user_2
__get_user_4
__get_user_8
get_user_pages
get_user_pages_remote
get_zeroed_page
groups_alloc
groups_free
handle_simple_irq
hdmi_avi_infoframe_pack
hdmi_infoframe_pack
hrtimer_cancel
hrtimer_forward
hrtimer_init
hrtimer_start_range_ns
hrtimer_try_to_cancel
__hw_addr_sync_dev
__hw_addr_unsync_dev
hwmon_device_register
hwmon_device_register_with_groups
hwmon_device_register_with_info
hwmon_device_unregister
i2c_add_adapter
i2c_bit_add_bus
i2c_bit_algo
i2c_del_adapter
i2c_new_device
i2c_smbus_read_byte_data
i2c_smbus_write_byte_data
i2c_transfer
i2c_unregister_device
__ib_alloc_cq
ib_alloc_device
ib_alloc_odp_umem
__ib_alloc_pd
ib_attach_mcast
ib_cache_gid_parse_type_str
ib_cache_gid_type_str
ib_cancel_mad
ib_cm_init_qp_attr
ib_cm_insert_listen
ib_cm_listen
ib_cm_notify
ibcm_reject_msg
ib_copy_path_rec_from_user
ib_copy_path_rec_to_user
ib_copy_qp_attr_to_user
ib_create_ah_from_wc
ib_create_cm_id
__ib_create_cq
ib_create_qp
ib_create_qp_security
ib_create_send_mad
ib_dealloc_device
ib_dealloc_pd
ib_dealloc_xrcd
ib_destroy_cm_id
ib_destroy_cq
ib_destroy_qp
ib_destroy_rwq_ind_table
ib_destroy_wq
ib_detach_mcast
ib_dispatch_event
ib_find_cached_pkey
ib_free_cq
ib_free_recv_mad
ib_free_send_mad
ib_get_cached_pkey
ib_get_cached_port_state
ib_get_eth_speed
ib_get_gids_from_rdma_hdr
ib_get_mad_data_offset
ib_get_net_dev_by_params
ib_get_rdma_header_version
ib_get_rmpp_segment
ib_init_ah_attr_from_path
ib_init_ah_attr_from_wc
ib_init_ah_from_mcmember
ib_is_mad_class_rmpp
ib_mad_kernel_rmpp_agent
ib_modify_mad
ib_modify_port
ib_modify_qp
ib_modify_qp_is_ok
ib_modify_qp_with_udata
ibnl_put_attr
ibnl_put_msg
ib_open_qp
ib_post_send_mad
ib_process_cq_direct
ib_query_pkey
ib_query_port
ib_query_qp
ib_query_srq
ib_rdmacg_try_charge
ib_rdmacg_uncharge
ib_register_client
ib_register_device
ib_register_event_handler
ib_register_mad_agent
ib_response_mad
ib_sa_cancel_query
ib_sa_free_multicast
ib_sa_get_mcmember_rec
ib_sa_guid_info_rec_query
ib_sa_join_multicast
ib_sa_path_rec_get
ib_sa_register_client
ib_sa_sendonly_fullmem_support
ib_sa_unregister_client
ib_send_cm_apr
ib_send_cm_drep
ib_send_cm_dreq
ib_send_cm_lap
ib_send_cm_mra
ib_send_cm_rej
ib_send_cm_rep
ib_send_cm_req
ib_send_cm_rtu
ib_send_cm_sidr_rep
ib_send_cm_sidr_req
ib_set_client_data
ib_sg_to_pages
ib_ud_header_init
ib_ud_header_pack
ib_ud_ip4_csum
ib_umem_copy_from
ib_umem_get
ib_umem_odp_map_dma_pages
ib_umem_odp_unmap_dma_pages
ib_umem_page_count
ib_umem_release
ib_unregister_client
ib_unregister_device
ib_unregister_event_handler
ib_unregister_mad_agent
ib_uverbs_get_ucontext
ib_wc_status_msg
ida_alloc_range
ida_destroy
ida_free
idr_alloc
idr_alloc_cyclic
idr_alloc_u32
idr_destroy
idr_find
idr_for_each
idr_get_next
idr_get_next_ul
idr_preload
idr_remove
idr_replace
igrab
in6_dev_finish_destroy
in_dev_finish_destroy
in_egroup_p
__inet6_lookup_established
inet_get_local_port_range
__inet_lookup_established
in_group_p
init_net
__init_rwsem
init_srcu_struct
init_task
init_timer_key
init_uts_ns
init_wait_entry
__init_waitqueue_head
interval_tree_insert
interval_tree_iter_first
interval_tree_iter_next
interval_tree_remove
int_to_scsilun
invalidate_partition
iomem_resource
iommu_get_domain_for_dev
iommu_iova_to_phys
iommu_map
iommu_unmap
ioread16
ioread16be
ioread32
ioread32be
ioread8
ioremap_cache
ioremap_nocache
ioremap_wc
io_schedule
io_schedule_timeout
iounmap
iowrite16
iowrite32
iowrite32be
__iowrite32_copy
__iowrite64_copy
iowrite8
ip6_dst_hoplimit
ip_compute_csum
ip_mc_dec_group
ip_mc_inc_group
ipmi_create_user
ipmi_destroy_user
ipmi_free_recv_msg
ipmi_poll_interface
ipmi_request_settime
ipmi_set_gets_events
ipmi_set_my_address
ipmi_smi_msg_received
ipmi_unregister_smi
ipmi_validate_addr
ip_route_output_flow
ip_send_check
ip_tos2prio
iput
__ipv6_addr_type
ipv6_ext_hdr
ipv6_find_hdr
ipv6_skip_exthdr
ipv6_stub
irq_cpu_rmap_add
irq_create_mapping
__irq_domain_add
irq_domain_remove
irq_find_mapping
irq_modify_status
irq_poll_complete
irq_poll_disable
irq_poll_enable
irq_poll_init
irq_poll_sched
irq_set_affinity_hint
irq_set_affinity_notifier
irq_set_chip_and_handler_name
irq_to_desc
is_acpi_device_node
iscsi_boot_create_ethernet
iscsi_boot_create_host_kset
iscsi_boot_create_initiator
iscsi_boot_create_target
iscsi_boot_destroy_kset
__iscsi_complete_pdu
iscsi_complete_scsi_task
iscsi_conn_bind
iscsi_conn_failure
iscsi_conn_get_param
iscsi_conn_send_pdu
iscsi_conn_setup
iscsi_conn_start
iscsi_conn_stop
iscsi_conn_teardown
iscsi_create_endpoint
iscsi_create_iface
iscsi_destroy_endpoint
iscsi_destroy_iface
iscsi_eh_abort
iscsi_eh_device_reset
iscsi_eh_recover_target
iscsi_eh_session_reset
iscsi_get_port_speed_name
iscsi_get_port_state_name
__iscsi_get_task
iscsi_host_add
iscsi_host_alloc
iscsi_host_for_each_session
iscsi_host_free
iscsi_host_get_param
iscsi_host_remove
iscsi_itt_to_task
iscsi_lookup_endpoint
iscsi_offload_mesg
__iscsi_put_task
iscsi_put_task
iscsi_queuecommand
iscsi_register_transport
iscsi_session_failure
iscsi_session_get_param
iscsi_session_recovery_timedout
iscsi_session_setup
iscsi_session_teardown
iscsi_set_param
iscsi_suspend_queue
iscsi_target_alloc
iscsi_unregister_transport
is_uv_system
iw_cm_accept
iw_cm_connect
iw_cm_disconnect
iw_cm_init_qp_attr
iw_cm_listen
iw_cm_reject
iwcm_reject_msg
iw_create_cm_id
iw_destroy_cm_id
jiffies
jiffies_64
jiffies_to_msecs
jiffies_to_timespec64
jiffies_to_usecs
kallsyms_lookup_name
kasprintf
kernel_fpu_begin
kernel_fpu_end
kernel_recvmsg
kernel_sendmsg
kernel_setsockopt
kfree
kfree_call_rcu
kfree_const
kfree_skb
kgdb_active
kgdb_breakpoint
kill_fasync
__kmalloc
kmalloc_caches
__kmalloc_node
kmalloc_order_trace
kmem_cache_alloc
kmem_cache_alloc_node
kmem_cache_alloc_node_trace
kmem_cache_alloc_trace
kmem_cache_create
kmem_cache_create_usercopy
kmem_cache_destroy
kmem_cache_free
kmem_cache_shrink
kmemdup
kobject_add
kobject_create_and_add
kobject_del
kobject_get
kobject_init
kobject_init_and_add
kobject_put
kobject_set_name
kobject_uevent
kobject_uevent_env
krealloc
kset_create_and_add
kset_find_obj
kset_register
kset_unregister
ksize
kstrdup
kstrdup_const
kstrndup
kstrtobool
kstrtobool_from_user
kstrtoint
kstrtoint_from_user
kstrtoll
kstrtoll_from_user
kstrtou16
kstrtou8
kstrtouint
kstrtouint_from_user
kstrtoul_from_user
kstrtoull
kstrtoull_from_user
kthread_bind
kthread_create_on_node
kthread_park
kthread_should_stop
kthread_stop
kthread_unpark
ktime_get
ktime_get_coarse_real_ts64
ktime_get_raw
ktime_get_raw_ts64
ktime_get_real_seconds
ktime_get_real_ts64
ktime_get_seconds
ktime_get_ts64
ktime_get_with_offset
kvasprintf
kvfree
kvmalloc_node
kzfree
led_classdev_resume
led_classdev_suspend
led_classdev_unregister
libfc_vport_create
__list_add_valid
__list_del_entry_valid
llist_add_batch
__local_bh_enable_ip
__lock_page
lock_page_memcg
lockref_get
lock_sock_nested
make_kgid
make_kuid
mark_page_accessed
match_strdup
match_string
match_token
mdio45_probe
mdiobus_alloc_size
mdiobus_free
mdiobus_get_phy
__mdiobus_register
mdiobus_unregister
mdio_mii_ioctl
memchr
memchr_inv
memcmp
memcpy
memdup_user
memdup_user_nul
memmove
memory_read_from_buffer
memparse
mempool_alloc
mempool_alloc_slab
mempool_create
mempool_create_node
mempool_destroy
mempool_free
mempool_free_slab
mempool_kfree
mempool_kmalloc
memscan
mem_section
memset
memzero_explicit
metadata_dst_alloc
mfd_add_devices
mfd_remove_devices
misc_deregister
misc_register
mlxfw_firmware_flash
__mmdrop
mmput
__mmu_notifier_register
mmu_notifier_register
mmu_notifier_unregister
mmu_notifier_unregister_no_release
mod_delayed_work_on
mod_timer
mod_timer_pending
__module_get
module_layout
module_put
module_refcount
__msecs_to_jiffies
msleep
msleep_interruptible
__mutex_init
mutex_lock
mutex_lock_interruptible
mutex_lock_killable
mutex_trylock
mutex_unlock
mxm_wmi_call_mxds
mxm_wmi_call_mxmx
mxm_wmi_supported
__napi_alloc_skb
napi_complete_done
napi_consume_skb
napi_disable
napi_get_frags
napi_gro_flush
napi_gro_frags
napi_gro_receive
napi_hash_del
__napi_schedule
__napi_schedule_irqoff
napi_schedule_prep
__ndelay
ndo_dflt_bridge_getlink
ndo_dflt_fdb_add
nd_tbl
neigh_destroy
__neigh_event_send
neigh_lookup
netdev_alloc_frag
__netdev_alloc_skb
netdev_bind_sb_channel_queue
netdev_crit
netdev_err
netdev_features_change
netdev_info
netdev_lower_get_next
netdev_master_upper_dev_get
netdev_master_upper_dev_get_rcu
netdev_notice
netdev_printk
netdev_reset_tc
netdev_rss_key_fill
netdev_rx_handler_register
netdev_rx_handler_unregister
netdev_set_num_tc
netdev_set_sb_channel
netdev_set_tc_queue
netdev_unbind_sb_channel
netdev_update_features
netdev_walk_all_upper_dev_rcu
netdev_warn
netif_carrier_off
netif_carrier_on
netif_device_attach
netif_device_detach
netif_get_num_default_rss_queues
netif_napi_add
netif_napi_del
netif_receive_skb
netif_rx
netif_schedule_queue
netif_set_real_num_rx_queues
netif_set_real_num_tx_queues
netif_set_xps_queue
netif_tx_stop_all_queues
netif_tx_wake_queue
netlink_broadcast
netlink_unicast
net_ratelimit
nla_find
nla_parse
nla_put
nla_validate
node_data
__node_distance
node_states
node_to_cpumask_map
no_llseek
nonseekable_open
noop_llseek
nr_cpu_ids
nr_node_ids
nsecs_to_jiffies
ns_to_timespec
ns_to_timespec64
ns_to_timeval
numa_node
nvme_alloc_request
nvme_cancel_request
nvme_change_ctrl_state
nvme_cleanup_cmd
nvme_complete_async_event
nvme_complete_rq
nvme_disable_ctrl
nvme_enable_ctrl
nvme_fc_register_localport
nvme_fc_register_remoteport
nvme_fc_set_remoteport_devloss
nvme_fc_unregister_localport
nvme_fc_unregister_remoteport
nvme_init_ctrl
nvme_init_identify
nvme_io_timeout
nvme_kill_queues
nvme_remove_namespaces
nvme_reset_ctrl
nvme_reset_ctrl_sync
nvme_set_queue_count
nvme_setup_cmd
nvme_shutdown_ctrl
nvme_start_ctrl
nvme_start_freeze
nvme_start_queues
nvme_stop_ctrl
nvme_stop_queues
nvme_submit_sync_cmd
nvmet_fc_rcv_fcp_abort
nvmet_fc_rcv_fcp_req
nvmet_fc_rcv_ls_req
nvmet_fc_register_targetport
nvmet_fc_unregister_targetport
nvme_unfreeze
nvme_uninit_ctrl
nvme_wait_freeze
nvme_wait_freeze_timeout
nvme_wq
of_led_classdev_register
on_each_cpu
orderly_poweroff
out_of_line_wait_on_bit
out_of_line_wait_on_bit_lock
override_creds
__page_file_index
__page_frag_cache_drain
page_frag_free
__page_mapcount
page_mapped
page_offset_base
page_pool_alloc_pages
page_pool_create
page_pool_destroy
__page_pool_put_page
pagevec_lookup_range
pagevec_lookup_range_tag
__pagevec_release
panic
panic_notifier_list
param_array_ops
param_get_int
param_ops_bool
param_ops_byte
param_ops_charp
param_ops_int
param_ops_long
param_ops_short
param_ops_string
param_ops_uint
param_ops_ullong
param_ops_ulong
param_ops_ushort
param_set_bool
param_set_int
pat_enabled
pci_alloc_irq_vectors_affinity
pci_assign_unassigned_bus_resources
pcibios_resource_to_bus
pci_bus_resource_n
pci_bus_type
pci_cfg_access_lock
pci_cfg_access_unlock
pci_choose_state
pci_cleanup_aer_uncorrect_error_status
pci_clear_master
pci_clear_mwi
pci_d3cold_disable
pci_dev_driver
pci_dev_get
pci_device_is_present
pci_dev_present
pci_dev_put
pci_disable_device
pci_disable_link_state
pci_disable_msi
pci_disable_msix
pci_disable_pcie_error_reporting
pci_disable_rom
pci_disable_sriov
pcie_bandwidth_available
pcie_capability_clear_and_set_word
pcie_capability_read_dword
pcie_capability_read_word
pcie_capability_write_word
pcie_flr
pcie_get_speed_cap
pcie_get_width_cap
pci_enable_atomic_ops_to_root
pci_enable_device
pci_enable_device_mem
pci_enable_msi
pci_enable_msix_range
pci_enable_pcie_error_reporting
pci_enable_rom
pci_enable_sriov
pci_enable_wake
pcie_print_link_status
pcie_relaxed_ordering_enabled
pcie_set_readrq
pci_find_capability
pci_find_ext_capability
pci_free_irq
pci_free_irq_vectors
pci_get_class
pci_get_device
pci_get_domain_bus_and_slot
pci_get_slot
pci_ignore_hotplug
pci_intx
pci_iomap
pci_ioremap_bar
pci_iounmap
pci_irq_get_affinity
pci_irq_vector
pci_map_rom
pcim_enable_device
pcim_iomap_regions
pcim_iomap_table
pci_num_vf
pci_platform_rom
pci_prepare_to_sleep
pci_read_config_byte
pci_read_config_dword
pci_read_config_word
pci_read_vpd
__pci_register_driver
pci_release_regions
pci_release_resource
pci_release_selected_regions
pci_request_irq
pci_request_regions
pci_request_selected_regions
pci_rescan_bus
pci_resize_resource
pci_restore_state
pci_save_state
pci_select_bars
pci_set_master
pci_set_mwi
pci_set_power_state
pci_sriov_configure_simple
pci_sriov_get_totalvfs
pci_sriov_set_totalvfs
pci_stop_and_remove_bus_device
pci_stop_and_remove_bus_device_locked
pci_try_set_mwi
pci_unmap_rom
pci_unregister_driver
pci_vfs_assigned
pci_vpd_find_info_keyword
pci_vpd_find_tag
pci_wait_for_pending_transaction
pci_wake_from_d3
pci_walk_bus
pci_write_config_byte
pci_write_config_dword
pci_write_config_word
pcix_set_mmrbc
PDE_DATA
__per_cpu_offset
perf_tp_event
perf_trace_buf_alloc
perf_trace_run_bpf_submit
pgprot_writecombine
phy_attached_info
phy_connect
phy_disconnect
phy_ethtool_ksettings_get
phy_ethtool_ksettings_set
phy_ethtool_sset
phy_mii_ioctl
phys_base
physical_mask
phy_start
phy_start_aneg
phy_stop
pid_task
pid_vnr
platform_bus_type
pm_genpd_add_device
pm_genpd_init
pm_genpd_remove_device
pm_power_off
pm_runtime_allow
__pm_runtime_disable
pm_runtime_enable
pm_runtime_forbid
__pm_runtime_idle
__pm_runtime_resume
pm_runtime_set_autosuspend_delay
__pm_runtime_set_status
__pm_runtime_suspend
__pm_runtime_use_autosuspend
pm_schedule_suspend
pm_vt_switch_required
pm_vt_switch_unregister
power_supply_is_system_supplied
prandom_bytes
prandom_seed
prandom_u32
__preempt_count
prepare_creds
prepare_to_wait
prepare_to_wait_event
prepare_to_wait_exclusive
print_hex_dump
printk
__printk_ratelimit
print_stack_trace
proc_create_data
proc_dointvec
proc_mkdir
proc_mkdir_mode
proc_remove
proc_symlink
__pskb_copy_fclone
pskb_expand_head
__pskb_pull_tail
___pskb_trim
ptp_clock_event
ptp_clock_index
ptp_clock_register
ptp_clock_unregister
ptp_find_pin
__put_cred
put_device
__put_devmap_managed_page
put_disk
__put_net
__put_page
put_pid
__put_task_struct
put_unused_fd
__put_user_1
__put_user_2
__put_user_4
__put_user_8
pv_cpu_ops
pv_irq_ops
pv_lock_ops
pv_mmu_ops
qed_get_eth_ops
qed_put_eth_ops
queue_delayed_work_on
queued_read_lock_slowpath
queued_write_lock_slowpath
queue_work_on
radix_tree_delete
radix_tree_gang_lookup
__radix_tree_insert
radix_tree_iter_delete
radix_tree_lookup
radix_tree_lookup_slot
radix_tree_next_chunk
__radix_tree_next_slot
radix_tree_preload
radix_tree_tagged
raid_class_attach
raid_class_release
___ratelimit
raw_notifier_call_chain
raw_notifier_chain_register
raw_notifier_chain_unregister
_raw_read_lock
_raw_read_lock_bh
_raw_read_lock_irq
_raw_read_lock_irqsave
_raw_read_unlock_bh
_raw_read_unlock_irqrestore
_raw_spin_lock
_raw_spin_lock_bh
_raw_spin_lock_irq
_raw_spin_lock_irqsave
_raw_spin_trylock
_raw_spin_unlock_bh
_raw_spin_unlock_irqrestore
_raw_write_lock
_raw_write_lock_bh
_raw_write_lock_irq
_raw_write_lock_irqsave
_raw_write_unlock_bh
_raw_write_unlock_irqrestore
rb_erase
__rb_erase_color
rb_first
rb_first_postorder
__rb_insert_augmented
rb_insert_color
rb_next
rb_next_postorder
rb_replace_node
rbt_ib_umem_for_each_in_range
rbt_ib_umem_lookup
rcu_barrier
rdma_addr_cancel
rdma_addr_size
rdma_create_ah
rdma_create_user_ah
rdma_destroy_ah
rdma_destroy_ah_attr
rdma_find_gid
rdma_find_gid_by_port
rdma_get_gid_attr
rdma_is_zero_gid
rdma_move_ah_attr
rdma_nl_multicast
rdma_nl_register
rdma_nl_unicast
rdma_nl_unicast_wait
rdma_nl_unregister
rdma_node_get_transport
rdma_port_get_link_layer
rdma_put_gid_attr
rdma_query_ah
rdma_query_gid
rdma_resolve_ip
rdma_restrack_del
rdma_roce_rescan_device
rdma_set_cq_moderation
rdma_translate_ip
read_cache_pages
recalc_sigpending
refcount_dec_and_mutex_lock
refcount_dec_and_test_checked
refcount_inc_checked
refcount_inc_not_zero_checked
register_acpi_notifier
register_blkdev
__register_chrdev
register_chrdev_region
register_fib_notifier
register_inet6addr_notifier
register_inetaddr_notifier
register_netdev
register_netdevice_notifier
register_netevent_notifier
register_net_sysctl
__register_nmi_handler
register_pernet_subsys
register_reboot_notifier
release_firmware
release_pages
__release_region
release_sock
remap_pfn_range
remove_conflicting_framebuffers
remove_proc_entry
remove_wait_queue
request_firmware
request_firmware_direct
request_firmware_nowait
__request_module
__request_region
request_threaded_irq
reservation_object_add_excl_fence
reservation_object_add_shared_fence
reservation_object_get_fences_rcu
reservation_object_reserve_shared
reservation_object_wait_timeout_rcu
reservation_ww_class
reset_devices
revert_creds
rhashtable_destroy
rhashtable_free_and_destroy
rhashtable_init
rhashtable_insert_slow
rhashtable_walk_enter
rhashtable_walk_exit
rhashtable_walk_next
rhashtable_walk_start_check
rhashtable_walk_stop
rhltable_init
rht_bucket_nested
rht_bucket_nested_insert
ring_buffer_event_data
roce_gid_type_mask_support
round_jiffies
round_jiffies_relative
rps_may_expire_flow
rt6_lookup
rtc_time64_to_tm
rtnl_is_locked
rtnl_lock
rtnl_trylock
rtnl_unlock
sas_attach_transport
sas_disable_tlr
sas_enable_tlr
sas_end_device_alloc
sas_expander_alloc
sas_is_tlr_enabled
sas_phy_add
sas_phy_alloc
sas_phy_free
sas_port_add
sas_port_add_phy
sas_port_alloc_num
sas_port_delete
sas_port_delete_phy
sas_port_free
sas_read_port_mode_page
sas_release_transport
sas_remove_host
sas_rphy_add
save_stack_trace
save_stack_trace_tsk
sbitmap_queue_clear
__sbitmap_queue_get
sched_setscheduler
schedule
schedule_hrtimeout
schedule_hrtimeout_range
schedule_timeout
schedule_timeout_interruptible
schedule_timeout_uninterruptible
scmd_printk
scnprintf
screen_info
scsi_add_device
scsi_add_host_with_dma
scsi_block_requests
scsi_build_sense_buffer
scsi_change_queue_depth
scsi_device_get
scsi_device_lookup
scsi_device_put
scsi_device_set_state
scsi_device_type
scsi_dma_map
scsi_dma_unmap
__scsi_execute
scsi_get_vpd_page
scsi_host_alloc
scsi_host_busy
scsi_host_get
scsi_host_lookup
scsi_host_put
scsi_internal_device_block_nowait
scsi_internal_device_unblock_nowait
scsi_is_fc_rport
scsi_is_host_device
scsi_is_sdev_device
__scsi_iterate_devices
scsilun_to_int
scsi_normalize_sense
scsi_print_command
scsi_register_driver
scsi_remove_device
scsi_remove_host
scsi_remove_target
scsi_sanitize_inquiry_string
scsi_scan_host
scsi_track_queue_full
scsi_unblock_requests
sdev_prefix_printk
secpath_dup
security_d_instantiate
send_sig
seq_lseek
seq_open
seq_printf
seq_putc
seq_puts
seq_read
seq_release
seq_write
set_cpus_allowed_ptr
set_current_groups
set_device_ro
set_disk_ro
set_freezable
set_memory_array_uc
set_memory_array_wb
set_memory_uc
set_memory_wb
set_memory_wc
set_normalized_timespec
set_normalized_timespec64
set_page_dirty
set_page_dirty_lock
set_user_nice
sg_alloc_table_from_pages
sg_copy_from_buffer
sg_copy_to_buffer
sg_free_table
sg_init_table
sg_miter_next
sg_miter_start
sg_miter_stop
sg_next
show_class_attr_string
sigprocmask
si_meminfo
simple_open
simple_read_from_buffer
simple_strtol
simple_strtoul
simple_strtoull
simple_write_to_buffer
single_open
single_release
skb_add_rx_frag
skb_checksum
skb_checksum_help
skb_clone
skb_clone_tx_timestamp
skb_copy
skb_copy_bits
skb_copy_expand
skb_dequeue
__skb_flow_dissect
__skb_get_hash
__skb_gso_segment
skb_gso_validate_mac_len
__skb_pad
skb_pull
skb_push
skb_put
skb_queue_purge
skb_queue_tail
skb_realloc_headroom
skb_store_bits
skb_trim
skb_tstamp_tx
skb_vlan_pop
sme_active
sme_me_mask
smp_call_function_many
smp_call_function_single
snprintf
sn_rtc_cycles_per_second
sock_create_kern
sock_edemux
sock_release
softnet_data
sort
sprintf
__srcu_read_lock
__srcu_read_unlock
sscanf
__stack_chk_fail
starget_for_each_device
strcasecmp
strcat
strchr
strcmp
strcpy
strcspn
strim
strlcat
strlcpy
strlen
strncasecmp
strncat
strncmp
strncpy
strncpy_from_user
strnlen
strnstr
strpbrk
strrchr
strscpy
strsep
strspn
strstr
submit_bio
__sw_hweight32
__sw_hweight64
swiotlb_nr_tbl
switchdev_port_same_parent_id
__symbol_get
__symbol_put
sync_file_create
synchronize_irq
synchronize_net
synchronize_sched
synchronize_srcu
sysfs_add_file_to_group
sysfs_create_bin_file
sysfs_create_file_ns
sysfs_create_group
sysfs_format_mac
sysfs_remove_bin_file
sysfs_remove_file_from_group
sysfs_remove_file_ns
sysfs_remove_group
sysfs_streq
system_state
system_unbound_wq
system_wq
sys_tz
task_active_pid_ns
tasklet_init
tasklet_kill
__tasklet_schedule
__task_pid_nr_ns
tcf_block_cb_register
tcf_block_cb_unregister
tcp_gro_complete
tcp_hashinfo
tc_setup_cb_egdev_register
tc_setup_cb_egdev_unregister
this_cpu_off
time64_to_tm
timecounter_cyc2time
timecounter_init
timecounter_read
to_drm_sched_fence
trace_define_field
trace_event_buffer_commit
trace_event_buffer_lock_reserve
trace_event_buffer_reserve
trace_event_ignore_this_pid
trace_event_raw_init
trace_event_reg
trace_handle_return
__tracepoint_dma_fence_emit
__tracepoint_xdp_exception
trace_print_flags_seq
trace_raw_output_prep
trace_seq_printf
trace_seq_putc
try_module_get
try_wait_for_completion
tsc_khz
ttm_bo_add_to_lru
ttm_bo_clean_mm
ttm_bo_del_sub_from_lru
ttm_bo_device_init
ttm_bo_device_release
ttm_bo_dma_acc_size
ttm_bo_eviction_valuable
ttm_bo_evict_mm
ttm_bo_global_init
ttm_bo_global_release
ttm_bo_init
ttm_bo_init_mm
ttm_bo_init_reserved
ttm_bo_kmap
ttm_bo_kunmap
ttm_bo_lock_delayed_workqueue
ttm_bo_manager_func
ttm_bo_mem_put
ttm_bo_mem_space
ttm_bo_mmap
ttm_bo_move_accel_cleanup
ttm_bo_move_memcpy
ttm_bo_move_to_lru_tail
ttm_bo_move_ttm
ttm_bo_pipeline_move
ttm_bo_put
ttm_bo_unlock_delayed_workqueue
ttm_bo_validate
ttm_bo_wait
ttm_dma_page_alloc_debugfs
ttm_dma_populate
ttm_dma_tt_fini
ttm_dma_tt_init
ttm_dma_unpopulate
ttm_eu_backoff_reservation
ttm_eu_fence_buffer_objects
ttm_eu_reserve_buffers
ttm_fbdev_mmap
ttm_mem_global_init
ttm_mem_global_release
ttm_page_alloc_debugfs
ttm_pool_populate
ttm_pool_unpopulate
ttm_populate_and_map_pages
ttm_sg_tt_init
ttm_tt_bind
ttm_tt_set_placement_caching
ttm_unmap_and_unpopulate_pages
__udelay
udp4_hwcsum
uio_event_notify
__uio_register_device
uio_unregister_device
unlock_page
unlock_page_memcg
unmap_mapping_range
unregister_acpi_notifier
unregister_blkdev
__unregister_chrdev
unregister_chrdev_region
unregister_fib_notifier
unregister_inet6addr_notifier
unregister_inetaddr_notifier
unregister_netdev
unregister_netdevice_notifier
unregister_netevent_notifier
unregister_net_sysctl_table
unregister_nmi_handler
unregister_pernet_subsys
unregister_reboot_notifier
unuse_mm
up
up_read
up_write
__usecs_to_jiffies
use_mm
usleep_range
__uv_cpu_info
_uverbs_alloc
uverbs_copy_to
uverbs_destroy_def_handler
uverbs_get_flags32
uverbs_get_flags64
uverbs_idr_class
__uv_hub_info_list
uv_possible_blades
uv_setup_irq
uv_teardown_irq
vfree
vga_client_register
vgacon_text_force
vga_set_legacy_decoding
vga_switcheroo_client_fb_set
vga_switcheroo_client_probe_defer
vga_switcheroo_fini_domain_pm_ops
vga_switcheroo_handler_flags
vga_switcheroo_init_domain_pm_ops
vga_switcheroo_lock_ddc
vga_switcheroo_process_delayed_switch
vga_switcheroo_register_client
vga_switcheroo_register_handler
vga_switcheroo_unlock_ddc
vga_switcheroo_unregister_client
vga_switcheroo_unregister_handler
vga_tryget
__virt_addr_valid
vlan_dev_real_dev
vlan_dev_vlan_id
vlan_dev_vlan_proto
vmalloc
__vmalloc
vmalloc_base
vmalloc_node
vmalloc_to_page
vmap
vmemmap_base
vm_get_page_prot
vm_insert_page
vm_mmap
vm_munmap
vprintk
vscnprintf
vsnprintf
vsprintf
vunmap
vzalloc
vzalloc_node
wait_for_completion
wait_for_completion_interruptible
wait_for_completion_interruptible_timeout
wait_for_completion_io_timeout
wait_for_completion_killable
wait_for_completion_timeout
wait_on_page_bit
__wake_up
wake_up_bit
__wake_up_locked
wake_up_process
__warn_printk
wmi_evaluate_method
wmi_has_guid
work_busy
write_cache_pages
ww_mutex_lock
ww_mutex_lock_interruptible
ww_mutex_unlock
x86_cpu_to_apicid
x86_dma_fallback_dev
__x86_indirect_thunk_r10
__x86_indirect_thunk_r11
__x86_indirect_thunk_r12
__x86_indirect_thunk_r13
__x86_indirect_thunk_r14
__x86_indirect_thunk_r15
__x86_indirect_thunk_r8
__x86_indirect_thunk_r9
__x86_indirect_thunk_rax
__x86_indirect_thunk_rbp
__x86_indirect_thunk_rbx
__x86_indirect_thunk_rcx
__x86_indirect_thunk_rdi
__x86_indirect_thunk_rdx
__x86_indirect_thunk_rsi
xdp_do_flush_map
xdp_do_redirect
xdp_return_frame
xdp_return_frame_rx_napi
xdp_rxq_info_is_reg
xdp_rxq_info_reg
xdp_rxq_info_reg_mem_model
xdp_rxq_info_unreg
xdp_rxq_info_unused
xfrm_replay_seqhi
xz_dec_end
xz_dec_init
xz_dec_run
yield
zalloc_cpumask_var
zap_vma_ptes
zlib_inflate
zlib_inflateEnd
zlib_inflateInit2
zlib_inflate_workspacesize

[PATCH openEuler-21.03 1/2] mm: add pin memory method for checkpoint and restore
by hejingxian 02 Mar '21
From: Jingxian He <hejingxian(a)huawei.com>
Date: Mon, 1 Mar 2021 17:35:32 +0800
Subject: [PATCH openEuler-21.03 1/2] mm: add pin memory method for checkpoint and restore
hulk inclusion
category: feature
bugzilla: 48159
CVE: N/A
We can use the checkpoint and restore in userspace (criu) method to
dump and restore tasks when updating the kernel. Currently, criu needs
to dump all the memory data of the tasks to files, and when the memory
size is very large (more than 1 GB) the dump can take a long time
(more than one minute).
By pinning the memory data of the tasks and collecting the
corresponding physical page mapping information during the checkpoint,
we can remap the physical pages to the restored tasks after the kernel
upgrade. This pin memory method can restore the task data within one
second.
The pin memory area information is saved in a reserved memblock, which
remains usable across the kernel update.
The pin memory driver provides the following ioctl commands for criu
(a minimal usage sketch follows the list):
1) SET_PIN_MEM_AREA:
Set a pin memory area, which can be remapped to the restored task.
2) CLEAR_PIN_MEM_AREA:
Clear the pin memory area information, which allows the user to reset
the pinned data.
3) REMAP_PIN_MEM_AREA:
Remap the pages of the pin memory to the restored task.
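For illustration, here is a minimal userspace sketch of the ioctl flow.
The structure layout, magic number, and command definitions mirror
drivers/char/pin_memory.c from this patch; the pid value and the
virtual address range are placeholders, not values from the patch.

/* pin_mem_demo.c - minimal sketch of the /dev/pinmem ioctl flow.
 * Definitions mirror drivers/char/pin_memory.c from this patch;
 * the pid and address range below are placeholders.
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>

#define MAX_PIN_MEM_AREA_NUM 16
struct _pin_mem_area {
	unsigned long virt_start;
	unsigned long virt_end;
};
struct pin_mem_area_set {
	unsigned int pid;
	unsigned int area_num;
	struct _pin_mem_area mem_area[MAX_PIN_MEM_AREA_NUM];
};

#define PIN_MEM_MAGIC 0x59
#define SET_PIN_MEM_AREA   _IOW(PIN_MEM_MAGIC, 1, struct pin_mem_area_set)
#define REMAP_PIN_MEM_AREA _IOW(PIN_MEM_MAGIC, 3, int)

int main(void)
{
	struct pin_mem_area_set pmas = {
		.pid = 1234,	/* placeholder: task to checkpoint */
		.area_num = 1,
	};
	int pid = 1234;		/* placeholder: task to restore */
	int fd;

	pmas.mem_area[0].virt_start = 0x400000;	/* placeholder range */
	pmas.mem_area[0].virt_end   = 0x500000;

	fd = open("/dev/pinmem", O_RDWR);
	if (fd < 0) {
		perror("open /dev/pinmem");
		return 1;
	}
	/* Before the kernel update: pin the task's memory area. */
	if (ioctl(fd, SET_PIN_MEM_AREA, &pmas) < 0)
		perror("SET_PIN_MEM_AREA");
	/* After the update: remap the pinned pages into the task
	 * identified by pid (the driver reads the int via the pointer). */
	if (ioctl(fd, REMAP_PIN_MEM_AREA, &pid) < 0)
		perror("REMAP_PIN_MEM_AREA");
	close(fd);
	return 0;
}

FINISH_PIN_MEM_DUMP, also defined in the patch, would be issued once
all areas of all tasks have been dumped.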
Signed-off-by: Jingxian He <hejingxian(a)huawei.com>
Reviewed-by: Wenliang He <hewenliang4(a)huawei.com>
Reviewed-by: Jing Xiangfeng <jingxiangfeng(a)huawei.com>
---
arch/arm64/configs/openeuler_defconfig | 2 +
arch/arm64/kernel/setup.c | 9 +
arch/arm64/mm/init.c | 60 +++
drivers/char/Kconfig | 6 +
drivers/char/Makefile | 1 +
drivers/char/pin_memory.c | 208 ++++++++
include/linux/crash_core.h | 5 +
include/linux/pin_mem.h | 78 +++
kernel/crash_core.c | 11 +
mm/Kconfig | 8 +
mm/Makefile | 1 +
mm/huge_memory.c | 61 +++
mm/memory.c | 59 ++
mm/pin_mem.c | 950 +++++++++++++++++++++++++++++++++
14 files changed, 1459 insertions(+)
create mode 100644 drivers/char/pin_memory.c
create mode 100644 include/linux/pin_mem.h
create mode 100644 mm/pin_mem.c
diff --git a/arch/arm64/configs/openeuler_defconfig b/arch/arm64/configs/openeuler_defconfig
index c5271e7..76fda68 100644
--- a/arch/arm64/configs/openeuler_defconfig
+++ b/arch/arm64/configs/openeuler_defconfig
@@ -1036,6 +1036,7 @@ CONFIG_FRAME_VECTOR=y
# CONFIG_GUP_BENCHMARK is not set
# CONFIG_READ_ONLY_THP_FOR_FS is not set
CONFIG_ARCH_HAS_PTE_SPECIAL=y
+CONFIG_PIN_MEMORY=y
# end of Memory Management options
CONFIG_NET=y
@@ -3282,6 +3283,7 @@ CONFIG_TCG_TIS_ST33ZP24_SPI=y
# CONFIG_RANDOM_TRUST_CPU is not set
# CONFIG_RANDOM_TRUST_BOOTLOADER is not set
+CONFIG_PIN_MEMORY_DEV=m
#
# I2C support
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index c1f1fb9..5e282d3 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -50,6 +50,9 @@
#include <asm/efi.h>
#include <asm/xen/hypervisor.h>
#include <asm/mmu_context.h>
+#ifdef CONFIG_PIN_MEMORY
+#include <linux/pin_mem.h>
+#endif
static int num_standard_resources;
static struct resource *standard_resources;
@@ -260,6 +263,12 @@ static void __init request_standard_resources(void)
quick_kexec_res.end <= res->end)
request_resource(res, &quick_kexec_res);
#endif
+#ifdef CONFIG_PIN_MEMORY
+ if (pin_memory_resource.end &&
+ pin_memory_resource.start >= res->start &&
+ pin_memory_resource.end <= res->end)
+ request_resource(res, &pin_memory_resource);
+#endif
}
}
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index f3e5a66..8ab5aac 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -42,6 +42,9 @@
#include <linux/sizes.h>
#include <asm/tlb.h>
#include <asm/alternative.h>
+#ifdef CONFIG_PIN_MEMORY
+#include <linux/pin_mem.h>
+#endif
#define ARM64_ZONE_DMA_BITS 30
@@ -78,6 +81,55 @@ static void __init reserve_crashkernel(void)
*/
#define MAX_USABLE_RANGES 2
+#ifdef CONFIG_PIN_MEMORY
+struct resource pin_memory_resource = {
+ .name = "Pin memory",
+ .start = 0,
+ .end = 0,
+ .flags = IORESOURCE_MEM,
+ .desc = IORES_DESC_RESERVED
+};
+
+static void __init reserve_pin_memory_res(void)
+{
+ unsigned long long mem_start, mem_len;
+ int ret;
+
+ ret = parse_pin_memory(boot_command_line, memblock_phys_mem_size(),
+ &mem_len, &mem_start);
+ if (ret || !mem_len)
+ return;
+
+ mem_len = PAGE_ALIGN(mem_len);
+
+ if (!memblock_is_region_memory(mem_start, mem_len)) {
+ pr_warn("cannot reserve for pin memory: region is not memory!\n");
+ return;
+ }
+
+ if (memblock_is_region_reserved(mem_start, mem_len)) {
+ pr_warn("cannot reserve for pin memory: region overlaps reserved memory!\n");
+ return;
+ }
+
+ if (!IS_ALIGNED(mem_start, SZ_2M)) {
+ pr_warn("cannot reserve for pin memory: base address is not 2MB aligned\n");
+ return;
+ }
+
+ memblock_reserve(mem_start, mem_len);
+ pr_debug("pin memory resource reserved: 0x%016llx - 0x%016llx (%lld MB)\n",
+ mem_start, mem_start + mem_len, mem_len >> 20);
+
+ pin_memory_resource.start = mem_start;
+ pin_memory_resource.end = mem_start + mem_len - 1;
+}
+#else
+static void __init reserve_pin_memory_res(void)
+{
+}
+#endif /* CONFIG_PIN_MEMORY */
+
#ifdef CONFIG_CRASH_DUMP
static int __init early_init_dt_scan_elfcorehdr(unsigned long node,
const char *uname, int depth, void *data)
@@ -455,6 +507,8 @@ void __init arm64_memblock_init(void)
reserve_park_mem();
#endif
+ reserve_pin_memory_res();
+
reserve_elfcorehdr();
high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
@@ -583,6 +637,12 @@ void __init mem_init(void)
/* this will put all unused low memory onto the freelists */
memblock_free_all();
+#ifdef CONFIG_PIN_MEMORY
+ /* pre alloc the pages for pin memory */
+ init_reserve_page_map((unsigned long)pin_memory_resource.start,
+ (unsigned long)(pin_memory_resource.end - pin_memory_resource.start + 1));
+#endif
+
mem_init_print_info(NULL);
/*
diff --git a/drivers/char/Kconfig b/drivers/char/Kconfig
index d229a2d..fbb94b8 100644
--- a/drivers/char/Kconfig
+++ b/drivers/char/Kconfig
@@ -496,3 +496,9 @@ config RANDOM_TRUST_BOOTLOADER
booloader is trustworthy so it will be added to the kernel's entropy
pool. Otherwise, say N here so it will be regarded as device input that
only mixes the entropy pool.
+
+config PIN_MEMORY_DEV
+ bool "/dev/pinmem character device"
+ default m
+ help
+ pin memory driver
diff --git a/drivers/char/Makefile b/drivers/char/Makefile
index ffce287..71d76fd 100644
--- a/drivers/char/Makefile
+++ b/drivers/char/Makefile
@@ -47,3 +47,4 @@ obj-$(CONFIG_PS3_FLASH) += ps3flash.o
obj-$(CONFIG_XILLYBUS) += xillybus/
obj-$(CONFIG_POWERNV_OP_PANEL) += powernv-op-panel.o
obj-$(CONFIG_ADI) += adi.o
+obj-$(CONFIG_PIN_MEMORY_DEV) += pin_memory.o
diff --git a/drivers/char/pin_memory.c b/drivers/char/pin_memory.c
new file mode 100644
index 0000000..f46e056
--- /dev/null
+++ b/drivers/char/pin_memory.c
@@ -0,0 +1,208 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2021. Huawei Technologies Co., Ltd. All rights reserved.
+ * Pin memory driver for checkpoint and restore.
+ */
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/kprobes.h>
+#include <linux/spinlock.h>
+#include <linux/workqueue.h>
+#include <linux/sched.h>
+#include <linux/mm.h>
+#include <linux/init.h>
+#include <linux/miscdevice.h>
+#include <linux/fs.h>
+#include <linux/mm_types.h>
+#include <linux/processor.h>
+#include <uapi/asm-generic/ioctl.h>
+#include <uapi/asm-generic/mman-common.h>
+#include <uapi/asm/setup.h>
+#include <linux/pin_mem.h>
+#include <linux/sched/mm.h>
+
+#define MAX_PIN_MEM_AREA_NUM 16
+struct _pin_mem_area {
+ unsigned long virt_start;
+ unsigned long virt_end;
+};
+
+struct pin_mem_area_set {
+ unsigned int pid;
+ unsigned int area_num;
+ struct _pin_mem_area mem_area[MAX_PIN_MEM_AREA_NUM];
+};
+
+#define PIN_MEM_MAGIC 0x59
+#define _SET_PIN_MEM_AREA 1
+#define _CLEAR_PIN_MEM_AREA 2
+#define _REMAP_PIN_MEM_AREA 3
+#define _FINISH_PIN_MEM_DUMP 4
+#define _PIN_MEM_IOC_MAX_NR 4
+#define SET_PIN_MEM_AREA _IOW(PIN_MEM_MAGIC, _SET_PIN_MEM_AREA, struct pin_mem_area_set)
+#define CLEAR_PIN_MEM_AREA _IOW(PIN_MEM_MAGIC, _CLEAR_PIN_MEM_AREA, int)
+#define REMAP_PIN_MEM_AREA _IOW(PIN_MEM_MAGIC, _REMAP_PIN_MEM_AREA, int)
+#define FINISH_PIN_MEM_DUMP _IOW(PIN_MEM_MAGIC, _FINISH_PIN_MEM_DUMP, int)
+static int set_pin_mem(struct pin_mem_area_set *pmas)
+{
+ int i;
+ int ret = 0;
+ struct _pin_mem_area *pma;
+ struct mm_struct *mm;
+ struct task_struct *task;
+ struct pid *pid_s;
+
+ pid_s = find_get_pid(pmas->pid);
+ if (!pid_s) {
+ pr_warn("Get pid struct fail:%d.\n", pmas->pid);
+ return -EFAULT;
+ }
+ rcu_read_lock();
+ task = pid_task(pid_s, PIDTYPE_PID);
+ if (!task) {
+ pr_warn("Get task struct fail:%d.\n", pmas->pid);
+ goto fail;
+ }
+ mm = get_task_mm(task);
+ for (i = 0; i < pmas->area_num; i++) {
+ pma = &(pmas->mem_area[i]);
+ ret = pin_mem_area(task, mm, pma->virt_start, pma->virt_end);
+ if (ret) {
+ mmput(mm);
+ goto fail;
+ }
+ }
+ mmput(mm);
+ rcu_read_unlock();
+ put_pid(pid_s);
+ return ret;
+
+fail:
+ rcu_read_unlock();
+ put_pid(pid_s);
+ return -EFAULT;
+}
+
+static int set_pin_mem_area(unsigned long arg)
+{
+ struct pin_mem_area_set pmas;
+ void __user *buf = (void __user *)arg;
+
+ if (!access_ok(buf, sizeof(pmas)))
+ return -EFAULT;
+ if (copy_from_user(&pmas, buf, sizeof(pmas)))
+ return -EINVAL;
+ if (pmas.area_num > MAX_PIN_MEM_AREA_NUM) {
+ pr_warn("Input area_num is too large.\n");
+ return -EINVAL;
+ }
+
+ return set_pin_mem(&pmas);
+}
+
+static int pin_mem_remap(unsigned long arg)
+{
+ int pid;
+ struct task_struct *task;
+ struct mm_struct *mm;
+ vm_fault_t ret;
+ void __user *buf = (void __user *)arg;
+ struct pid *pid_s;
+
+ if (!access_ok(buf, sizeof(int)))
+ return -EINVAL;
+ if (copy_from_user(&pid, buf, sizeof(int)))
+ return -EINVAL;
+
+ pid_s = find_get_pid(pid);
+ if (!pid_s) {
+ pr_warn("Get pid struct fail:%d.\n", pid);
+ return -EINVAL;
+ }
+ rcu_read_lock();
+ task = pid_task(pid_s, PIDTYPE_PID);
+ if (!task) {
+ pr_warn("Get task struct fail:%d.\n", pid);
+ goto fault;
+ }
+ mm = get_task_mm(task);
+ ret = do_mem_remap(pid, mm);
+ if (ret) {
+ pr_warn("Handle pin memory remap fail.\n");
+ mmput(mm);
+ goto fault;
+ }
+ mmput(mm);
+ rcu_read_unlock();
+ put_pid(pid_s);
+ return 0;
+
+fault:
+ rcu_read_unlock();
+ put_pid(pid_s);
+ return -EFAULT;
+}
+
+static long pin_memory_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+{
+ long ret = 0;
+
+ if (_IOC_TYPE(cmd) != PIN_MEM_MAGIC)
+ return -EINVAL;
+ if (_IOC_NR(cmd) > _PIN_MEM_IOC_MAX_NR)
+ return -EINVAL;
+
+ switch (cmd) {
+ case SET_PIN_MEM_AREA:
+ ret = set_pin_mem_area(arg);
+ break;
+ case CLEAR_PIN_MEM_AREA:
+ clear_pin_memory_record();
+ break;
+ case REMAP_PIN_MEM_AREA:
+ ret = pin_mem_remap(arg);
+ break;
+ case FINISH_PIN_MEM_DUMP:
+ ret = finish_pin_mem_dump();
+ break;
+ default:
+ return -EINVAL;
+ }
+ return ret;
+}
+
+static const struct file_operations pin_memory_fops = {
+ .owner = THIS_MODULE,
+ .unlocked_ioctl = pin_memory_ioctl,
+ .compat_ioctl = pin_memory_ioctl,
+};
+
+static struct miscdevice pin_memory_miscdev = {
+ .minor = MISC_DYNAMIC_MINOR,
+ .name = "pinmem",
+ .fops = &pin_memory_fops,
+};
+
+static int pin_memory_init(void)
+{
+ int err = misc_register(&pin_memory_miscdev);
+
+ if (!err)
+ pr_info("pin_memory init\n");
+ else
+ pr_warn("pin_memory init failed!\n");
+ return err;
+}
+
+static void pin_memory_exit(void)
+{
+ misc_deregister(&pin_memory_miscdev);
+ pr_info("pin_memory ko exists!\n");
+}
+
+module_init(pin_memory_init);
+module_exit(pin_memory_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Euler");
+MODULE_DESCRIPTION("pin memory");
diff --git a/include/linux/crash_core.h b/include/linux/crash_core.h
index fc0ef33..30f0df3 100644
--- a/include/linux/crash_core.h
+++ b/include/linux/crash_core.h
@@ -87,4 +87,9 @@ int parse_crashkernel_high(char *cmdline, unsigned long long system_ram,
int parse_crashkernel_low(char *cmdline, unsigned long long system_ram,
unsigned long long *crash_size, unsigned long long *crash_base);
+#ifdef CONFIG_PIN_MEMORY
+int __init parse_pin_memory(char *cmdline, unsigned long long system_ram,
+ unsigned long long *pin_size, unsigned long long *pin_base);
+#endif
+
#endif /* LINUX_CRASH_CORE_H */
diff --git a/include/linux/pin_mem.h b/include/linux/pin_mem.h
new file mode 100644
index 0000000..bc8b03e
--- /dev/null
+++ b/include/linux/pin_mem.h
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2021. Huawei Technologies Co., Ltd. All rights reserved.
+ * Provide the pin memory method for checkpoint and restore task.
+ */
+#ifndef _LINUX_PIN_MEMORY_H
+#define _LINUX_PIN_MEMORY_H
+
+#ifdef CONFIG_PIN_MEMORY
+#include <linux/errno.h>
+#include <linux/mm_types.h>
+#include <linux/err.h>
+#ifdef CONFIG_ARM64
+#include <linux/ioport.h>
+#endif
+
+#define PAGE_BUDDY_MAPCOUNT_VALUE (~PG_buddy)
+
+#define COLLECT_PAGES_FINISH 0
+#define COLLECT_PAGES_NEED_CONTINUE 1
+#define COLLECT_PAGES_FAIL -1
+
+#define COMPOUND_PAD_MASK 0xffffffff
+#define COMPOUND_PAD_START 0x88
+#define COMPOUND_PAD_DELTA 0x40
+#define LIST_POISON4 0xdead000000000400
+#define PAGE_FLAGS_CHECK_RESERVED (1UL << PG_reserved)
+#define SHA256_DIGEST_SIZE 32
+#define next_pme(pme) ((unsigned long *)(pme + 1) + pme->nr_pages)
+#define PIN_MEM_DUMP_MAGIC 0xfeab000000001acd
+struct page_map_entry {
+ unsigned long virt_addr;
+ unsigned int nr_pages;
+ unsigned int is_huge_page;
+ unsigned long redirect_start;
+ unsigned long phy_addr_array[0];
+};
+
+struct page_map_info {
+ int pid;
+ int pid_reserved;
+ unsigned int entry_num;
+ int disable_free_page;
+ struct page_map_entry *pme;
+};
+
+struct pin_mem_dump_info {
+ char sha_digest[SHA256_DIGEST_SIZE];
+ unsigned long magic;
+ unsigned int pin_pid_num;
+ struct page_map_info pmi_array[0];
+};
+
+struct redirect_info {
+ unsigned int redirect_pages;
+ unsigned int redirect_index[0];
+};
+
+extern struct page_map_info *get_page_map_info(int pid);
+extern struct page_map_info *create_page_map_info(int pid);
+extern vm_fault_t do_mem_remap(int pid, struct mm_struct *mm);
+extern vm_fault_t do_anon_page_remap(struct vm_area_struct *vma, unsigned long address,
+ pmd_t *pmd, struct page *page);
+extern void clear_pin_memory_record(void);
+extern int pin_mem_area(struct task_struct *task, struct mm_struct *mm,
+ unsigned long start_addr, unsigned long end_addr);
+extern vm_fault_t do_anon_huge_page_remap(struct vm_area_struct *vma, unsigned long address,
+ pmd_t *pmd, struct page *page);
+extern int finish_pin_mem_dump(void);
+
+/* reserve space for pin memory*/
+#ifdef CONFIG_ARM64
+extern struct resource pin_memory_resource;
+#endif
+extern void init_reserve_page_map(unsigned long map_addr, unsigned long map_size);
+
+#endif /* CONFIG_PIN_MEMORY */
+#endif /* _LINUX_PIN_MEMORY_H */
diff --git a/kernel/crash_core.c b/kernel/crash_core.c
index bfed474..2407de3 100644
--- a/kernel/crash_core.c
+++ b/kernel/crash_core.c
@@ -450,6 +450,17 @@ void __init reserve_crashkernel(void)
}
#endif /* CONFIG_ARCH_WANT_RESERVE_CRASH_KERNEL */
+#ifdef CONFIG_PIN_MEMORY
+int __init parse_pin_memory(char *cmdline,
+ unsigned long long system_ram,
+ unsigned long long *pin_size,
+ unsigned long long *pin_base)
+{
+ return __parse_crashkernel(cmdline, system_ram, pin_size, pin_base,
+ "pinmemory=", NULL);
+}
+#endif
+
Elf_Word *append_elf_note(Elf_Word *buf, char *name, unsigned int type,
void *data, size_t data_len)
{
diff --git a/mm/Kconfig b/mm/Kconfig
index 390165f..930dc13 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -859,4 +859,12 @@ config ARCH_HAS_HUGEPD
config MAPPING_DIRTY_HELPERS
bool
+config PIN_MEMORY
+ bool "Support for pin memory"
+ depends on CHECKPOINT_RESTORE
+ help
+ Say y here to enable the pin memory feature for checkpoint
+ and restore. We can pin the memory data of tasks and collect
+ the corresponding physical pages mapping info in checkpoint,
+ and remap the physical pages to restore tasks in restore.
endmenu
diff --git a/mm/Makefile b/mm/Makefile
index d73aed0..4963827 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -120,3 +120,4 @@ obj-$(CONFIG_MEMFD_CREATE) += memfd.o
obj-$(CONFIG_MAPPING_DIRTY_HELPERS) += mapping_dirty_helpers.o
obj-$(CONFIG_PTDUMP_CORE) += ptdump.o
obj-$(CONFIG_PAGE_REPORTING) += page_reporting.o
+obj-$(CONFIG_PIN_MEMORY) += pin_mem.o
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 0bc4a2c..8a11d30 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2996,3 +2996,64 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
update_mmu_cache_pmd(vma, address, pvmw->pmd);
}
#endif
+
+#ifdef CONFIG_PIN_MEMORY
+vm_fault_t do_anon_huge_page_remap(struct vm_area_struct *vma, unsigned long address,
+ pmd_t *pmd, struct page *page)
+{
+ gfp_t gfp;
+ pgtable_t pgtable;
+ spinlock_t *ptl;
+ pmd_t entry;
+ vm_fault_t ret = 0;
+
+ if (unlikely(anon_vma_prepare(vma)))
+ return VM_FAULT_OOM;
+ if (unlikely(khugepaged_enter(vma, vma->vm_flags)))
+ return VM_FAULT_OOM;
+ gfp = alloc_hugepage_direct_gfpmask(vma);
+ prep_transhuge_page(page);
+ if (mem_cgroup_charge(page, vma->vm_mm, gfp)) {
+ put_page(page);
+ count_vm_event(THP_FAULT_FALLBACK);
+ count_vm_event(THP_FAULT_FALLBACK_CHARGE);
+ return VM_FAULT_FALLBACK;
+ }
+ cgroup_throttle_swaprate(page, gfp);
+
+ pgtable = pte_alloc_one(vma->vm_mm);
+ if (unlikely(!pgtable)) {
+ ret = VM_FAULT_OOM;
+ goto release;
+ }
+ __SetPageUptodate(page);
+ ptl = pmd_lock(vma->vm_mm, pmd);
+ if (unlikely(!pmd_none(*pmd))) {
+ goto unlock_release;
+ } else {
+ ret = check_stable_address_space(vma->vm_mm);
+ if (ret)
+ goto unlock_release;
+ entry = mk_huge_pmd(page, vma->vm_page_prot);
+ entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
+ page_add_new_anon_rmap(page, vma, address, true);
+ lru_cache_add_inactive_or_unevictable(page, vma);
+ pgtable_trans_huge_deposit(vma->vm_mm, pmd, pgtable);
+ set_pmd_at(vma->vm_mm, address, pmd, entry);
+ add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
+ mm_inc_nr_ptes(vma->vm_mm);
+ spin_unlock(ptl);
+ count_vm_event(THP_FAULT_ALLOC);
+ count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC);
+ }
+
+ return 0;
+unlock_release:
+ spin_unlock(ptl);
+release:
+ if (pgtable)
+ pte_free(vma->vm_mm, pgtable);
+ put_page(page);
+ return ret;
+}
+#endif
diff --git a/mm/memory.c b/mm/memory.c
index 50632c4..7b7f1a7 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5248,3 +5248,62 @@ void ptlock_free(struct page *page)
kmem_cache_free(page_ptl_cachep, page->ptl);
}
#endif
+
+#ifdef CONFIG_PIN_MEMORY
+vm_fault_t do_anon_page_remap(struct vm_area_struct *vma, unsigned long address,
+ pmd_t *pmd, struct page *page)
+{
+ pte_t entry;
+ spinlock_t *ptl;
+ pte_t *pte;
+ vm_fault_t ret = 0;
+
+ if (pte_alloc(vma->vm_mm, pmd))
+ return VM_FAULT_OOM;
+
+ /* See the comment in pte_alloc_one_map() */
+ if (unlikely(pmd_trans_unstable(pmd)))
+ return 0;
+
+ /* Allocate our own private page. */
+ if (unlikely(anon_vma_prepare(vma)))
+ goto oom;
+
+ if (mem_cgroup_charge(page, vma->vm_mm, GFP_KERNEL))
+ goto oom_free_page;
+ cgroup_throttle_swaprate(page, GFP_KERNEL);
+
+ __SetPageUptodate(page);
+
+ entry = mk_pte(page, vma->vm_page_prot);
+ if (vma->vm_flags & VM_WRITE)
+ entry = pte_mkwrite(pte_mkdirty(entry));
+ pte = pte_offset_map_lock(vma->vm_mm, pmd, address,
+ &ptl);
+ if (!pte_none(*pte)) {
+ ret = VM_FAULT_FALLBACK;
+ goto release;
+ }
+
+ ret = check_stable_address_space(vma->vm_mm);
+ if (ret)
+ goto release;
+ inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
+ page_add_new_anon_rmap(page, vma, address, false);
+ lru_cache_add_inactive_or_unevictable(page, vma);
+
+ set_pte_at(vma->vm_mm, address, pte, entry);
+ /* No need to invalidate - it was non-present before */
+ update_mmu_cache(vma, address, pte);
+unlock:
+ pte_unmap_unlock(pte, ptl);
+ return ret;
+release:
+ put_page(page);
+ goto unlock;
+oom_free_page:
+ put_page(page);
+oom:
+ return VM_FAULT_OOM;
+}
+#endif
diff --git a/mm/pin_mem.c b/mm/pin_mem.c
new file mode 100644
index 0000000..0a143b6
--- /dev/null
+++ b/mm/pin_mem.c
@@ -0,0 +1,950 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2021. Huawei Technologies Co., Ltd. All rights reserved.
+ * Provide the pin memory method for checkpoint and restore task.
+ */
+#ifdef CONFIG_PIN_MEMORY
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/time.h>
+#include <linux/sched/cputime.h>
+#include <linux/tick.h>
+#include <linux/mm.h>
+#include <linux/pin_mem.h>
+#include <linux/idr.h>
+#include <linux/page-isolation.h>
+#include <linux/sched/mm.h>
+#include <linux/ctype.h>
+#include <linux/highmem.h>
+#include <crypto/sha.h>
+
+#define MAX_PIN_PID_NUM 128
+static DEFINE_SPINLOCK(page_map_entry_lock);
+
+struct pin_mem_dump_info *pin_mem_dump_start;
+unsigned int pin_pid_num;
+static unsigned int *pin_pid_num_addr;
+static unsigned long __page_map_entry_start;
+static unsigned long page_map_entry_end;
+static struct page_map_info *user_space_reserve_start;
+static struct page_map_entry *page_map_entry_start;
+unsigned int max_pin_pid_num __read_mostly;
+unsigned long redirect_space_size;
+unsigned long redirect_space_start;
+#define DEFAULT_REDIRECT_SPACE_SIZE 0x100000
+
+static int __init setup_max_pin_pid_num(char *str)
+{
+ int ret = 0;
+
+ if (!str)
+ goto out;
+
+ ret = kstrtouint(str, 10, &max_pin_pid_num);
+out:
+ if (ret) {
+ pr_warn("Unable to parse max pin pid num.\n");
+ } else {
+ if (max_pin_pid_num > MAX_PIN_PID_NUM) {
+ max_pin_pid_num = 0;
+ pr_warn("Input max_pin_pid_num is too large.\n");
+ }
+ }
+ return ret;
+}
+early_param("max_pin_pid_num", setup_max_pin_pid_num);
+
+static int __init setup_redirect_space_size(char *str)
+{
+ if (!str)
+ goto out;
+
+ redirect_space_size = memparse(str, NULL);
+out:
+ if (!redirect_space_size) {
+ pr_warn("Unable to parse redirect space size, use the default value.\n");
+ redirect_space_size = DEFAULT_REDIRECT_SPACE_SIZE;
+ }
+ return 0;
+}
+early_param("redirect_space_size", setup_redirect_space_size);
+
+struct page_map_info *create_page_map_info(int pid)
+{
+ struct page_map_info *new;
+
+ if (!user_space_reserve_start)
+ return NULL;
+
+ if (pin_pid_num >= max_pin_pid_num) {
+ pr_warn("Pin pid num too large than max_pin_pid_num, fail create: %d!", pid);
+ return NULL;
+ }
+ new = (struct page_map_info *)(user_space_reserve_start + pin_pid_num);
+ new->pid = pid;
+ new->pme = NULL;
+ new->entry_num = 0;
+ new->pid_reserved = false;
+ new->disable_free_page = false;
+ (*pin_pid_num_addr)++;
+ pin_pid_num++;
+ return new;
+}
+EXPORT_SYMBOL_GPL(create_page_map_info);
+
+struct page_map_info *get_page_map_info(int pid)
+{
+ int i;
+
+ if (!user_space_reserve_start)
+ return NULL;
+
+ for (i = 0; i < pin_pid_num; i++) {
+ if (user_space_reserve_start[i].pid == pid)
+ return &(user_space_reserve_start[i]);
+ }
+ return NULL;
+}
+EXPORT_SYMBOL_GPL(get_page_map_info);
+
+static struct page *find_head_page(struct page *page)
+{
+ struct page *p = page;
+
+ while (!PageBuddy(p)) {
+ if (PageLRU(p))
+ return NULL;
+ p--;
+ }
+ return p;
+}
+
+static void split_page_area_left(struct zone *zone, struct free_area *area, struct page *page,
+ unsigned long size, int order)
+{
+ unsigned long cur_size = 1 << order;
+ unsigned long total_size = 0;
+
+ while (size && cur_size > size) {
+ cur_size >>= 1;
+ order--;
+ area--;
+ if (cur_size <= size) {
+ list_add(&page[total_size].lru, &area->free_list[MIGRATE_MOVABLE]);
+ atomic_set(&(page[total_size]._mapcount), PAGE_BUDDY_MAPCOUNT_VALUE);
+ set_page_private(&page[total_size], order);
+ set_pageblock_migratetype(&page[total_size], MIGRATE_MOVABLE);
+ area->nr_free++;
+ total_size += cur_size;
+ size -= cur_size;
+ }
+ }
+}
+
+static void split_page_area_right(struct zone *zone, struct free_area *area, struct page *page,
+ unsigned long size, int order)
+{
+ unsigned long cur_size = 1 << order;
+ struct page *right_page, *head_page;
+
+ right_page = page + size;
+ while (size && cur_size > size) {
+ cur_size >>= 1;
+ order--;
+ area--;
+ if (cur_size <= size) {
+ head_page = right_page - cur_size;
+ list_add(&head_page->lru, &area->free_list[MIGRATE_MOVABLE]);
+ atomic_set(&(head_page->_mapcount), PAGE_BUDDY_MAPCOUNT_VALUE);
+ set_page_private(head_page, order);
+ set_pageblock_migratetype(head_page, MIGRATE_MOVABLE);
+ area->nr_free++;
+ size -= cur_size;
+ right_page = head_page;
+ }
+ }
+}
+
+void reserve_page_from_buddy(unsigned long nr_pages, struct page *page)
+{
+ unsigned int current_order;
+ struct page *page_end;
+ struct free_area *area;
+ struct zone *zone;
+ struct page *head_page;
+
+ head_page = find_head_page(page);
+ if (!head_page) {
+ pr_warn("Find page head fail.");
+ return;
+ }
+ current_order = head_page->private;
+ page_end = head_page + (1 << current_order);
+ zone = page_zone(head_page);
+ area = &(zone->free_area[current_order]);
+ list_del(&head_page->lru);
+ atomic_set(&head_page->_mapcount, -1);
+ set_page_private(head_page, 0);
+ area->nr_free--;
+ if (head_page != page)
+ split_page_area_left(zone, area, head_page,
+ (unsigned long)(page - head_page), current_order);
+ page = page + nr_pages;
+ if (page < page_end) {
+ split_page_area_right(zone, area, page,
+ (unsigned long)(page_end - page), current_order);
+ } else if (page > page_end) {
+ pr_warn("Find page end smaller than page.");
+ }
+}
+
+static inline void reserve_user_normal_pages(struct page *page)
+{
+ atomic_inc(&page->_refcount);
+ reserve_page_from_buddy(1, page);
+}
+
+static void init_huge_pmd_pages(struct page *head_page)
+{
+ int i = 0;
+ struct page *page = head_page;
+
+ __set_bit(PG_head, &page->flags);
+ __set_bit(PG_active, &page->flags);
+ atomic_set(&page->_refcount, 1);
+ page++;
+ i++;
+ page->compound_head = (unsigned long)head_page + 1;
+ page->compound_dtor = HUGETLB_PAGE_DTOR + 1;
+ page->compound_order = HPAGE_PMD_ORDER;
+ page++;
+ i++;
+ page->compound_head = (unsigned long)head_page + 1;
+ i++;
+ INIT_LIST_HEAD(&(page->deferred_list));
+ for (; i < HPAGE_PMD_NR; i++) {
+ page = head_page + i;
+ page->compound_head = (unsigned long)head_page + 1;
+ }
+}
+
+static inline void reserve_user_huge_pmd_pages(struct page *page)
+{
+ atomic_inc(&page->_refcount);
+ reserve_page_from_buddy((1 << HPAGE_PMD_ORDER), page);
+ init_huge_pmd_pages(page);
+}
+
+int reserve_user_map_pages_fail;
+
+void free_user_map_pages(unsigned int pid_index, unsigned int entry_index, unsigned int page_index)
+{
+ unsigned int i, j, index, order;
+ struct page_map_info *pmi;
+ struct page_map_entry *pme;
+ struct page *page;
+ unsigned long phy_addr;
+
+ for (index = 0; index < pid_index; index++) {
+ pmi = &(user_space_reserve_start[index]);
+ pme = pmi->pme;
+ for (i = 0; i < pmi->entry_num; i++) {
+ for (j = 0; j < pme->nr_pages; j++) {
+ order = pme->is_huge_page ? HPAGE_PMD_ORDER : 0;
+ phy_addr = pme->phy_addr_array[j];
+ if (phy_addr) {
+ page = phys_to_page(phy_addr);
+ if (!(page->flags & PAGE_FLAGS_CHECK_RESERVED)) {
+ __free_pages(page, order);
+ pme->phy_addr_array[j] = 0;
+ }
+ }
+ }
+ pme = (struct page_map_entry *)next_pme(pme);
+ }
+ }
+ pmi = &(user_space_reserve_start[index]);
+ pme = pmi->pme;
+ for (i = 0; i < entry_index; i++) {
+ for (j = 0; j < pme->nr_pages; j++) {
+ order = pme->is_huge_page ? HPAGE_PMD_ORDER : 0;
+ phy_addr = pme->phy_addr_array[j];
+ if (phy_addr) {
+ page = phys_to_page(phy_addr);
+ if (!(page->flags & PAGE_FLAGS_CHECK_RESERVED)) {
+ __free_pages(page, order);
+ pme->phy_addr_array[j] = 0;
+ }
+ }
+ }
+ pme = (struct page_map_entry *)next_pme(pme);
+ }
+ for (j = 0; j < page_index; j++) {
+ order = pme->is_huge_page ? HPAGE_PMD_ORDER : 0;
+ phy_addr = pme->phy_addr_array[j];
+ if (phy_addr) {
+ page = phys_to_page(phy_addr);
+ if (!(page->flags & PAGE_FLAGS_CHECK_RESERVED)) {
+ __free_pages(page, order);
+ pme->phy_addr_array[j] = 0;
+ }
+ }
+ }
+}
+
+bool check_redirect_end_valid(struct redirect_info *redirect_start,
+ unsigned long max_redirect_page_num)
+{
+ unsigned long redirect_end;
+
+ redirect_end = ((unsigned long)(redirect_start + 1) +
+ max_redirect_page_num * sizeof(unsigned int));
+ if (redirect_end > redirect_space_start + redirect_space_size)
+ return false;
+ return true;
+}
+
+static void reserve_user_space_map_pages(void)
+{
+ struct page_map_info *pmi;
+ struct page_map_entry *pme;
+ unsigned int i, j, index;
+ struct page *page;
+ unsigned long flags;
+ unsigned long phy_addr;
+ unsigned long redirect_pages = 0;
+ struct redirect_info *redirect_start = (struct redirect_info *)redirect_space_start;
+
+ if (!user_space_reserve_start || !redirect_start)
+ return;
+ spin_lock_irqsave(&page_map_entry_lock, flags);
+ for (index = 0; index < pin_pid_num; index++) {
+ pmi = &(user_space_reserve_start[index]);
+ pme = pmi->pme;
+ for (i = 0; i < pmi->entry_num; i++) {
+ redirect_pages = 0;
+ if (!check_redirect_end_valid(redirect_start, pme->nr_pages))
+ redirect_start = NULL;
+ for (j = 0; j < pme->nr_pages; j++) {
+ phy_addr = pme->phy_addr_array[j];
+ if (!phy_addr)
+ continue;
+ page = phys_to_page(phy_addr);
+ if (atomic_read(&page->_refcount)) {
+ if ((page->flags & PAGE_FLAGS_CHECK_RESERVED)
+ && !pme->redirect_start)
+ pme->redirect_start =
+ (unsigned long)redirect_start;
+ if (redirect_start &&
+ (page->flags & PAGE_FLAGS_CHECK_RESERVED)) {
+ redirect_start->redirect_index[redirect_pages] = j;
+ redirect_pages++;
+ continue;
+ } else {
+ reserve_user_map_pages_fail = 1;
+ pr_warn("Page %pK refcount %d large than zero, no need reserve.\n",
+ page, atomic_read(&page->_refcount));
+ goto free_pages;
+ }
+ }
+ if (!pme->is_huge_page)
+ reserve_user_normal_pages(page);
+ else
+ reserve_user_huge_pmd_pages(page);
+ }
+ pme = (struct page_map_entry *)next_pme(pme);
+ if (redirect_pages && redirect_start) {
+ redirect_start->redirect_pages = redirect_pages;
+ redirect_start = (struct redirect_info *)(
+ (unsigned long)(redirect_start + 1) +
+ redirect_start->redirect_pages * sizeof(unsigned int));
+ }
+ }
+ }
+ spin_unlock_irqrestore(&page_map_entry_lock, flags);
+ return;
+free_pages:
+ free_user_map_pages(index, i, j);
+ spin_unlock_irqrestore(&page_map_entry_lock, flags);
+}
+
+
+int calculate_pin_mem_digest(struct pin_mem_dump_info *pmdi, char *digest)
+{
+ int i;
+ struct sha256_state sctx;
+
+ if (!digest)
+ digest = pmdi->sha_digest;
+ sha256_init(&sctx);
+ sha256_update(&sctx, (unsigned char *)(&(pmdi->magic)),
+ sizeof(struct pin_mem_dump_info) - SHA256_DIGEST_SIZE);
+ for (i = 0; i < pmdi->pin_pid_num; i++) {
+ sha256_update(&sctx, (unsigned char *)(&(pmdi->pmi_array[i])),
+ sizeof(struct page_map_info));
+ }
+ sha256_final(&sctx, digest);
+ return 0;
+}
+
+static int check_sha_digest(struct pin_mem_dump_info *pmdi)
+{
+ int ret = 0;
+ char digest[SHA256_DIGEST_SIZE] = {0};
+
+ ret = calculate_pin_mem_digest(pmdi, digest);
+ if (ret) {
+ pr_warn("calculate pin mem digest fail:%d\n", ret);
+ return ret;
+ }
+ if (memcmp(pmdi->sha_digest, digest, SHA256_DIGEST_SIZE)) {
+ pr_warn("pin mem dump info sha256 digest match error!\n");
+ return -EFAULT;
+ }
+ return ret;
+}
+
+/*
+ * The whole page map entry collection process must be sequential.
+ * user_space_reserve_start points to the first page map info of
+ * the first dumped task, and page_map_entry_start points to
+ * the first page map entry of the first dumped vma.
+ */
+static void init_page_map_info(struct pin_mem_dump_info *pmdi, unsigned long map_len)
+{
+ if (pin_mem_dump_start || !max_pin_pid_num) {
+ pr_warn("pin page map already init or max_pin_pid_num not set.\n");
+ return;
+ }
+ if (map_len < sizeof(struct pin_mem_dump_info) +
+ max_pin_pid_num * sizeof(struct page_map_info) + redirect_space_size) {
+ pr_warn("pin memory reserved memblock too small.\n");
+ return;
+ }
+ if ((pmdi->magic != PIN_MEM_DUMP_MAGIC) || (pmdi->pin_pid_num > max_pin_pid_num) ||
+ check_sha_digest(pmdi))
+ memset(pmdi, 0, sizeof(struct pin_mem_dump_info));
+ pin_mem_dump_start = pmdi;
+ pin_pid_num = pmdi->pin_pid_num;
+ pr_info("pin_pid_num: %d\n", pin_pid_num);
+ pin_pid_num_addr = &(pmdi->pin_pid_num);
+ user_space_reserve_start =
+ (struct page_map_info *)pmdi->pmi_array;
+ page_map_entry_start =
+ (struct page_map_entry *)(user_space_reserve_start + max_pin_pid_num);
+ page_map_entry_end = (unsigned long)pmdi + map_len - redirect_space_size;
+ redirect_space_start = page_map_entry_end;
+ if (pin_pid_num > 0)
+ reserve_user_space_map_pages();
+}
+
+int finish_pin_mem_dump(void)
+{
+ int ret;
+
+ pin_mem_dump_start->magic = PIN_MEM_DUMP_MAGIC;
+ memset(pin_mem_dump_start->sha_digest, 0, SHA256_DIGEST_SIZE);
+ ret = calculate_pin_mem_digest(pin_mem_dump_start, NULL);
+ if (ret) {
+ pr_warn("calculate pin mem digest fail:%d\n", ret);
+ return ret;
+ }
+ return ret;
+}
+
+int collect_pmd_huge_pages(struct task_struct *task,
+ unsigned long start_addr, unsigned long end_addr, struct page_map_entry *pme)
+{
+ long res;
+ int index = 0;
+ unsigned long start = start_addr;
+ struct page *temp_page;
+
+ while (start < end_addr) {
+ temp_page = NULL;
+ res = get_user_pages_remote(task->mm, start, 1,
+ FOLL_TOUCH | FOLL_GET, &temp_page, NULL, NULL);
+ if (!res) {
+ pr_warn("Get huge page for addr(%lx) fail.", start);
+ return COLLECT_PAGES_FAIL;
+ }
+ if (PageHead(temp_page)) {
+ start += HPAGE_PMD_SIZE;
+ pme->phy_addr_array[index] = page_to_phys(temp_page);
+ index++;
+ } else {
+ pme->nr_pages = index;
+ atomic_dec(&((temp_page)->_refcount));
+ return COLLECT_PAGES_NEED_CONTINUE;
+ }
+ }
+ pme->nr_pages = index;
+ return COLLECT_PAGES_FINISH;
+}
+
+int collect_normal_pages(struct task_struct *task,
+ unsigned long start_addr, unsigned long end_addr, struct page_map_entry *pme)
+{
+ int res;
+ unsigned long next;
+ unsigned long i, nr_pages;
+ struct page *tmp_page;
+ unsigned long *phy_addr_array = pme->phy_addr_array;
+ struct page **page_array = (struct page **)pme->phy_addr_array;
+
+ next = (start_addr & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE;
+ next = (next > end_addr) ? end_addr : next;
+ pme->nr_pages = 0;
+ while (start_addr < next) {
+ nr_pages = (PAGE_ALIGN(next) - start_addr) / PAGE_SIZE;
+ res = get_user_pages_remote(task->mm, start_addr, 1,
+ FOLL_TOUCH | FOLL_GET, &tmp_page, NULL, NULL);
+ if (!res) {
+ pr_warn("Get user page of %lx fail.\n", start_addr);
+ return COLLECT_PAGES_FAIL;
+ }
+ if (PageHead(tmp_page)) {
+ atomic_dec(&(tmp_page->_refcount));
+ return COLLECT_PAGES_NEED_CONTINUE;
+ }
+ atomic_dec(&(tmp_page->_refcount));
+ if (PageTail(tmp_page)) {
+ start_addr = next;
+ pme->virt_addr = start_addr;
+ next = (next + HPAGE_PMD_SIZE) > end_addr ?
+ end_addr : (next + HPAGE_PMD_SIZE);
+ continue;
+ }
+ res = get_user_pages_remote(task->mm, start_addr, nr_pages,
+ FOLL_TOUCH | FOLL_GET, page_array, NULL, NULL);
+ if (!res) {
+ pr_warn("Get user pages of %lx fail.\n", start_addr);
+ return COLLECT_PAGES_FAIL;
+ }
+ for (i = 0; i < nr_pages; i++)
+ phy_addr_array[i] = page_to_phys(page_array[i]);
+ pme->nr_pages += nr_pages;
+ page_array += nr_pages;
+ phy_addr_array += nr_pages;
+ start_addr = next;
+ next = (next + HPAGE_PMD_SIZE) > end_addr ? end_addr : (next + HPAGE_PMD_SIZE);
+ }
+ return COLLECT_PAGES_FINISH;
+}
+
+/* Callers must ensure that the pinned memory belongs to an anonymous vma. */
+int pin_mem_area(struct task_struct *task, struct mm_struct *mm,
+ unsigned long start_addr, unsigned long end_addr)
+{
+ int pid, ret;
+ int is_huge_page = false;
+ unsigned int page_size;
+ unsigned long nr_pages, flags;
+ struct page_map_entry *pme;
+ struct page_map_info *pmi;
+ struct vm_area_struct *vma;
+ unsigned long i;
+ struct page *tmp_page;
+
+ if (!page_map_entry_start
+ || !task || !mm
+ || start_addr >= end_addr)
+ return -EFAULT;
+
+ pid = task->pid;
+ spin_lock_irqsave(&page_map_entry_lock, flags);
+ nr_pages = ((end_addr - start_addr) / PAGE_SIZE);
+ if ((unsigned long)page_map_entry_start + nr_pages * sizeof(struct page *) >=
+ page_map_entry_end) {
+ pr_warn("Page map entry use up!\n");
+ ret = -EFAULT;
+ goto finish;
+ }
+ vma = find_extend_vma(mm, start_addr);
+ if (!vma) {
+ pr_warn("Find no match vma!\n");
+ ret = -EFAULT;
+ goto finish;
+ }
+ if (start_addr == (start_addr & HPAGE_PMD_MASK) &&
+ transparent_hugepage_enabled(vma)) {
+ page_size = HPAGE_PMD_SIZE;
+ is_huge_page = true;
+ } else {
+ page_size = PAGE_SIZE;
+ }
+ pme = page_map_entry_start;
+ pme->virt_addr = start_addr;
+ pme->redirect_start = 0;
+ pme->is_huge_page = is_huge_page;
+ memset(pme->phy_addr_array, 0, nr_pages * sizeof(unsigned long));
+ down_write(&mm->mmap_lock);
+ if (!is_huge_page) {
+ ret = collect_normal_pages(task, start_addr, end_addr, pme);
+ if (ret != COLLECT_PAGES_FAIL && !pme->nr_pages) {
+ if (ret == COLLECT_PAGES_FINISH) {
+ ret = 0;
+ up_write(&mm->mmap_lock);
+ goto finish;
+ }
+ pme->is_huge_page = true;
+ page_size = HPAGE_PMD_SIZE;
+ ret = collect_pmd_huge_pages(task, pme->virt_addr, end_addr, pme);
+ }
+ } else {
+ ret = collect_pmd_huge_pages(task, start_addr, end_addr, pme);
+ if (ret != COLLECT_PAGES_FAIL && !pme->nr_pages) {
+ if (ret == COLLECT_PAGES_FINISH) {
+ ret = 0;
+ up_write(&mm->mmap_lock);
+ goto finish;
+ }
+ pme->is_huge_page = false;
+ page_size = PAGE_SIZE;
+ ret = collect_normal_pages(task, pme->virt_addr, end_addr, pme);
+ }
+ }
+ up_write(&mm->mmap_lock);
+ if (ret == COLLECT_PAGES_FAIL) {
+ ret = -EFAULT;
+ goto finish;
+ }
+
+ /* check for zero pages */
+ for (i = 0; i < pme->nr_pages; i++) {
+ tmp_page = phys_to_page(pme->phy_addr_array[i]);
+ if (!pme->is_huge_page) {
+ if (page_to_pfn(tmp_page) == my_zero_pfn(pme->virt_addr + i * PAGE_SIZE))
+ pme->phy_addr_array[i] = 0;
+ } else if (is_huge_zero_page(tmp_page))
+ pme->phy_addr_array[i] = 0;
+ }
+
+ page_map_entry_start = (struct page_map_entry *)(next_pme(pme));
+ pmi = get_page_map_info(pid);
+ if (!pmi)
+ pmi = create_page_map_info(pid);
+ if (!pmi) {
+ pr_warn("Create page map info fail for pid: %d!\n", pid);
+ ret = -EFAULT;
+ goto finish;
+ }
+ if (!pmi->pme)
+ pmi->pme = pme;
+ pmi->entry_num++;
+ spin_unlock_irqrestore(&page_map_entry_lock, flags);
+ if (ret == COLLECT_PAGES_NEED_CONTINUE)
+ ret = pin_mem_area(task, mm, pme->virt_addr + pme->nr_pages * page_size, end_addr);
+ return ret;
+finish:
+ spin_unlock_irqrestore(&page_map_entry_lock, flags);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(pin_mem_area);
+
+vm_fault_t remap_normal_pages(struct mm_struct *mm, struct vm_area_struct *vma,
+ struct page_map_entry *pme)
+{
+ int ret;
+ unsigned int j, i;
+ pgd_t *pgd;
+ p4d_t *p4d;
+ pmd_t *pmd;
+ pud_t *pud;
+ struct page *page, *new;
+ unsigned long address;
+ unsigned long phy_addr;
+ unsigned int redirect_pages = 0;
+ struct redirect_info *redirect_start;
+
+ redirect_start = (struct redirect_info *)pme->redirect_start;
+ for (j = 0; j < pme->nr_pages; j++) {
+ address = pme->virt_addr + j * PAGE_SIZE;
+ phy_addr = pme->phy_addr_array[j];
+ if (!phy_addr)
+ continue;
+ page = phys_to_page(phy_addr);
+ if (page_to_pfn(page) == my_zero_pfn(address)) {
+ pme->phy_addr_array[j] = 0;
+ continue;
+ }
+ pme->phy_addr_array[j] = 0;
+ if (redirect_start && (redirect_pages < redirect_start->redirect_pages) &&
+ (j == redirect_start->redirect_index[redirect_pages])) {
+ new = alloc_zeroed_user_highpage_movable(vma, address);
+ if (!new) {
+ pr_warn("Redirect alloc page fail\n");
+ continue;
+ }
+ copy_page(page_to_virt(new), phys_to_virt(phy_addr));
+ page = new;
+ redirect_pages++;
+ }
+ page->mapping = NULL;
+ pgd = pgd_offset(mm, address);
+ p4d = p4d_alloc(mm, pgd, address);
+ if (!p4d) {
+ ret = VM_FAULT_OOM;
+ goto free;
+ }
+ pud = pud_alloc(mm, p4d, address);
+ if (!pud) {
+ ret = VM_FAULT_OOM;
+ goto free;
+ }
+ pmd = pmd_alloc(mm, pud, address);
+ if (!pmd) {
+ ret = VM_FAULT_OOM;
+ goto free;
+ }
+ ret = do_anon_page_remap(vma, address, pmd, page);
+ if (ret)
+ goto free;
+ }
+ return 0;
+free:
+ for (i = j; i < pme->nr_pages; i++) {
+ phy_addr = pme->phy_addr_array[i];
+ if (phy_addr) {
+ __free_page(phys_to_page(phy_addr));
+ pme->phy_addr_array[i] = 0;
+ }
+ }
+ return ret;
+}
+
+static inline gfp_t get_hugepage_gfpmask(struct vm_area_struct *vma)
+{
+ const bool vma_madvised = !!(vma->vm_flags & VM_HUGEPAGE);
+
+ if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_DIRECT_FLAG, &transparent_hugepage_flags))
+ return GFP_TRANSHUGE | (vma_madvised ? 0 : __GFP_NORETRY);
+ if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_FLAG, &transparent_hugepage_flags))
+ return GFP_TRANSHUGE_LIGHT | __GFP_KSWAPD_RECLAIM;
+ if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_OR_MADV_FLAG, &transparent_hugepage_flags))
+ return GFP_TRANSHUGE_LIGHT | (vma_madvised ? __GFP_DIRECT_RECLAIM :
+ __GFP_KSWAPD_RECLAIM);
+ if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_REQ_MADV_FLAG, &transparent_hugepage_flags))
+ return GFP_TRANSHUGE_LIGHT | (vma_madvised ? __GFP_DIRECT_RECLAIM :
+ 0);
+ return GFP_TRANSHUGE_LIGHT;
+}
+
+vm_fault_t remap_huge_pmd_pages(struct mm_struct *mm, struct vm_area_struct *vma,
+ struct page_map_entry *pme)
+{
+ int ret;
+ unsigned int j, i;
+ pgd_t *pgd;
+ p4d_t *p4d;
+ pmd_t *pmd;
+ pud_t *pud;
+ gfp_t gfp;
+ struct page *page, *new;
+ unsigned long address;
+ unsigned long phy_addr;
+ unsigned int redirect_pages = 0;
+ struct redirect_info *redirect_start;
+
+ redirect_start = (struct redirect_info *)pme->redirect_start;
+ for (j = 0; j < pme->nr_pages; j++) {
+ address = pme->virt_addr + j * HPAGE_PMD_SIZE;
+ phy_addr = pme->phy_addr_array[j];
+ if (!phy_addr)
+ continue;
+ page = phys_to_page(phy_addr);
+ if (is_huge_zero_page(page)) {
+ pme->phy_addr_array[j] = 0;
+ continue;
+ }
+ pme->phy_addr_array[j] = 0;
+ if (redirect_start && (redirect_pages < redirect_start->redirect_pages) &&
+ (j == redirect_start->redirect_index[redirect_pages])) {
+ gfp = get_hugepage_gfpmask(vma);
+ new = alloc_hugepage_vma(gfp, vma, address, HPAGE_PMD_ORDER);
+ if (!new) {
+ pr_warn("Redirect alloc huge page fail\n");
+ continue;
+ }
+ memcpy(page_to_virt(new), phys_to_virt(phy_addr), HPAGE_PMD_SIZE);
+ page = new;
+ redirect_pages++;
+ }
+ pgd = pgd_offset(mm, address);
+ p4d = p4d_alloc(mm, pgd, address);
+ if (!p4d) {
+ ret = VM_FAULT_OOM;
+ goto free;
+ }
+ pud = pud_alloc(mm, p4d, address);
+ if (!pud) {
+ ret = VM_FAULT_OOM;
+ goto free;
+ }
+ pmd = pmd_alloc(mm, pud, address);
+ if (!pmd) {
+ ret = VM_FAULT_OOM;
+ goto free;
+ }
+ ret = do_anon_huge_page_remap(vma, address, pmd, page);
+ if (ret)
+ goto free;
+ }
+ return 0;
+free:
+ for (i = j; i < pme->nr_pages; i++) {
+ phy_addr = pme->phy_addr_array[i];
+ if (phy_addr) {
+ page = phys_to_page(phy_addr);
+ if (!(page->flags & PAGE_FLAGS_CHECK_RESERVED)) {
+ __free_pages(page, HPAGE_PMD_ORDER);
+ pme->phy_addr_array[i] = 0;
+ }
+ }
+ }
+ return ret;
+}
+
+static void free_unmap_pages(struct page_map_info *pmi,
+ struct page_map_entry *pme,
+ unsigned int index)
+{
+ unsigned int i, j;
+ unsigned long phy_addr;
+ unsigned int order;
+ struct page *page;
+
+ pme = (struct page_map_entry *)(next_pme(pme));
+ for (i = index; i < pmi->entry_num; i++) {
+ for (j = 0; j < pme->nr_pages; j++) {
+ phy_addr = pme->phy_addr_array[j];
+ if (phy_addr) {
+ page = phys_to_page(phy_addr);
+ order = pme->is_huge_page ? HPAGE_PMD_ORDER : 0;
+ if (!(page->flags & PAGE_FLAGS_CHECK_RESERVED)) {
+ __free_pages(page, order);
+ pme->phy_addr_array[j] = 0;
+ }
+ }
+ }
+ pme = (struct page_map_entry *)(next_pme(pme));
+ }
+}
+
+vm_fault_t do_mem_remap(int pid, struct mm_struct *mm)
+{
+ unsigned int i = 0;
+ vm_fault_t ret = 0;
+ struct vm_area_struct *vma;
+ struct page_map_info *pmi;
+ struct page_map_entry *pme;
+ unsigned long flags;
+
+ if (reserve_user_map_pages_fail)
+ return -EFAULT;
+ pmi = get_page_map_info(pid);
+ if (!pmi)
+ return -EFAULT;
+
+ spin_lock_irqsave(&page_map_entry_lock, flags);
+ pmi->disable_free_page = true;
+ spin_unlock_irqrestore(&page_map_entry_lock, flags);
+ down_write(&mm->mmap_lock);
+ pme = pmi->pme;
+ vma = mm->mmap;
+ while ((i < pmi->entry_num) && (vma != NULL)) {
+ if (pme->virt_addr >= vma->vm_start && pme->virt_addr < vma->vm_end) {
+ i++;
+ if (!vma_is_anonymous(vma)) {
+ pme = (struct page_map_entry *)(next_pme(pme));
+ continue;
+ }
+ if (!pme->is_huge_page) {
+ ret = remap_normal_pages(mm, vma, pme);
+ if (ret < 0)
+ goto free;
+ } else {
+ ret = remap_huge_pmd_pages(mm, vma, pme);
+ if (ret < 0)
+ goto free;
+ }
+ pme = (struct page_map_entry *)(next_pme(pme));
+ } else {
+ vma = vma->vm_next;
+ }
+ }
+ up_write(&mm->mmap_lock);
+ return 0;
+free:
+ free_unmap_pages(pmi, pme, i);
+ up_write(&mm->mmap_lock);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(do_mem_remap);
+
+#if defined(CONFIG_ARM64)
+void init_reserve_page_map(unsigned long map_addr, unsigned long map_size)
+{
+ void *addr;
+
+ if (!map_addr || !map_size)
+ return;
+ addr = phys_to_virt(map_addr);
+ init_page_map_info((struct pin_mem_dump_info *)addr, map_size);
+}
+#else
+void init_reserve_page_map(unsigned long map_addr, unsigned long map_size)
+{
+}
+#endif
+
+static void free_all_reserved_pages(void)
+{
+ unsigned int i, j, index, order;
+ struct page_map_info *pmi;
+ struct page_map_entry *pme;
+ struct page *page;
+ unsigned long phy_addr;
+
+ if (!user_space_reserve_start || reserve_user_map_pages_fail)
+ return;
+
+ for (index = 0; index < pin_pid_num; index++) {
+ pmi = &(user_space_reserve_start[index]);
+ if (pmi->disable_free_page)
+ continue;
+ pme = pmi->pme;
+ for (i = 0; i < pmi->entry_num; i++) {
+ for (j = 0; j < pme->nr_pages; j++) {
+ order = pme->is_huge_page ? HPAGE_PMD_ORDER : 0;
+ phy_addr = pme->phy_addr_array[j];
+ if (phy_addr) {
+ page = phys_to_page(phy_addr);
+ if (!(page->flags & PAGE_FLAGS_CHECK_RESERVED)) {
+ __free_pages(page, order);
+ pme->phy_addr_array[j] = 0;
+ }
+ }
+ }
+ pme = (struct page_map_entry *)next_pme(pme);
+ }
+ }
+}
+
+/* Clear all pin memory record. */
+void clear_pin_memory_record(void)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&page_map_entry_lock, flags);
+ free_all_reserved_pages();
+ if (pin_pid_num_addr) {
+ *pin_pid_num_addr = 0;
+ pin_pid_num = 0;
+ page_map_entry_start = (struct page_map_entry *)__page_map_entry_start;
+ }
+ spin_unlock_irqrestore(&page_map_entry_lock, flags);
+}
+EXPORT_SYMBOL_GPL(clear_pin_memory_record);
+
+#endif /* CONFIG_PIN_MEMORY */
--
2.9.5
Dear members of the Linux kernel group:
Hello! I am Deng Weijin, an employee of Beijing Tuolinsi Software Co., Ltd.
I have a good understanding of Linux kernel process management, file systems,
and device drivers, and I am familiar with BPF. I am currently devoting myself
to studying BPF along with Linux kernel process management, memory management,
and file systems. I love operating systems and programming, work rigorously,
and like to get to the bottom of problems. Encouraged by Chen Qide, I am
applying to join the Linux kernel group, and I hope the application will pass.
Thank you very much; I will strive to do what I can for the community.
Looking forward to your reply!
Deng Weijin 2021.3.2
From: zhuling <zhuling8(a)huawei.com>
hulk inclusion
category: feature
bugzilla: 48159
CVE: NA
Enable e820_pmem on arm64:
Use memmap=nn[KMG]!ss[KMG] to reserve memory for persistent storage
across a kernel restart or update; the data in PMEM will not be lost
and can be loaded faster. This is a general feature.
To use this feature:
1. Reserve memory: add a memmap entry to the kernel command line in
grub.cfg, memmap=nn[KMG]!ss[KMG], e.g. memmap=100K!0x1a0000000.
2. Load the driver: modprobe nd_e820.
3. Check the pmem device in /dev, e.g. /dev/pmem0.
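A minimal userspace sketch of step 3 (not part of this patch), assuming
the reserved region probes as /dev/pmem0 (the node name depends on probe
order); data written this way is expected to survive a kernel restart or
update:

/* sketch: write a marker into the legacy pmem device and read it back */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[16] = {0};
	int fd = open("/dev/pmem0", O_RDWR);	/* assumed device node */

	if (fd < 0) {
		perror("open /dev/pmem0");
		return 1;
	}
	if (pwrite(fd, "persist-test", 13, 0) != 13)	/* 12 chars + NUL */
		perror("pwrite");
	if (pread(fd, buf, 13, 0) == 13)
		printf("read back: %s\n", buf);
	close(fd);
	return 0;
}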
Signed-off-by: zhuling <zhuling8(a)huawei.com>
---
arch/arm64/Kconfig | 24 ++++++++++
arch/arm64/kernel/Makefile | 1 +
arch/arm64/kernel/pmem.c | 35 ++++++++++++++
arch/arm64/kernel/setup.c | 6 +++
arch/arm64/mm/init.c | 98 ++++++++++++++++++++++++++++++++++++++
drivers/nvdimm/Makefile | 1 +
include/linux/mm.h | 4 ++
7 files changed, 169 insertions(+)
create mode 100644 arch/arm64/kernel/pmem.c
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index b9c56543c..f1e05d9d2 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1141,6 +1141,30 @@ config XEN_DOM0
def_bool y
depends on XEN
+config ARM64_PMEM_LEGACY_DEVICE
+ bool
+
+config ARM64_PMEM_RESERVE
+ bool "reserve memory for persistent storage"
+ default y
+ help
+ Use memmap=nn[KMG]!ss[KMG] (e.g. memmap=100K!0x1a0000000) to reserve
+ memory for persistent storage.
+
+ Say y here to enable this feature.
+
+config ARM64_PMEM_LEGACY
+ tristate "create persistent storage"
+ depends on ARM64_PMEM_RESERVE
+ depends on BLK_DEV
+ select ARM64_PMEM_LEGACY_DEVICE
+ select LIBNVDIMM
+ help
+ Use reserved memory for persistent storage when the kernel restart
+ or update. the data in PMEM will not be lost and can be loaded faster.
+
+ Say y if unsure.
+
config XEN
bool "Xen guest support on ARM64"
depends on ARM64 && OF
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 2621d5c2b..c363639b8 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -63,6 +63,7 @@ obj-$(CONFIG_ARM_SDE_INTERFACE) += sdei.o
obj-$(CONFIG_ARM64_PTR_AUTH) += pointer_auth.o
obj-$(CONFIG_SHADOW_CALL_STACK) += scs.o
obj-$(CONFIG_ARM64_MTE) += mte.o
+obj-$(CONFIG_ARM64_PMEM_LEGACY_DEVICE) += pmem.o
obj-y += vdso/ probes/
obj-$(CONFIG_COMPAT_VDSO) += vdso32/
diff --git a/arch/arm64/kernel/pmem.c b/arch/arm64/kernel/pmem.c
new file mode 100644
index 000000000..16eaf706f
--- /dev/null
+++ b/arch/arm64/kernel/pmem.c
@@ -0,0 +1,35 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright(c) 2021 Huawei Technologies Co., Ltd
+ *
+ * Derived from x86 and arm64 implement PMEM.
+ */
+#include <linux/platform_device.h>
+#include <linux/init.h>
+#include <linux/ioport.h>
+#include <linux/module.h>
+
+static int found(struct resource *res, void *data)
+{
+ return 1;
+}
+
+static int __init register_e820_pmem(void)
+{
+ struct platform_device *pdev;
+ int rc;
+
+ rc = walk_iomem_res_desc(IORES_DESC_PERSISTENT_MEMORY_LEGACY,
+ IORESOURCE_MEM, 0, -1, NULL, found);
+ if (rc <= 0)
+ return 0;
+
+ /*
+ * See drivers/nvdimm/e820.c for the implementation, this is
+ * simply here to trigger the module to load on demand.
+ */
+ pdev = platform_device_alloc("e820_pmem", -1);
+
+ return platform_device_add(pdev);
+}
+device_initcall(register_e820_pmem);
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 6aff30de8..7f506036d 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -255,6 +255,12 @@ static void __init request_standard_resources(void)
request_resource(res, &crashk_res);
#endif
}
+
+#ifdef CONFIG_ARM64_PMEM_RESERVE
+ if (pmem_res.end && pmem_res.start)
+ request_resource(&iomem_resource, &pmem_res);
+#endif
+
}
static int __init reserve_memblock_reserved_regions(void)
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 794f992cb..e4dc19145 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -63,6 +63,18 @@ EXPORT_SYMBOL(memstart_addr);
phys_addr_t arm64_dma_phys_limit __ro_after_init;
phys_addr_t arm64_dma32_phys_limit __ro_after_init;
+static unsigned long long pmem_size, pmem_start;
+
+#ifdef CONFIG_ARM64_PMEM_RESERVE
+struct resource pmem_res = {
+ .name = "Persistent Memory (legacy)",
+ .start = 0,
+ .end = 0,
+ .flags = IORESOURCE_MEM,
+ .desc = IORES_DESC_PERSISTENT_MEMORY_LEGACY
+};
+#endif
+
#ifndef CONFIG_KEXEC_CORE
static void __init reserve_crashkernel(void)
{
@@ -236,6 +248,88 @@ static void __init fdt_enforce_memory_region(void)
memblock_add(usable_rgns[1].base, usable_rgns[1].size);
}
+static int __init is_mem_valid(unsigned long long mem_size, unsigned long long mem_start)
+{
+ if (!memblock_is_region_memory(mem_start, mem_size)) {
+ pr_warn("cannot reserve mem: region is not memory!\n");
+ return -EINVAL;
+ }
+
+ if (memblock_is_region_reserved(mem_start, mem_size)) {
+ pr_warn("cannot reserve mem: region overlaps reserved memory!\n");
+ return -EINVAL;
+ }
+
+ if (!IS_ALIGNED(mem_start, SZ_2M)) {
+ pr_warn("cannot reserve mem: base address is not 2MB aligned!\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int __init parse_memmap_one(char *p)
+{
+ char *oldp;
+ phys_addr_t start_at, mem_size;
+ int ret;
+
+ if (!p)
+ return -EINVAL;
+
+ oldp = p;
+ mem_size = memparse(p, &p);
+ if (p == oldp)
+ return -EINVAL;
+
+ if (!mem_size)
+ return -EINVAL;
+
+ mem_size = PAGE_ALIGN(mem_size);
+
+ if (*p == '!') {
+ start_at = memparse(p+1, &p);
+
+ if (is_mem_valid(mem_size, start_at) != 0)
+ return -EINVAL;
+
+ pr_info("pmem reserved: 0x%016llx - 0x%016llx (%lld MB)\n",
+ start_at, start_at + mem_size, mem_size >> 20);
+ pmem_start = start_at;
+ pmem_size = mem_size;
+ } else
+ pr_info("Unrecognized memmap option, please check the parameter.\n");
+
+ return *p == '\0' ? 0 : -EINVAL;
+}
+
+static int __init parse_memmap_opt(char *str)
+{
+ while (str) {
+ char *k = strchr(str, ',');
+
+ if (k)
+ *k++ = 0;
+
+ parse_memmap_one(str);
+ str = k;
+ }
+
+ return 0;
+}
+early_param("memmap", parse_memmap_opt);
+
+#ifdef CONFIG_ARM64_PMEM_RESERVE
+static void __init reserve_pmem(void)
+{
+ memblock_remove(pmem_start, pmem_size);
+ pr_info("pmem reserved: 0x%016llx - 0x%016llx (%lld MB)\n",
+ pmem_start, pmem_start + pmem_size, pmem_size >> 20);
+ pmem_res.start = pmem_start;
+ pmem_res.end = pmem_start + pmem_size - 1;
+}
+#endif
+
void __init arm64_memblock_init(void)
{
const s64 linear_region_size = BIT(vabits_actual - 1);
@@ -359,6 +453,10 @@ void __init arm64_memblock_init(void)
reserve_elfcorehdr();
+#ifdef CONFIG_ARM64_PMEM_RESERVE
+ reserve_pmem();
+#endif
+
high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
dma_contiguous_reserve(arm64_dma32_phys_limit);
diff --git a/drivers/nvdimm/Makefile b/drivers/nvdimm/Makefile
index 29203f3d3..b97760e9f 100644
--- a/drivers/nvdimm/Makefile
+++ b/drivers/nvdimm/Makefile
@@ -4,6 +4,7 @@ obj-$(CONFIG_BLK_DEV_PMEM) += nd_pmem.o
obj-$(CONFIG_ND_BTT) += nd_btt.o
obj-$(CONFIG_ND_BLK) += nd_blk.o
obj-$(CONFIG_X86_PMEM_LEGACY) += nd_e820.o
+obj-$(CONFIG_ARM64_PMEM_LEGACY) += nd_e820.o
obj-$(CONFIG_OF_PMEM) += of_pmem.o
obj-$(CONFIG_VIRTIO_PMEM) += virtio_pmem.o nd_virtio.o
diff --git a/include/linux/mm.h b/include/linux/mm.h
index cd5c31372..a5e50495e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -45,6 +45,10 @@ extern int sysctl_page_lock_unfairness;
void init_mm_internals(void);
+#ifdef CONFIG_ARM64_PMEM_RESERVE
+extern struct resource pmem_res;
+#endif
+
#ifndef CONFIG_NEED_MULTIPLE_NODES /* Don't use mapnrs, do it properly */
extern unsigned long max_mapnr;
--
2.19.1
[PATCH openEuler-21.03 2/2] pid: add pid reserve method for checkpoint and recover
by hejingxian 02 Mar '21
From: Jingxian He <hejingxian(a)huawei.com>
Date: Mon, 1 Mar 2021 17:44:59 +0800
Subject: [PATCH openEuler-21.03 2/2] pid: add pid reserve method for checkpoint and recover
hulk inclusion
category: feature
bugzilla: 48159
CVE: N/A
We record the pids of the dumped tasks in the reserved memory
and reserve those pids before the init task starts.
In the recovery process, we free the reserved pids and reallocate them for use.
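As a sketch of the restore side (not part of this patch): with the pids
reserved at boot, a restorer can reclaim a task's original pid through
clone3() with set_tid (available since Linux 5.5); the free_reserved_pid()
hook below releases the reservation when exactly that pid is requested.
The pid value passed in is only an example:

#define _GNU_SOURCE
#include <linux/sched.h>	/* struct clone_args */
#include <signal.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* create a child with a specific, previously reserved pid */
static pid_t clone_with_pid(pid_t want)
{
	struct clone_args args;
	pid_t tid = want;

	memset(&args, 0, sizeof(args));
	args.exit_signal = SIGCHLD;
	args.set_tid = (unsigned long)&tid;	/* array of one pid */
	args.set_tid_size = 1;
	return syscall(__NR_clone3, &args, sizeof(args));
}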
Signed-off-by: Jingxian He <hejingxian(a)huawei.com>
Reviewed-by: Wenliang He <hewenliang4(a)huawei.com>
Reviewed-by: Jing Xiangfeng <jingxiangfeng(a)huawei.com>
---
arch/arm64/configs/openeuler_defconfig | 1 +
include/linux/pin_mem.h | 6 ++++
kernel/pid.c | 10 +++++++
mm/Kconfig | 10 +++++++
mm/pin_mem.c | 51 ++++++++++++++++++++++++++++++++++
5 files changed, 78 insertions(+)
diff --git a/arch/arm64/configs/openeuler_defconfig b/arch/arm64/configs/openeuler_defconfig
index 76fda68..de6db02 100644
--- a/arch/arm64/configs/openeuler_defconfig
+++ b/arch/arm64/configs/openeuler_defconfig
@@ -1037,6 +1037,7 @@ CONFIG_FRAME_VECTOR=y
# CONFIG_READ_ONLY_THP_FOR_FS is not set
CONFIG_ARCH_HAS_PTE_SPECIAL=y
CONFIG_PIN_MEMORY=y
+CONFIG_PID_RESERVE=y
# end of Memory Management options
CONFIG_NET=y
diff --git a/include/linux/pin_mem.h b/include/linux/pin_mem.h
index bc8b03e..a9fe2ef 100644
--- a/include/linux/pin_mem.h
+++ b/include/linux/pin_mem.h
@@ -74,5 +74,11 @@ extern struct resource pin_memory_resource;
#endif
extern void init_reserve_page_map(unsigned long map_addr, unsigned long map_size);
+#ifdef CONFIG_PID_RESERVE
+extern bool is_need_reserve_pids(void);
+extern void free_reserved_pid(struct idr *idr, int pid);
+extern void reserve_pids(struct idr *idr, int pid_max);
+#endif
+
#endif /* CONFIG_PIN_MEMORY */
#endif /* _LINUX_PIN_MEMORY_H */
diff --git a/kernel/pid.c b/kernel/pid.c
index 4856818..32ab9ef 100644
--- a/kernel/pid.c
+++ b/kernel/pid.c
@@ -44,6 +44,9 @@
#include <linux/idr.h>
#include <net/sock.h>
#include <uapi/linux/pidfd.h>
+#ifdef CONFIG_PID_RESERVE
+#include <linux/pin_mem.h>
+#endif
struct pid init_struct_pid = {
.count = REFCOUNT_INIT(1),
@@ -209,6 +212,9 @@ struct pid *alloc_pid(struct pid_namespace *ns, pid_t *set_tid,
spin_lock_irq(&pidmap_lock);
if (tid) {
+#ifdef CONFIG_PID_RESERVE
+ free_reserved_pid(&tmp->idr, tid);
+#endif
nr = idr_alloc(&tmp->idr, NULL, tid,
tid + 1, GFP_ATOMIC);
/*
@@ -621,6 +627,10 @@ void __init pid_idr_init(void)
init_pid_ns.pid_cachep = KMEM_CACHE(pid,
SLAB_HWCACHE_ALIGN | SLAB_PANIC | SLAB_ACCOUNT);
+#ifdef CONFIG_PID_RESERVE
+ if (is_need_reserve_pids())
+ reserve_pids(&init_pid_ns.idr, pid_max);
+#endif
}
static struct file *__pidfd_fget(struct task_struct *task, int fd)
diff --git a/mm/Kconfig b/mm/Kconfig
index 930dc13..e27d2c6 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -868,3 +868,13 @@ config PIN_MEMORY
the corresponding physical pages mapping info in checkpoint,
and remap the physical pages to restore tasks in restore.
endmenu
+
+config PID_RESERVE
+ bool "Support for reserve pid"
+ depends on PIN_MEMORY
+ help
+ Say y here to enable the pid reserved feature for checkpoint.
+ and restore.
+ We record the pid of dump task in the reserve memory,
+ and reserve the pids before init task start. In restore process,
+ free the reserved pids and realloc them for use.
diff --git a/mm/pin_mem.c b/mm/pin_mem.c
index 0a143b6..a040853 100644
--- a/mm/pin_mem.c
+++ b/mm/pin_mem.c
@@ -947,4 +947,55 @@ void clear_pin_memory_record(void)
}
EXPORT_SYMBOL_GPL(clear_pin_memory_record);
+#ifdef CONFIG_PID_RESERVE
+struct idr *reserve_idr;
+
+/* test if there exist pin memory tasks */
+bool is_need_reserve_pids(void)
+{
+ return (pin_pid_num > 0);
+}
+
+void free_reserved_pid(struct idr *idr, int pid)
+{
+ unsigned int index;
+ struct page_map_info *pmi;
+
+ if (!max_pin_pid_num || idr != reserve_idr)
+ return;
+
+ for (index = 0; index < pin_pid_num; index++) {
+ pmi = &(user_space_reserve_start[index]);
+ if (pmi->pid == pid && pmi->pid_reserved) {
+ idr_remove(idr, pid);
+ return;
+ }
+ }
+}
+
+/* reserve pids for check point tasks which pinned memory */
+void reserve_pids(struct idr *idr, int pid_max)
+{
+ int alloc_pid;
+ unsigned int index;
+ struct page_map_info *pmi;
+
+ if (!max_pin_pid_num)
+ return;
+ reserve_idr = idr;
+ for (index = 0; index < pin_pid_num; index++) {
+ pmi = &(user_space_reserve_start[index]);
+ pmi->pid_reserved = true;
+ alloc_pid = idr_alloc(idr, NULL, pmi->pid, pid_max, GFP_ATOMIC);
+ if (alloc_pid != pmi->pid) {
+ if (alloc_pid > 0)
+ idr_remove(idr, alloc_pid);
+ pr_warn("Reserve pid (%d) fail, real pid is %d.\n", alloc_pid, pmi->pid);
+ pmi->pid_reserved = false;
+ continue;
+ }
+ }
+}
+#endif /* CONFIG_PID_RESERVE */
+
#endif /* CONFIG_PIN_MEMORY */
--
2.9.5
[PATCH openEuler-21.03 1/2] mm: add pin memory method for checkpoint and restore
by hejingxian@huawei.com 02 Mar '21
From: Jingxian He <hejingxian(a)huawei.com>
hulk inclusion
category: feature
bugzilla: 48159
CVE: N/A
We can use the checkpoint and restore in userspace (criu) method to
dump and restore tasks when updating the kernel.
Currently, criu needs to dump all memory data of tasks to files.
When the memory size is very large (larger than 1G),
dumping the data takes a long time (more than 1 min).
By pinning the memory data of tasks and collecting the corresponding
physical page mapping info in the checkpoint process,
we can remap the physical pages to the restored tasks after
upgrading the kernel. This pin memory method can
restore the task data within one second.
The pin memory area info is saved in the reserved memblock,
which stays usable across the kernel update process.
The pin memory driver provides the following ioctl commands for criu
(a usage sketch follows the list):
1) SET_PIN_MEM_AREA:
Set a pin memory area, which can be remapped to the restored task.
2) CLEAR_PIN_MEM_AREA:
Clear the pin memory area info,
which lets the user reset the pinned data.
3) REMAP_PIN_MEM_AREA:
Remap the pages of the pinned memory to the restored task.
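A hedged usage sketch (not part of this patch) of how a criu-like tool
might drive /dev/pinmem; the structs and ioctl numbers mirror the driver
below, while the pid and address range are made-up examples:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

#define MAX_PIN_MEM_AREA_NUM 16
struct _pin_mem_area {
	unsigned long virt_start;
	unsigned long virt_end;
};
struct pin_mem_area_set {
	unsigned int pid;
	unsigned int area_num;
	struct _pin_mem_area mem_area[MAX_PIN_MEM_AREA_NUM];
};
#define PIN_MEM_MAGIC 0x59
#define SET_PIN_MEM_AREA	_IOW(PIN_MEM_MAGIC, 1, struct pin_mem_area_set)
#define REMAP_PIN_MEM_AREA	_IOW(PIN_MEM_MAGIC, 3, int)
#define FINISH_PIN_MEM_DUMP	_IOW(PIN_MEM_MAGIC, 4, int)

int main(void)
{
	int pid = 1234;			/* example task to checkpoint */
	struct pin_mem_area_set pmas = {
		.pid = 1234,
		.area_num = 1,
		.mem_area = { { 0x400000, 0x600000 } },	/* example anon vma */
	};
	int fd = open("/dev/pinmem", O_RDWR);

	if (fd < 0) {
		perror("open /dev/pinmem");
		return 1;
	}
	/* checkpoint: pin the area, then seal the dump info */
	if (ioctl(fd, SET_PIN_MEM_AREA, &pmas))
		perror("SET_PIN_MEM_AREA");
	if (ioctl(fd, FINISH_PIN_MEM_DUMP, &pid))
		perror("FINISH_PIN_MEM_DUMP");
	/* after the kernel update: remap the pinned pages into the new task */
	if (ioctl(fd, REMAP_PIN_MEM_AREA, &pid))
		perror("REMAP_PIN_MEM_AREA");
	close(fd);
	return 0;
}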
Signed-off-by: Jingxian He <hejingxian(a)huawei.com>
Reviewed-by: Wenliang He <hewenliang4(a)huawei.com>
Reviewed-by: Jing Xiangfeng <jingxiangfeng(a)huawei.com>
---
arch/arm64/configs/openeuler_defconfig | 2 +
arch/arm64/kernel/setup.c | 9 +
arch/arm64/mm/init.c | 60 +++
drivers/char/Kconfig | 6 +
drivers/char/Makefile | 1 +
drivers/char/pin_memory.c | 208 ++++++++
include/linux/crash_core.h | 5 +
include/linux/pin_mem.h | 78 +++
kernel/crash_core.c | 11 +
mm/Kconfig | 8 +
mm/Makefile | 1 +
mm/huge_memory.c | 61 +++
mm/memory.c | 59 ++
mm/pin_mem.c | 950 +++++++++++++++++++++++++++++++++
14 files changed, 1459 insertions(+)
create mode 100644 drivers/char/pin_memory.c
create mode 100644 include/linux/pin_mem.h
create mode 100644 mm/pin_mem.c
diff --git a/arch/arm64/configs/openeuler_defconfig b/arch/arm64/configs/openeuler_defconfig
index c5271e7..76fda68 100644
--- a/arch/arm64/configs/openeuler_defconfig
+++ b/arch/arm64/configs/openeuler_defconfig
@@ -1036,6 +1036,7 @@ CONFIG_FRAME_VECTOR=y
# CONFIG_GUP_BENCHMARK is not set
# CONFIG_READ_ONLY_THP_FOR_FS is not set
CONFIG_ARCH_HAS_PTE_SPECIAL=y
+CONFIG_PIN_MEMORY=y
# end of Memory Management options
CONFIG_NET=y
@@ -3282,6 +3283,7 @@ CONFIG_TCG_TIS_ST33ZP24_SPI=y
# CONFIG_RANDOM_TRUST_CPU is not set
# CONFIG_RANDOM_TRUST_BOOTLOADER is not set
+CONFIG_PIN_MEMORY_DEV=m
#
# I2C support
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index c1f1fb9..5e282d3 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -50,6 +50,9 @@
#include <asm/efi.h>
#include <asm/xen/hypervisor.h>
#include <asm/mmu_context.h>
+#ifdef CONFIG_PIN_MEMORY
+#include <linux/pin_mem.h>
+#endif
static int num_standard_resources;
static struct resource *standard_resources;
@@ -260,6 +263,12 @@ static void __init request_standard_resources(void)
quick_kexec_res.end <= res->end)
request_resource(res, &quick_kexec_res);
#endif
+#ifdef CONFIG_PIN_MEMORY
+ if (pin_memory_resource.end &&
+ pin_memory_resource.start >= res->start &&
+ pin_memory_resource.end <= res->end)
+ request_resource(res, &pin_memory_resource);
+#endif
}
}
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index f3e5a66..8ab5aac 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -42,6 +42,9 @@
#include <linux/sizes.h>
#include <asm/tlb.h>
#include <asm/alternative.h>
+#ifdef CONFIG_PIN_MEMORY
+#include <linux/pin_mem.h>
+#endif
#define ARM64_ZONE_DMA_BITS 30
@@ -78,6 +81,55 @@ static void __init reserve_crashkernel(void)
*/
#define MAX_USABLE_RANGES 2
+#ifdef CONFIG_PIN_MEMORY
+struct resource pin_memory_resource = {
+ .name = "Pin memory",
+ .start = 0,
+ .end = 0,
+ .flags = IORESOURCE_MEM,
+ .desc = IORES_DESC_RESERVED
+};
+
+static void __init reserve_pin_memory_res(void)
+{
+ unsigned long long mem_start, mem_len;
+ int ret;
+
+ ret = parse_pin_memory(boot_command_line, memblock_phys_mem_size(),
+ &mem_len, &mem_start);
+ if (ret || !mem_len)
+ return;
+
+ mem_len = PAGE_ALIGN(mem_len);
+
+ if (!memblock_is_region_memory(mem_start, mem_len)) {
+ pr_warn("cannot reserve for pin memory: region is not memory!\n");
+ return;
+ }
+
+ if (memblock_is_region_reserved(mem_start, mem_len)) {
+ pr_warn("cannot reserve for pin memory: region overlaps reserved memory!\n");
+ return;
+ }
+
+ if (!IS_ALIGNED(mem_start, SZ_2M)) {
+ pr_warn("cannot reserve for pin memory: base address is not 2MB aligned\n");
+ return;
+ }
+
+ memblock_reserve(mem_start, mem_len);
+ pr_debug("pin memory resource reserved: 0x%016llx - 0x%016llx (%lld MB)\n",
+ mem_start, mem_start + mem_len, mem_len >> 20);
+
+ pin_memory_resource.start = mem_start;
+ pin_memory_resource.end = mem_start + mem_len - 1;
+}
+#else
+static void __init reserve_pin_memory_res(void)
+{
+}
+#endif /* CONFIG_PIN_MEMORY */
+
#ifdef CONFIG_CRASH_DUMP
static int __init early_init_dt_scan_elfcorehdr(unsigned long node,
const char *uname, int depth, void *data)
@@ -455,6 +507,8 @@ void __init arm64_memblock_init(void)
reserve_park_mem();
#endif
+ reserve_pin_memory_res();
+
reserve_elfcorehdr();
high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
@@ -583,6 +637,12 @@ void __init mem_init(void)
/* this will put all unused low memory onto the freelists */
memblock_free_all();
+#ifdef CONFIG_PIN_MEMORY
+ /* pre alloc the pages for pin memory */
+ init_reserve_page_map((unsigned long)pin_memory_resource.start,
+ (unsigned long)(pin_memory_resource.end - pin_memory_resource.start + 1));
+#endif
+
mem_init_print_info(NULL);
/*
diff --git a/drivers/char/Kconfig b/drivers/char/Kconfig
index d229a2d..fbb94b8 100644
--- a/drivers/char/Kconfig
+++ b/drivers/char/Kconfig
@@ -496,3 +496,9 @@ config RANDOM_TRUST_BOOTLOADER
booloader is trustworthy so it will be added to the kernel's entropy
pool. Otherwise, say N here so it will be regarded as device input that
only mixes the entropy pool.
+
+config PIN_MEMORY_DEV
+ tristate "/dev/pinmem character device"
+ default m
+ help
+ Character device driver used to pin task memory for checkpoint and restore.
diff --git a/drivers/char/Makefile b/drivers/char/Makefile
index ffce287..71d76fd 100644
--- a/drivers/char/Makefile
+++ b/drivers/char/Makefile
@@ -47,3 +47,4 @@ obj-$(CONFIG_PS3_FLASH) += ps3flash.o
obj-$(CONFIG_XILLYBUS) += xillybus/
obj-$(CONFIG_POWERNV_OP_PANEL) += powernv-op-panel.o
obj-$(CONFIG_ADI) += adi.o
+obj-$(CONFIG_PIN_MEMORY_DEV) += pin_memory.o
diff --git a/drivers/char/pin_memory.c b/drivers/char/pin_memory.c
new file mode 100644
index 0000000..f46e056
--- /dev/null
+++ b/drivers/char/pin_memory.c
@@ -0,0 +1,208 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
+ * Description: Euler pin memory driver
+ */
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/kprobes.h>
+#include <linux/spinlock.h>
+#include <linux/workqueue.h>
+#include <linux/sched.h>
+#include <linux/mm.h>
+#include <linux/init.h>
+#include <linux/miscdevice.h>
+#include <linux/fs.h>
+#include <linux/mm_types.h>
+#include <linux/processor.h>
+#include <uapi/asm-generic/ioctl.h>
+#include <uapi/asm-generic/mman-common.h>
+#include <uapi/asm/setup.h>
+#include <linux/pin_mem.h>
+#include <linux/sched/mm.h>
+
+#define MAX_PIN_MEM_AREA_NUM 16
+struct _pin_mem_area {
+ unsigned long virt_start;
+ unsigned long virt_end;
+};
+
+struct pin_mem_area_set {
+ unsigned int pid;
+ unsigned int area_num;
+ struct _pin_mem_area mem_area[MAX_PIN_MEM_AREA_NUM];
+};
+
+#define PIN_MEM_MAGIC 0x59
+#define _SET_PIN_MEM_AREA 1
+#define _CLEAR_PIN_MEM_AREA 2
+#define _REMAP_PIN_MEM_AREA 3
+#define _FINISH_PIN_MEM_DUMP 4
+#define _PIN_MEM_IOC_MAX_NR 4
+#define SET_PIN_MEM_AREA _IOW(PIN_MEM_MAGIC, _SET_PIN_MEM_AREA, struct pin_mem_area_set)
+#define CLEAR_PIN_MEM_AREA _IOW(PIN_MEM_MAGIC, _CLEAR_PIN_MEM_AREA, int)
+#define REMAP_PIN_MEM_AREA _IOW(PIN_MEM_MAGIC, _REMAP_PIN_MEM_AREA, int)
+#define FINISH_PIN_MEM_DUMP _IOW(PIN_MEM_MAGIC, _FINISH_PIN_MEM_DUMP, int)
+static int set_pin_mem(struct pin_mem_area_set *pmas)
+{
+ int i;
+ int ret = 0;
+ struct _pin_mem_area *pma;
+ struct mm_struct *mm;
+ struct task_struct *task;
+ struct pid *pid_s;
+
+ pid_s = find_get_pid(pmas->pid);
+ if (!pid_s) {
+ pr_warn("Get pid struct fail:%d.\n", pmas->pid);
+ return -EFAULT;
+ }
+ rcu_read_lock();
+ task = pid_task(pid_s, PIDTYPE_PID);
+ if (!task) {
+ pr_warn("Get task struct fail:%d.\n", pmas->pid);
+ goto fail;
+ }
+ mm = get_task_mm(task);
+ if (!mm) {
+ pr_warn("Task %d has no mm.\n", pmas->pid);
+ goto fail;
+ }
+ for (i = 0; i < pmas->area_num; i++) {
+ pma = &(pmas->mem_area[i]);
+ ret = pin_mem_area(task, mm, pma->virt_start, pma->virt_end);
+ if (ret) {
+ mmput(mm);
+ goto fail;
+ }
+ }
+ mmput(mm);
+ rcu_read_unlock();
+ put_pid(pid_s);
+ return ret;
+
+fail:
+ rcu_read_unlock();
+ put_pid(pid_s);
+ return -EFAULT;
+}
+
+static int set_pin_mem_area(unsigned long arg)
+{
+ struct pin_mem_area_set pmas;
+ void __user *buf = (void __user *)arg;
+
+ if (!access_ok(buf, sizeof(pmas)))
+ return -EFAULT;
+ if (copy_from_user(&pmas, buf, sizeof(pmas)))
+ return -EFAULT;
+ if (pmas.area_num > MAX_PIN_MEM_AREA_NUM) {
+ pr_warn("Input area_num is too large.\n");
+ return -EINVAL;
+ }
+
+ return set_pin_mem(&pmas);
+}
+
+static int pin_mem_remap(unsigned long arg)
+{
+ int pid;
+ struct task_struct *task;
+ struct mm_struct *mm;
+ vm_fault_t ret;
+ void __user *buf = (void __user *)arg;
+ struct pid *pid_s;
+
+ if (!access_ok(buf, sizeof(int)))
+ return -EFAULT;
+ if (copy_from_user(&pid, buf, sizeof(int)))
+ return -EFAULT;
+
+ pid_s = find_get_pid(pid);
+ if (!pid_s) {
+ pr_warn("Get pid struct fail:%d.\n", pid);
+ return -EINVAL;
+ }
+ rcu_read_lock();
+ task = pid_task(pid_s, PIDTYPE_PID);
+ if (!task) {
+ pr_warn("Get task struct fail:%d.\n", pid);
+ goto fault;
+ }
+ mm = get_task_mm(task);
+ if (!mm) {
+ pr_warn("Task %d has no mm.\n", pid);
+ goto fault;
+ }
+ ret = do_mem_remap(pid, mm);
+ if (ret) {
+ pr_warn("Handle pin memory remap fail.\n");
+ mmput(mm);
+ goto fault;
+ }
+ mmput(mm);
+ rcu_read_unlock();
+ put_pid(pid_s);
+ return 0;
+
+fault:
+ rcu_read_unlock();
+ put_pid(pid_s);
+ return -EFAULT;
+}
+
+static long pin_memory_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+{
+ long ret = 0;
+
+ if (_IOC_TYPE(cmd) != PIN_MEM_MAGIC)
+ return -EINVAL;
+ if (_IOC_NR(cmd) > _PIN_MEM_IOC_MAX_NR)
+ return -EINVAL;
+
+ switch (cmd) {
+ case SET_PIN_MEM_AREA:
+ ret = set_pin_mem_area(arg);
+ break;
+ case CLEAR_PIN_MEM_AREA:
+ clear_pin_memory_record();
+ break;
+ case REMAP_PIN_MEM_AREA:
+ ret = pin_mem_remap(arg);
+ break;
+ case FINISH_PIN_MEM_DUMP:
+ ret = finish_pin_mem_dump();
+ break;
+ default:
+ return -EINVAL;
+ }
+ return ret;
+}
+
+static const struct file_operations pin_memory_fops = {
+ .owner = THIS_MODULE,
+ .unlocked_ioctl = pin_memory_ioctl,
+ .compat_ioctl = pin_memory_ioctl,
+};
+
+static struct miscdevice pin_memory_miscdev = {
+ .minor = MISC_DYNAMIC_MINOR,
+ .name = "pinmem",
+ .fops = &pin_memory_fops,
+};
+
+static int pin_memory_init(void)
+{
+ int err = misc_register(&pin_memory_miscdev);
+
+ if (!err)
+ pr_info("pin_memory init\n");
+ else
+ pr_warn("pin_memory init failed!\n");
+ return err;
+}
+
+static void pin_memory_exit(void)
+{
+ misc_deregister(&pin_memory_miscdev);
+ pr_info("pin_memory ko exists!\n");
+}
+
+module_init(pin_memory_init);
+module_exit(pin_memory_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Euler");
+MODULE_DESCRIPTION("pin memory");
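For reference, a minimal user-space sketch of driving /dev/pinmem with the ioctls defined above. This is a sketch only: the pid and address range are hypothetical, and error handling is elided.

/* Minimal user-space sketch for /dev/pinmem; pid and range are illustrative. */
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>

#define MAX_PIN_MEM_AREA_NUM 16
struct _pin_mem_area {
	unsigned long virt_start;
	unsigned long virt_end;
};
struct pin_mem_area_set {
	unsigned int pid;
	unsigned int area_num;
	struct _pin_mem_area mem_area[MAX_PIN_MEM_AREA_NUM];
};
#define PIN_MEM_MAGIC 0x59
#define SET_PIN_MEM_AREA _IOW(PIN_MEM_MAGIC, 1, struct pin_mem_area_set)
#define REMAP_PIN_MEM_AREA _IOW(PIN_MEM_MAGIC, 3, int)
#define FINISH_PIN_MEM_DUMP _IOW(PIN_MEM_MAGIC, 4, int)

int main(void)
{
	struct pin_mem_area_set pmas = {
		.pid = 1234,	/* hypothetical checkpointed task */
		.area_num = 1,
		.mem_area = { { 0x400000, 0x600000 } },	/* hypothetical anon range */
	};
	int pid = 1234;
	int fd = open("/dev/pinmem", O_RDWR);

	if (fd < 0)
		return 1;
	ioctl(fd, SET_PIN_MEM_AREA, &pmas);	/* checkpoint: pin the pages */
	ioctl(fd, FINISH_PIN_MEM_DUMP, 0);	/* seal dump info (magic + sha256) */
	/* ... after kexec, on the restore side: */
	ioctl(fd, REMAP_PIN_MEM_AREA, &pid);	/* remap pinned pages into pid */
	close(fd);
	return 0;
}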
diff --git a/include/linux/crash_core.h b/include/linux/crash_core.h
index fc0ef33..30f0df3 100644
--- a/include/linux/crash_core.h
+++ b/include/linux/crash_core.h
@@ -87,4 +87,9 @@ int parse_crashkernel_high(char *cmdline, unsigned long long system_ram,
int parse_crashkernel_low(char *cmdline, unsigned long long system_ram,
unsigned long long *crash_size, unsigned long long *crash_base);
+#ifdef CONFIG_PIN_MEMORY
+int __init parse_pin_memory(char *cmdline, unsigned long long system_ram,
+ unsigned long long *pin_size, unsigned long long *pin_base);
+#endif
+
#endif /* LINUX_CRASH_CORE_H */
diff --git a/include/linux/pin_mem.h b/include/linux/pin_mem.h
new file mode 100644
index 0000000..bc8b03e
--- /dev/null
+++ b/include/linux/pin_mem.h
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020. Huawei Technologies Co., Ltd. All rights reserved.
+ * Provide the pin memory method for checkpoint and restore tasks.
+ */
+#ifndef _LINUX_PIN_MEMORY_H
+#define _LINUX_PIN_MEMORY_H
+
+#ifdef CONFIG_PIN_MEMORY
+#include <linux/errno.h>
+#include <linux/mm_types.h>
+#include <linux/err.h>
+#ifdef CONFIG_ARM64
+#include <linux/ioport.h>
+#endif
+
+#define PAGE_BUDDY_MAPCOUNT_VALUE (~PG_buddy)
+
+#define COLLECT_PAGES_FINISH 0
+#define COLLECT_PAGES_NEED_CONTINUE 1
+#define COLLECT_PAGES_FAIL -1
+
+#define COMPOUND_PAD_MASK 0xffffffff
+#define COMPOUND_PAD_START 0x88
+#define COMPOUND_PAD_DELTA 0x40
+#define LIST_POISON4 0xdead000000000400
+#define PAGE_FLAGS_CHECK_RESERVED (1UL << PG_reserved)
+#define SHA256_DIGEST_SIZE 32
+#define next_pme(pme) ((unsigned long *)((pme) + 1) + (pme)->nr_pages)
+#define PIN_MEM_DUMP_MAGIC 0xfeab000000001acd
+struct page_map_entry {
+ unsigned long virt_addr;
+ unsigned int nr_pages;
+ unsigned int is_huge_page;
+ unsigned long redirect_start;
+ unsigned long phy_addr_array[0];
+};
+
+struct page_map_info {
+ int pid;
+ int pid_reserved;
+ unsigned int entry_num;
+ int disable_free_page;
+ struct page_map_entry *pme;
+};
+
+struct pin_mem_dump_info {
+ char sha_digest[SHA256_DIGEST_SIZE];
+ unsigned long magic;
+ unsigned int pin_pid_num;
+ struct page_map_info pmi_array[0];
+};
+
+struct redirect_info {
+ unsigned int redirect_pages;
+ unsigned int redirect_index[0];
+};
+
+extern struct page_map_info *get_page_map_info(int pid);
+extern struct page_map_info *create_page_map_info(int pid);
+extern vm_fault_t do_mem_remap(int pid, struct mm_struct *mm);
+extern vm_fault_t do_anon_page_remap(struct vm_area_struct *vma, unsigned long address,
+ pmd_t *pmd, struct page *page);
+extern void clear_pin_memory_record(void);
+extern int pin_mem_area(struct task_struct *task, struct mm_struct *mm,
+ unsigned long start_addr, unsigned long end_addr);
+extern vm_fault_t do_anon_huge_page_remap(struct vm_area_struct *vma, unsigned long address,
+ pmd_t *pmd, struct page *page);
+extern int finish_pin_mem_dump(void);
+
+/* Reserved space for pin memory. */
+#ifdef CONFIG_ARM64
+extern struct resource pin_memory_resource;
+#endif
+extern void init_reserve_page_map(unsigned long map_addr, unsigned long map_size);
+
+#endif /* CONFIG_PIN_MEMORY */
+#endif /* _LINUX_PIN_MEMORY_H */
diff --git a/kernel/crash_core.c b/kernel/crash_core.c
index bfed474..2407de3 100644
--- a/kernel/crash_core.c
+++ b/kernel/crash_core.c
@@ -450,6 +450,17 @@ void __init reserve_crashkernel(void)
}
#endif /* CONFIG_ARCH_WANT_RESERVE_CRASH_KERNEL */
+#ifdef CONFIG_PIN_MEMORY
+int __init parse_pin_memory(char *cmdline,
+ unsigned long long system_ram,
+ unsigned long long *pin_size,
+ unsigned long long *pin_base)
+{
+ return __parse_crashkernel(cmdline, system_ram, pin_size, pin_base,
+ "pinmemory=", NULL);
+}
+#endif
+
Elf_Word *append_elf_note(Elf_Word *buf, char *name, unsigned int type,
void *data, size_t data_len)
{
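Usage note: because parse_pin_memory() reuses __parse_crashkernel() with the "pinmemory=" prefix, the reserved region is specified on the kernel command line in crashkernel syntax, size@base. An illustrative value (the address is hypothetical; the base must be 2MB aligned, as checked in reserve_pin_memory_res()):

    pinmemory=100M@0x90000000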
diff --git a/mm/Kconfig b/mm/Kconfig
index 390165f..930dc13 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -859,4 +859,12 @@ config ARCH_HAS_HUGEPD
config MAPPING_DIRTY_HELPERS
bool
+config PIN_MEMORY
+ bool "Support for pin memory"
+ depends on CHECKPOINT_RESTORE
+ help
+ Say y here to enable the pin memory feature for checkpoint
+ and restore. The memory data of tasks can be pinned and the
+ corresponding physical page mapping info collected at
+ checkpoint, and the physical pages remapped into the restored
+ tasks at restore.
endmenu
diff --git a/mm/Makefile b/mm/Makefile
index d73aed0..4963827 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -120,3 +120,4 @@ obj-$(CONFIG_MEMFD_CREATE) += memfd.o
obj-$(CONFIG_MAPPING_DIRTY_HELPERS) += mapping_dirty_helpers.o
obj-$(CONFIG_PTDUMP_CORE) += ptdump.o
obj-$(CONFIG_PAGE_REPORTING) += page_reporting.o
+obj-$(CONFIG_PIN_MEMORY) += pin_mem.o
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 0bc4a2c..8a11d30 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2996,3 +2996,64 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
update_mmu_cache_pmd(vma, address, pvmw->pmd);
}
#endif
+
+#ifdef CONFIG_PIN_MEMORY
+vm_fault_t do_anon_huge_page_remap(struct vm_area_struct *vma, unsigned long address,
+ pmd_t *pmd, struct page *page)
+{
+ gfp_t gfp;
+ pgtable_t pgtable;
+ spinlock_t *ptl;
+ pmd_t entry;
+ vm_fault_t ret = 0;
+
+ if (unlikely(anon_vma_prepare(vma)))
+ return VM_FAULT_OOM;
+ if (unlikely(khugepaged_enter(vma, vma->vm_flags)))
+ return VM_FAULT_OOM;
+ gfp = alloc_hugepage_direct_gfpmask(vma);
+ prep_transhuge_page(page);
+ if (mem_cgroup_charge(page, vma->vm_mm, gfp)) {
+ put_page(page);
+ count_vm_event(THP_FAULT_FALLBACK);
+ count_vm_event(THP_FAULT_FALLBACK_CHARGE);
+ return VM_FAULT_FALLBACK;
+ }
+ cgroup_throttle_swaprate(page, gfp);
+
+ pgtable = pte_alloc_one(vma->vm_mm);
+ if (unlikely(!pgtable)) {
+ ret = VM_FAULT_OOM;
+ goto release;
+ }
+ __SetPageUptodate(page);
+ ptl = pmd_lock(vma->vm_mm, pmd);
+ if (unlikely(!pmd_none(*pmd))) {
+ goto unlock_release;
+ } else {
+ ret = check_stable_address_space(vma->vm_mm);
+ if (ret)
+ goto unlock_release;
+ entry = mk_huge_pmd(page, vma->vm_page_prot);
+ entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
+ page_add_new_anon_rmap(page, vma, address, true);
+ lru_cache_add_inactive_or_unevictable(page, vma);
+ pgtable_trans_huge_deposit(vma->vm_mm, pmd, pgtable);
+ set_pmd_at(vma->vm_mm, address, pmd, entry);
+ add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
+ mm_inc_nr_ptes(vma->vm_mm);
+ spin_unlock(ptl);
+ count_vm_event(THP_FAULT_ALLOC);
+ count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC);
+ }
+
+ return 0;
+unlock_release:
+ spin_unlock(ptl);
+release:
+ if (pgtable)
+ pte_free(vma->vm_mm, pgtable);
+ put_page(page);
+ return ret;
+}
+#endif
diff --git a/mm/memory.c b/mm/memory.c
index 50632c4..7b7f1a7 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5248,3 +5248,62 @@ void ptlock_free(struct page *page)
kmem_cache_free(page_ptl_cachep, page->ptl);
}
#endif
+
+#ifdef CONFIG_PIN_MEMORY
+vm_fault_t do_anon_page_remap(struct vm_area_struct *vma, unsigned long address,
+ pmd_t *pmd, struct page *page)
+{
+ pte_t entry;
+ spinlock_t *ptl;
+ pte_t *pte;
+ vm_fault_t ret = 0;
+
+ if (pte_alloc(vma->vm_mm, pmd))
+ return VM_FAULT_OOM;
+
+ /* See the comment in pte_alloc_one_map() */
+ if (unlikely(pmd_trans_unstable(pmd)))
+ return 0;
+
+ /* Allocate our own private page. */
+ if (unlikely(anon_vma_prepare(vma)))
+ goto oom;
+
+ if (mem_cgroup_charge(page, vma->vm_mm, GFP_KERNEL))
+ goto oom_free_page;
+ cgroup_throttle_swaprate(page, GFP_KERNEL);
+
+ __SetPageUptodate(page);
+
+ entry = mk_pte(page, vma->vm_page_prot);
+ if (vma->vm_flags & VM_WRITE)
+ entry = pte_mkwrite(pte_mkdirty(entry));
+ pte = pte_offset_map_lock(vma->vm_mm, pmd, address,
+ &ptl);
+ if (!pte_none(*pte)) {
+ ret = VM_FAULT_FALLBACK;
+ goto release;
+ }
+
+ ret = check_stable_address_space(vma->vm_mm);
+ if (ret)
+ goto release;
+ inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
+ page_add_new_anon_rmap(page, vma, address, false);
+ lru_cache_add_inactive_or_unevictable(page, vma);
+
+ set_pte_at(vma->vm_mm, address, pte, entry);
+ /* No need to invalidate - it was non-present before */
+ update_mmu_cache(vma, address, pte);
+unlock:
+ pte_unmap_unlock(pte, ptl);
+ return ret;
+release:
+ put_page(page);
+ goto unlock;
+oom_free_page:
+ put_page(page);
+oom:
+ return VM_FAULT_OOM;
+}
+#endif
diff --git a/mm/pin_mem.c b/mm/pin_mem.c
new file mode 100644
index 0000000..0a143b6
--- /dev/null
+++ b/mm/pin_mem.c
@@ -0,0 +1,950 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020. Huawei Technologies Co., Ltd. All rights reserved.
+ * Provide the pin memory method for checkpoint and restore tasks.
+ */
+#ifdef CONFIG_PIN_MEMORY
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/time.h>
+#include <linux/sched/cputime.h>
+#include <linux/tick.h>
+#include <linux/mm.h>
+#include <linux/pin_mem.h>
+#include <linux/idr.h>
+#include <linux/page-isolation.h>
+#include <linux/sched/mm.h>
+#include <linux/ctype.h>
+#include <linux/highmem.h>
+#include <crypto/sha.h>
+
+#define MAX_PIN_PID_NUM 128
+static DEFINE_SPINLOCK(page_map_entry_lock);
+
+struct pin_mem_dump_info *pin_mem_dump_start;
+unsigned int pin_pid_num;
+static unsigned int *pin_pid_num_addr;
+static unsigned long __page_map_entry_start;
+static unsigned long page_map_entry_end;
+static struct page_map_info *user_space_reserve_start;
+static struct page_map_entry *page_map_entry_start;
+unsigned int max_pin_pid_num __read_mostly;
+unsigned long redirect_space_size;
+unsigned long redirect_space_start;
+#define DEFAULT_REDIRECT_SPACE_SIZE 0x100000
+
+static int __init setup_max_pin_pid_num(char *str)
+{
+ int ret = 0;
+
+ if (!str)
+ goto out;
+
+ ret = kstrtouint(str, 10, &max_pin_pid_num);
+out:
+ if (ret) {
+ pr_warn("Unable to parse max pin pid num.\n");
+ } else {
+ if (max_pin_pid_num > MAX_PIN_PID_NUM) {
+ max_pin_pid_num = 0;
+ pr_warn("Input max_pin_pid_num is too large.\n");
+ }
+ }
+ return ret;
+}
+early_param("max_pin_pid_num", setup_max_pin_pid_num);
+
+static int __init setup_redirect_space_size(char *str)
+{
+ if (!str)
+ goto out;
+
+ redirect_space_size = memparse(str, NULL);
+out:
+ if (!redirect_space_size) {
+ pr_warn("Unable to parse redirect space size, use the default value.\n");
+ redirect_space_size = DEFAULT_REDIRECT_SPACE_SIZE;
+ }
+ return 0;
+}
+early_param("redirect_space_size", setup_redirect_space_size);
+
+struct page_map_info *create_page_map_info(int pid)
+{
+ struct page_map_info *new;
+
+ if (!user_space_reserve_start)
+ return NULL;
+
+ if (pin_pid_num >= max_pin_pid_num) {
+ pr_warn("Pin pid num too large than max_pin_pid_num, fail create: %d!", pid);
+ return NULL;
+ }
+ new = (struct page_map_info *)(user_space_reserve_start + pin_pid_num);
+ new->pid = pid;
+ new->pme = NULL;
+ new->entry_num = 0;
+ new->pid_reserved = false;
+ new->disable_free_page = false;
+ (*pin_pid_num_addr)++;
+ pin_pid_num++;
+ return new;
+}
+EXPORT_SYMBOL_GPL(create_page_map_info);
+
+struct page_map_info *get_page_map_info(int pid)
+{
+ int i;
+
+ if (!user_space_reserve_start)
+ return NULL;
+
+ for (i = 0; i < pin_pid_num; i++) {
+ if (user_space_reserve_start[i].pid == pid)
+ return &(user_space_reserve_start[i]);
+ }
+ return NULL;
+}
+EXPORT_SYMBOL_GPL(get_page_map_info);
+
+static struct page *find_head_page(struct page *page)
+{
+ struct page *p = page;
+
+ while (!PageBuddy(p)) {
+ if (PageLRU(p))
+ return NULL;
+ p--;
+ }
+ return p;
+}
+
+static void split_page_area_left(struct zone *zone, struct free_area *area, struct page *page,
+ unsigned long size, int order)
+{
+ unsigned long cur_size = 1 << order;
+ unsigned long total_size = 0;
+
+ while (size && cur_size > size) {
+ cur_size >>= 1;
+ order--;
+ area--;
+ if (cur_size <= size) {
+ list_add(&page[total_size].lru, &area->free_list[MIGRATE_MOVABLE]);
+ atomic_set(&(page[total_size]._mapcount), PAGE_BUDDY_MAPCOUNT_VALUE);
+ set_page_private(&page[total_size], order);
+ set_pageblock_migratetype(&page[total_size], MIGRATE_MOVABLE);
+ area->nr_free++;
+ total_size += cur_size;
+ size -= cur_size;
+ }
+ }
+}
+
+static void split_page_area_right(struct zone *zone, struct free_area *area, struct page *page,
+ unsigned long size, int order)
+{
+ unsigned long cur_size = 1 << order;
+ struct page *right_page, *head_page;
+
+ right_page = page + size;
+ while (size && cur_size > size) {
+ cur_size >>= 1;
+ order--;
+ area--;
+ if (cur_size <= size) {
+ head_page = right_page - cur_size;
+ list_add(&head_page->lru, &area->free_list[MIGRATE_MOVABLE]);
+ atomic_set(&(head_page->_mapcount), PAGE_BUDDY_MAPCOUNT_VALUE);
+ set_page_private(head_page, order);
+ set_pageblock_migratetype(head_page, MIGRATE_MOVABLE);
+ area->nr_free++;
+ size -= cur_size;
+ right_page = head_page;
+ }
+ }
+}
+
+void reserve_page_from_buddy(unsigned long nr_pages, struct page *page)
+{
+ unsigned int current_order;
+ struct page *page_end;
+ struct free_area *area;
+ struct zone *zone;
+ struct page *head_page;
+
+ head_page = find_head_page(page);
+ if (!head_page) {
+ pr_warn("Find page head fail.");
+ return;
+ }
+ current_order = head_page->private;
+ page_end = head_page + (1 << current_order);
+ zone = page_zone(head_page);
+ area = &(zone->free_area[current_order]);
+ list_del(&head_page->lru);
+ atomic_set(&head_page->_mapcount, -1);
+ set_page_private(head_page, 0);
+ area->nr_free--;
+ if (head_page != page)
+ split_page_area_left(zone, area, head_page,
+ (unsigned long)(page - head_page), current_order);
+ page = page + nr_pages;
+ if (page < page_end) {
+ split_page_area_right(zone, area, page,
+ (unsigned long)(page_end - page), current_order);
+ } else if (page > page_end) {
+ pr_warn("Find page end smaller than page.");
+ }
+}
+
+static inline void reserve_user_normal_pages(struct page *page)
+{
+ atomic_inc(&page->_refcount);
+ reserve_page_from_buddy(1, page);
+}
+
+static void init_huge_pmd_pages(struct page *head_page)
+{
+ int i = 0;
+ struct page *page = head_page;
+
+ __set_bit(PG_head, &page->flags);
+ __set_bit(PG_active, &page->flags);
+ atomic_set(&page->_refcount, 1);
+ page++;
+ i++;
+ page->compound_head = (unsigned long)head_page + 1;
+ page->compound_dtor = HUGETLB_PAGE_DTOR + 1;
+ page->compound_order = HPAGE_PMD_ORDER;
+ page++;
+ i++;
+ page->compound_head = (unsigned long)head_page + 1;
+ i++;
+ INIT_LIST_HEAD(&(page->deferred_list));
+ for (; i < HPAGE_PMD_NR; i++) {
+ page = head_page + i;
+ page->compound_head = (unsigned long)head_page + 1;
+ }
+}
+
+static inline void reserve_user_huge_pmd_pages(struct page *page)
+{
+ atomic_inc(&page->_refcount);
+ reserve_page_from_buddy((1 << HPAGE_PMD_ORDER), page);
+ init_huge_pmd_pages(page);
+}
+
+int reserve_user_map_pages_fail;
+
+void free_user_map_pages(unsigned int pid_index, unsigned int entry_index, unsigned int page_index)
+{
+ unsigned int i, j, index, order;
+ struct page_map_info *pmi;
+ struct page_map_entry *pme;
+ struct page *page;
+ unsigned long phy_addr;
+
+ for (index = 0; index < pid_index; index++) {
+ pmi = &(user_space_reserve_start[index]);
+ pme = pmi->pme;
+ for (i = 0; i < pmi->entry_num; i++) {
+ for (j = 0; j < pme->nr_pages; j++) {
+ order = pme->is_huge_page ? HPAGE_PMD_ORDER : 0;
+ phy_addr = pme->phy_addr_array[j];
+ if (phy_addr) {
+ page = phys_to_page(phy_addr);
+ if (!(page->flags & PAGE_FLAGS_CHECK_RESERVED)) {
+ __free_pages(page, order);
+ pme->phy_addr_array[j] = 0;
+ }
+ }
+ }
+ pme = (struct page_map_entry *)next_pme(pme);
+ }
+ }
+ pmi = &(user_space_reserve_start[index]);
+ pme = pmi->pme;
+ for (i = 0; i < entry_index; i++) {
+ for (j = 0; j < pme->nr_pages; j++) {
+ order = pme->is_huge_page ? HPAGE_PMD_ORDER : 0;
+ phy_addr = pme->phy_addr_array[j];
+ if (phy_addr) {
+ page = phys_to_page(phy_addr);
+ if (!(page->flags & PAGE_FLAGS_CHECK_RESERVED)) {
+ __free_pages(page, order);
+ pme->phy_addr_array[j] = 0;
+ }
+ }
+ }
+ pme = (struct page_map_entry *)next_pme(pme);
+ }
+ for (j = 0; j < page_index; j++) {
+ order = pme->is_huge_page ? HPAGE_PMD_ORDER : 0;
+ phy_addr = pme->phy_addr_array[j];
+ if (phy_addr) {
+ page = phys_to_page(phy_addr);
+ if (!(page->flags & PAGE_FLAGS_CHECK_RESERVED)) {
+ __free_pages(page, order);
+ pme->phy_addr_array[j] = 0;
+ }
+ }
+ }
+}
+
+bool check_redirect_end_valid(struct redirect_info *redirect_start,
+ unsigned long max_redirect_page_num)
+{
+ unsigned long redirect_end;
+
+ redirect_end = ((unsigned long)(redirect_start + 1) +
+ max_redirect_page_num * sizeof(unsigned int));
+ if (redirect_end > redirect_space_start + redirect_space_size)
+ return false;
+ return true;
+}
+
+static void reserve_user_space_map_pages(void)
+{
+ struct page_map_info *pmi;
+ struct page_map_entry *pme;
+ unsigned int i, j, index;
+ struct page *page;
+ unsigned long flags;
+ unsigned long phy_addr;
+ unsigned long redirect_pages = 0;
+ struct redirect_info *redirect_start = (struct redirect_info *)redirect_space_start;
+
+ if (!user_space_reserve_start || !redirect_start)
+ return;
+ spin_lock_irqsave(&page_map_entry_lock, flags);
+ for (index = 0; index < pin_pid_num; index++) {
+ pmi = &(user_space_reserve_start[index]);
+ pme = pmi->pme;
+ for (i = 0; i < pmi->entry_num; i++) {
+ redirect_pages = 0;
+ if (!check_redirect_end_valid(redirect_start, pme->nr_pages))
+ redirect_start = NULL;
+ for (j = 0; j < pme->nr_pages; j++) {
+ phy_addr = pme->phy_addr_array[j];
+ if (!phy_addr)
+ continue;
+ page = phys_to_page(phy_addr);
+ if (atomic_read(&page->_refcount)) {
+ if ((page->flags & PAGE_FLAGS_CHECK_RESERVED)
+ && !pme->redirect_start)
+ pme->redirect_start =
+ (unsigned long)redirect_start;
+ if (redirect_start &&
+ (page->flags & PAGE_FLAGS_CHECK_RESERVED)) {
+ redirect_start->redirect_index[redirect_pages] = j;
+ redirect_pages++;
+ continue;
+ } else {
+ reserve_user_map_pages_fail = 1;
+ pr_warn("Page %pK refcount %d large than zero, no need reserve.\n",
+ page, atomic_read(&page->_refcount));
+ goto free_pages;
+ }
+ }
+ if (!pme->is_huge_page)
+ reserve_user_normal_pages(page);
+ else
+ reserve_user_huge_pmd_pages(page);
+ }
+ pme = (struct page_map_entry *)next_pme(pme);
+ if (redirect_pages && redirect_start) {
+ redirect_start->redirect_pages = redirect_pages;
+ redirect_start = (struct redirect_info *)(
+ (unsigned long)(redirect_start + 1) +
+ redirect_start->redirect_pages * sizeof(unsigned int));
+ }
+ }
+ }
+ spin_unlock_irqrestore(&page_map_entry_lock, flags);
+ return;
+free_pages:
+ free_user_map_pages(index, i, j);
+ spin_unlock_irqrestore(&page_map_entry_lock, flags);
+}
+
+
+int calculate_pin_mem_digest(struct pin_mem_dump_info *pmdi, char *digest)
+{
+ int i;
+ struct sha256_state sctx;
+
+ if (!digest)
+ digest = pmdi->sha_digest;
+ sha256_init(&sctx);
+ sha256_update(&sctx, (unsigned char *)(&(pmdi->magic)),
+ sizeof(struct pin_mem_dump_info) - SHA256_DIGEST_SIZE);
+ for (i = 0; i < pmdi->pin_pid_num; i++) {
+ sha256_update(&sctx, (unsigned char *)(&(pmdi->pmi_array[i])),
+ sizeof(struct page_map_info));
+ }
+ sha256_final(&sctx, digest);
+ return 0;
+}
+
+static int check_sha_digest(struct pin_mem_dump_info *pmdi)
+{
+ int ret = 0;
+ char digest[SHA256_DIGEST_SIZE] = {0};
+
+ ret = calculate_pin_mem_digest(pmdi, digest);
+ if (ret) {
+ pr_warn("calculate pin mem digest fail:%d\n", ret);
+ return ret;
+ }
+ if (memcmp(pmdi->sha_digest, digest, SHA256_DIGEST_SIZE)) {
+ pr_warn("pin mem dump info sha256 digest match error!\n");
+ return -EFAULT;
+ }
+ return ret;
+}
+
+/*
+ * The whole page map entry collection process must be sequential:
+ * user_space_reserve_start points to the first page map info of
+ * the first dumped task, and page_map_entry_start points to the
+ * first page map entry of the first dumped vma.
+ */
+static void init_page_map_info(struct pin_mem_dump_info *pmdi, unsigned long map_len)
+{
+ if (pin_mem_dump_start || !max_pin_pid_num) {
+ pr_warn("pin page map already init or max_pin_pid_num not set.\n");
+ return;
+ }
+ if (map_len < sizeof(struct pin_mem_dump_info) +
+ max_pin_pid_num * sizeof(struct page_map_info) + redirect_space_size) {
+ pr_warn("pin memory reserved memblock too small.\n");
+ return;
+ }
+ if ((pmdi->magic != PIN_MEM_DUMP_MAGIC) || (pmdi->pin_pid_num > max_pin_pid_num) ||
+ check_sha_digest(pmdi))
+ memset(pmdi, 0, sizeof(struct pin_mem_dump_info));
+ pin_mem_dump_start = pmdi;
+ pin_pid_num = pmdi->pin_pid_num;
+ pr_info("pin_pid_num: %d\n", pin_pid_num);
+ pin_pid_num_addr = &(pmdi->pin_pid_num);
+ user_space_reserve_start =
+ (struct page_map_info *)pmdi->pmi_array;
+ page_map_entry_start =
+ (struct page_map_entry *)(user_space_reserve_start + max_pin_pid_num);
+ page_map_entry_end = (unsigned long)pmdi + map_len - redirect_space_size;
+ redirect_space_start = page_map_entry_end;
+ if (pin_pid_num > 0)
+ reserve_user_space_map_pages();
+}
+
+int finish_pin_mem_dump(void)
+{
+ int ret;
+
+ pin_mem_dump_start->magic = PIN_MEM_DUMP_MAGIC;
+ memset(pin_mem_dump_start->sha_digest, 0, SHA256_DIGEST_SIZE);
+ ret = calculate_pin_mem_digest(pin_mem_dump_start, NULL);
+ if (ret) {
+ pr_warn("calculate pin mem digest fail:%d\n", ret);
+ return ret;
+ }
+ return ret;
+}
+
+int collect_pmd_huge_pages(struct task_struct *task,
+ unsigned long start_addr, unsigned long end_addr, struct page_map_entry *pme)
+{
+ long res;
+ int index = 0;
+ unsigned long start = start_addr;
+ struct page *temp_page;
+
+ while (start < end_addr) {
+ temp_page = NULL;
+ res = get_user_pages_remote(task->mm, start, 1,
+ FOLL_TOUCH | FOLL_GET, &temp_page, NULL, NULL);
+ if (res <= 0) {
+ pr_warn("Failed to get huge page for addr(%lx).\n", start);
+ return COLLECT_PAGES_FAIL;
+ }
+ if (PageHead(temp_page)) {
+ start += HPAGE_PMD_SIZE;
+ pme->phy_addr_array[index] = page_to_phys(temp_page);
+ index++;
+ } else {
+ pme->nr_pages = index;
+ atomic_dec(&((temp_page)->_refcount));
+ return COLLECT_PAGES_NEED_CONTINUE;
+ }
+ }
+ pme->nr_pages = index;
+ return COLLECT_PAGES_FINISH;
+}
+
+int collect_normal_pages(struct task_struct *task,
+ unsigned long start_addr, unsigned long end_addr, struct page_map_entry *pme)
+{
+ int res;
+ unsigned long next;
+ unsigned long i, nr_pages;
+ struct page *tmp_page;
+ unsigned long *phy_addr_array = pme->phy_addr_array;
+ struct page **page_array = (struct page **)pme->phy_addr_array;
+
+ next = (start_addr & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE;
+ next = (next > end_addr) ? end_addr : next;
+ pme->nr_pages = 0;
+ while (start_addr < next) {
+ nr_pages = (PAGE_ALIGN(next) - start_addr) / PAGE_SIZE;
+ res = get_user_pages_remote(task->mm, start_addr, 1,
+ FOLL_TOUCH | FOLL_GET, &tmp_page, NULL, NULL);
+ if (res <= 0) {
+ pr_warn("Failed to get user page of %lx.\n", start_addr);
+ return COLLECT_PAGES_FAIL;
+ }
+ if (PageHead(tmp_page)) {
+ atomic_dec(&(tmp_page->_refcount));
+ return COLLECT_PAGES_NEED_CONTINUE;
+ }
+ atomic_dec(&(tmp_page->_refcount));
+ if (PageTail(tmp_page)) {
+ start_addr = next;
+ pme->virt_addr = start_addr;
+ next = (next + HPAGE_PMD_SIZE) > end_addr ?
+ end_addr : (next + HPAGE_PMD_SIZE);
+ continue;
+ }
+ res = get_user_pages_remote(task->mm, start_addr, nr_pages,
+ FOLL_TOUCH | FOLL_GET, page_array, NULL, NULL);
+ if (res <= 0) {
+ pr_warn("Failed to get user pages of %lx.\n", start_addr);
+ return COLLECT_PAGES_FAIL;
+ }
+ for (i = 0; i < nr_pages; i++)
+ phy_addr_array[i] = page_to_phys(page_array[i]);
+ pme->nr_pages += nr_pages;
+ page_array += nr_pages;
+ phy_addr_array += nr_pages;
+ start_addr = next;
+ next = (next + HPAGE_PMD_SIZE) > end_addr ? end_addr : (next + HPAGE_PMD_SIZE);
+ }
+ return COLLECT_PAGES_FINISH;
+}
+
+/* Users make sure that the pin memory belongs to anonymous vma. */
+int pin_mem_area(struct task_struct *task, struct mm_struct *mm,
+ unsigned long start_addr, unsigned long end_addr)
+{
+ int pid, ret;
+ int is_huge_page = false;
+ unsigned int page_size;
+ unsigned long nr_pages, flags;
+ struct page_map_entry *pme;
+ struct page_map_info *pmi;
+ struct vm_area_struct *vma;
+ unsigned long i;
+ struct page *tmp_page;
+
+ if (!page_map_entry_start || !task || !mm ||
+ start_addr >= end_addr)
+ return -EFAULT;
+
+ pid = task->pid;
+ spin_lock_irqsave(&page_map_entry_lock, flags);
+ nr_pages = ((end_addr - start_addr) / PAGE_SIZE);
+ if ((unsigned long)page_map_entry_start + nr_pages * sizeof(struct page *) >=
+ page_map_entry_end) {
+ pr_warn("Page map entry use up!\n");
+ ret = -EFAULT;
+ goto finish;
+ }
+ vma = find_extend_vma(mm, start_addr);
+ if (!vma) {
+ pr_warn("Find no match vma!\n");
+ ret = -EFAULT;
+ goto finish;
+ }
+ if (start_addr == (start_addr & HPAGE_PMD_MASK) &&
+ transparent_hugepage_enabled(vma)) {
+ page_size = HPAGE_PMD_SIZE;
+ is_huge_page = true;
+ } else {
+ page_size = PAGE_SIZE;
+ }
+ pme = page_map_entry_start;
+ pme->virt_addr = start_addr;
+ pme->redirect_start = 0;
+ pme->is_huge_page = is_huge_page;
+ memset(pme->phy_addr_array, 0, nr_pages * sizeof(unsigned long));
+ down_write(&mm->mmap_lock);
+ if (!is_huge_page) {
+ ret = collect_normal_pages(task, start_addr, end_addr, pme);
+ if (ret != COLLECT_PAGES_FAIL && !pme->nr_pages) {
+ if (ret == COLLECT_PAGES_FINISH) {
+ ret = 0;
+ up_write(&mm->mmap_lock);
+ goto finish;
+ }
+ pme->is_huge_page = true;
+ page_size = HPAGE_PMD_SIZE;
+ ret = collect_pmd_huge_pages(task, pme->virt_addr, end_addr, pme);
+ }
+ } else {
+ ret = collect_pmd_huge_pages(task, start_addr, end_addr, pme);
+ if (ret != COLLECT_PAGES_FAIL && !pme->nr_pages) {
+ if (ret == COLLECT_PAGES_FINISH) {
+ ret = 0;
+ up_write(&mm->mmap_lock);
+ goto finish;
+ }
+ pme->is_huge_page = false;
+ page_size = PAGE_SIZE;
+ ret = collect_normal_pages(task, pme->virt_addr, end_addr, pme);
+ }
+ }
+ up_write(&mm->mmap_lock);
+ if (ret == COLLECT_PAGES_FAIL) {
+ ret = -EFAULT;
+ goto finish;
+ }
+
+ /* check for zero pages */
+ for (i = 0; i < pme->nr_pages; i++) {
+ tmp_page = phys_to_page(pme->phy_addr_array[i]);
+ if (!pme->is_huge_page) {
+ if (page_to_pfn(tmp_page) == my_zero_pfn(pme->virt_addr + i * PAGE_SIZE))
+ pme->phy_addr_array[i] = 0;
+ } else if (is_huge_zero_page(tmp_page))
+ pme->phy_addr_array[i] = 0;
+ }
+
+ page_map_entry_start = (struct page_map_entry *)(next_pme(pme));
+ pmi = get_page_map_info(pid);
+ if (!pmi)
+ pmi = create_page_map_info(pid);
+ if (!pmi) {
+ pr_warn("Create page map info fail for pid: %d!\n", pid);
+ ret = -EFAULT;
+ goto finish;
+ }
+ if (!pmi->pme)
+ pmi->pme = pme;
+ pmi->entry_num++;
+ spin_unlock_irqrestore(&page_map_entry_lock, flags);
+ if (ret == COLLECT_PAGES_NEED_CONTINUE)
+ ret = pin_mem_area(task, mm, pme->virt_addr + pme->nr_pages * page_size, end_addr);
+ return ret;
+finish:
+ spin_unlock_irqrestore(&page_map_entry_lock, flags);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(pin_mem_area);
+
+vm_fault_t remap_normal_pages(struct mm_struct *mm, struct vm_area_struct *vma,
+ struct page_map_entry *pme)
+{
+ int ret;
+ unsigned int j, i;
+ pgd_t *pgd;
+ p4d_t *p4d;
+ pmd_t *pmd;
+ pud_t *pud;
+ struct page *page, *new;
+ unsigned long address;
+ unsigned long phy_addr;
+ unsigned int redirect_pages = 0;
+ struct redirect_info *redirect_start;
+
+ redirect_start = (struct redirect_info *)pme->redirect_start;
+ for (j = 0; j < pme->nr_pages; j++) {
+ address = pme->virt_addr + j * PAGE_SIZE;
+ phy_addr = pme->phy_addr_array[j];
+ if (!phy_addr)
+ continue;
+ page = phys_to_page(phy_addr);
+ if (page_to_pfn(page) == my_zero_pfn(address)) {
+ pme->phy_addr_array[j] = 0;
+ continue;
+ }
+ pme->phy_addr_array[j] = 0;
+ if (redirect_start && (redirect_pages < redirect_start->redirect_pages) &&
+ (j == redirect_start->redirect_index[redirect_pages])) {
+ new = alloc_zeroed_user_highpage_movable(vma, address);
+ if (!new) {
+ pr_warn("Redirect alloc page fail\n");
+ continue;
+ }
+ copy_page(page_to_virt(new), phys_to_virt(phy_addr));
+ page = new;
+ redirect_pages++;
+ }
+ page->mapping = NULL;
+ pgd = pgd_offset(mm, address);
+ p4d = p4d_alloc(mm, pgd, address);
+ if (!p4d) {
+ ret = VM_FAULT_OOM;
+ goto free;
+ }
+ pud = pud_alloc(mm, p4d, address);
+ if (!pud) {
+ ret = VM_FAULT_OOM;
+ goto free;
+ }
+ pmd = pmd_alloc(mm, pud, address);
+ if (!pmd) {
+ ret = VM_FAULT_OOM;
+ goto free;
+ }
+ ret = do_anon_page_remap(vma, address, pmd, page);
+ if (ret)
+ goto free;
+ }
+ return 0;
+free:
+ for (i = j; i < pme->nr_pages; i++) {
+ phy_addr = pme->phy_addr_array[i];
+ if (phy_addr) {
+ __free_page(phys_to_page(phy_addr));
+ pme->phy_addr_array[i] = 0;
+ }
+ }
+ return ret;
+}
+
+static inline gfp_t get_hugepage_gfpmask(struct vm_area_struct *vma)
+{
+ const bool vma_madvised = !!(vma->vm_flags & VM_HUGEPAGE);
+
+ if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_DIRECT_FLAG, &transparent_hugepage_flags))
+ return GFP_TRANSHUGE | (vma_madvised ? 0 : __GFP_NORETRY);
+ if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_FLAG, &transparent_hugepage_flags))
+ return GFP_TRANSHUGE_LIGHT | __GFP_KSWAPD_RECLAIM;
+ if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_OR_MADV_FLAG, &transparent_hugepage_flags))
+ return GFP_TRANSHUGE_LIGHT | (vma_madvised ? __GFP_DIRECT_RECLAIM :
+ __GFP_KSWAPD_RECLAIM);
+ if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_REQ_MADV_FLAG, &transparent_hugepage_flags))
+ return GFP_TRANSHUGE_LIGHT | (vma_madvised ? __GFP_DIRECT_RECLAIM :
+ 0);
+ return GFP_TRANSHUGE_LIGHT;
+}
+
+vm_fault_t remap_huge_pmd_pages(struct mm_struct *mm, struct vm_area_struct *vma,
+ struct page_map_entry *pme)
+{
+ int ret;
+ unsigned int j, i;
+ pgd_t *pgd;
+ p4d_t *p4d;
+ pmd_t *pmd;
+ pud_t *pud;
+ gfp_t gfp;
+ struct page *page, *new;
+ unsigned long address;
+ unsigned long phy_addr;
+ unsigned int redirect_pages = 0;
+ struct redirect_info *redirect_start;
+
+ redirect_start = (struct redirect_info *)pme->redirect_start;
+ for (j = 0; j < pme->nr_pages; j++) {
+ address = pme->virt_addr + j * HPAGE_PMD_SIZE;
+ phy_addr = pme->phy_addr_array[j];
+ if (!phy_addr)
+ continue;
+ page = phys_to_page(phy_addr);
+ if (is_huge_zero_page(page)) {
+ pme->phy_addr_array[j] = 0;
+ continue;
+ }
+ pme->phy_addr_array[j] = 0;
+ if (redirect_start && (redirect_pages < redirect_start->redirect_pages) &&
+ (j == redirect_start->redirect_index[redirect_pages])) {
+ gfp = get_hugepage_gfpmask(vma);
+ new = alloc_hugepage_vma(gfp, vma, address, HPAGE_PMD_ORDER);
+ if (!new) {
+ pr_warn("Redirect alloc huge page fail\n");
+ continue;
+ }
+ memcpy(page_to_virt(new), phys_to_virt(phy_addr), HPAGE_PMD_SIZE);
+ page = new;
+ redirect_pages++;
+ }
+ pgd = pgd_offset(mm, address);
+ p4d = p4d_alloc(mm, pgd, address);
+ if (!p4d) {
+ ret = VM_FAULT_OOM;
+ goto free;
+ }
+ pud = pud_alloc(mm, p4d, address);
+ if (!pud) {
+ ret = VM_FAULT_OOM;
+ goto free;
+ }
+ pmd = pmd_alloc(mm, pud, address);
+ if (!pmd) {
+ ret = VM_FAULT_OOM;
+ goto free;
+ }
+ ret = do_anon_huge_page_remap(vma, address, pmd, page);
+ if (ret)
+ goto free;
+ }
+ return 0;
+free:
+ for (i = j; i < pme->nr_pages; i++) {
+ phy_addr = pme->phy_addr_array[i];
+ if (phy_addr) {
+ page = phys_to_page(phy_addr);
+ if (!(page->flags & PAGE_FLAGS_CHECK_RESERVED)) {
+ __free_pages(page, HPAGE_PMD_ORDER);
+ pme->phy_addr_array[i] = 0;
+ }
+ }
+ }
+ return ret;
+}
+
+static void free_unmap_pages(struct page_map_info *pmi,
+ struct page_map_entry *pme,
+ unsigned int index)
+{
+ unsigned int i, j;
+ unsigned long phy_addr;
+ unsigned int order;
+ struct page *page;
+
+ pme = (struct page_map_entry *)(next_pme(pme));
+ for (i = index; i < pmi->entry_num; i++) {
+ for (j = 0; j < pme->nr_pages; j++) {
+ phy_addr = pme->phy_addr_array[j];
+ if (phy_addr) {
+ page = phys_to_page(phy_addr);
+ order = pme->is_huge_page ? HPAGE_PMD_ORDER : 0;
+ if (!(page->flags & PAGE_FLAGS_CHECK_RESERVED)) {
+ __free_pages(page, order);
+ pme->phy_addr_array[j] = 0;
+ }
+ }
+ }
+ pme = (struct page_map_entry *)(next_pme(pme));
+ }
+}
+
+vm_fault_t do_mem_remap(int pid, struct mm_struct *mm)
+{
+ unsigned int i = 0;
+ vm_fault_t ret = 0;
+ struct vm_area_struct *vma;
+ struct page_map_info *pmi;
+ struct page_map_entry *pme;
+ unsigned long flags;
+
+ if (reserve_user_map_pages_fail)
+ return -EFAULT;
+ pmi = get_page_map_info(pid);
+ if (!pmi)
+ return -EFAULT;
+
+ spin_lock_irqsave(&page_map_entry_lock, flags);
+ pmi->disable_free_page = true;
+ spin_unlock_irqrestore(&page_map_entry_lock, flags);
+ down_write(&mm->mmap_lock);
+ pme = pmi->pme;
+ vma = mm->mmap;
+ while ((i < pmi->entry_num) && (vma != NULL)) {
+ if (pme->virt_addr >= vma->vm_start && pme->virt_addr < vma->vm_end) {
+ i++;
+ if (!vma_is_anonymous(vma)) {
+ pme = (struct page_map_entry *)(next_pme(pme));
+ continue;
+ }
+ if (!pme->is_huge_page) {
+ ret = remap_normal_pages(mm, vma, pme);
+ if (ret)
+ goto free;
+ } else {
+ ret = remap_huge_pmd_pages(mm, vma, pme);
+ if (ret)
+ goto free;
+ }
+ pme = (struct page_map_entry *)(next_pme(pme));
+ } else {
+ vma = vma->vm_next;
+ }
+ }
+ up_write(&mm->mmap_lock);
+ return 0;
+free:
+ free_unmap_pages(pmi, pme, i);
+ up_write(&mm->mmap_lock);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(do_mem_remap);
+
+#if defined(CONFIG_ARM64)
+void init_reserve_page_map(unsigned long map_addr, unsigned long map_size)
+{
+ void *addr;
+
+ if (!map_addr || !map_size)
+ return;
+ addr = phys_to_virt(map_addr);
+ init_page_map_info((struct pin_mem_dump_info *)addr, map_size);
+}
+#else
+void init_reserve_page_map(unsigned long map_addr, unsigned long map_size)
+{
+}
+#endif
+
+static void free_all_reserved_pages(void)
+{
+ unsigned int i, j, index, order;
+ struct page_map_info *pmi;
+ struct page_map_entry *pme;
+ struct page *page;
+ unsigned long phy_addr;
+
+ if (!user_space_reserve_start || reserve_user_map_pages_fail)
+ return;
+
+ for (index = 0; index < pin_pid_num; index++) {
+ pmi = &(user_space_reserve_start[index]);
+ if (pmi->disable_free_page)
+ continue;
+ pme = pmi->pme;
+ for (i = 0; i < pmi->entry_num; i++) {
+ for (j = 0; j < pme->nr_pages; j++) {
+ order = pme->is_huge_page ? HPAGE_PMD_ORDER : 0;
+ phy_addr = pme->phy_addr_array[j];
+ if (phy_addr) {
+ page = phys_to_page(phy_addr);
+ if (!(page->flags & PAGE_FLAGS_CHECK_RESERVED)) {
+ __free_pages(page, order);
+ pme->phy_addr_array[j] = 0;
+ }
+ }
+ }
+ pme = (struct page_map_entry *)next_pme(pme);
+ }
+ }
+}
+
+/* Clear all pin memory record. */
+void clear_pin_memory_record(void)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&page_map_entry_lock, flags);
+ free_all_reserved_pages();
+ if (pin_pid_num_addr) {
+ *pin_pid_num_addr = 0;
+ pin_pid_num = 0;
+ page_map_entry_start = (struct page_map_entry *)__page_map_entry_start;
+ }
+ spin_unlock_irqrestore(&page_map_entry_lock, flags);
+}
+EXPORT_SYMBOL_GPL(clear_pin_memory_record);
+
+#endif /* CONFIG_PIN_MEMORY */
--
2.9.5
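A sketch of the reserved pinmemory region layout implied by init_page_map_info() above (names taken from the code; sizes depend on max_pin_pid_num and redirect_space_size):

+--------------------------------+ <- pin_mem_dump_start (sha_digest, magic, pin_pid_num)
+ page_map_info[max_pin_pid_num] + <- user_space_reserve_start
+--------------------------------+ <- page_map_entry_start
+ page_map_entry records, each   +
+ followed by its phy_addr_array +
+--------------------------------+ <- page_map_entry_end = redirect_space_start
+ redirect space                 +    (redirect_space_size bytes)
+--------------------------------+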
1
1

[PATCH OLK-5.10 v1] arm64: Declare var of local_cpu_stop only on PARK
by sangyan@huawei.com 01 Mar '21
01 Mar '21
From: Sang Yan <sangyan(a)huawei.com>
hulk inclusion
category: feature
bugzilla: 48159
CVE: N/A
Fix a compile warning about the unused variables 'ops' and 'cpu'
when CONFIG_ARM64_CPU_PARK=n by moving their declarations under
CONFIG_ARM64_CPU_PARK in local_cpu_stop().
Signed-off-by: Sang Yan <sangyan(a)huawei.com>
---
arch/arm64/kernel/smp.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 644bbd7..d7b750a 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -1024,8 +1024,10 @@ void arch_irq_work_raise(void)
static void local_cpu_stop(void)
{
+#ifdef CONFIG_ARM64_CPU_PARK
int cpu;
const struct cpu_operations *ops = NULL;
+#endif
set_cpu_online(smp_processor_id(), false);
--
2.9.5
1
0

26 Feb '21
From: Sang Yan <sangyan(a)huawei.com>
hulk inclusion
category: feature
bugzilla: 48159
CVE: N/A
Introduce a CPU PARK feature to save the time spent bringing
cpus down and back up during kexec: taking one cpu down may
cost 250ms and bringing it up another 30ms.
As a result, for 128 cores, it costs more than 30 seconds
to cycle the cpus during kexec. Think about 256 cores and more.
CPU PARK is a state in which a powered-on cpu stays in a spin
loop, polling for exit chances, such as a write to its exit
address.
A block of memory is reserved and filled with the cpu park text
section, the exit address and the park-magic-flag of each cpu;
in this implementation, one page is reserved per cpu core.
Cpus go to the park state instead of down in machine_shutdown(),
and come out of the park state in smp_init() instead of being
brought up.
Layout of one cpu park section in the pre-reserved memory block:
+--------------+
+ exit address +
+--------------+
+ park magic +
+--------------+
+ park codes +
+ . +
+ . +
+ . +
+--------------+
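In the code, this layout corresponds to struct cpu_park_section, one
PARK_SECTION_SIZE slot per cpu, as added to arch/arm64/kernel/smp.c
by this patch:

struct cpu_park_section {
	unsigned long exit;	/* exit address of the park loop */
	unsigned long magic;	/* magic representing the park state */
	char text[0];		/* park code copied from do_cpu_park() */
};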
Signed-off-by: Sang Yan <sangyan(a)huawei.com>
Reviewed-by: Jing Xiangfeng <jingxiangfeng(a)huawei.com>
---
arch/arm64/Kconfig | 12 ++
arch/arm64/include/asm/kexec.h | 6 +
arch/arm64/include/asm/smp.h | 15 +++
arch/arm64/kernel/Makefile | 1 +
arch/arm64/kernel/cpu-park.S | 59 ++++++++++
arch/arm64/kernel/machine_kexec.c | 2 +-
arch/arm64/kernel/process.c | 4 +
arch/arm64/kernel/smp.c | 230 ++++++++++++++++++++++++++++++++++++++
arch/arm64/mm/init.c | 55 +++++++++
9 files changed, 383 insertions(+), 1 deletion(-)
create mode 100644 arch/arm64/kernel/cpu-park.S
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index b9c5654..0885668 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -345,6 +345,18 @@ config KASAN_SHADOW_OFFSET
default 0xeffffff900000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
default 0xffffffffffffffff
+config ARM64_CPU_PARK
+ bool "Support CPU PARK on kexec"
+ depends on SMP
+ depends on KEXEC_CORE
+ help
+ This enables support for the CPU PARK feature, which
+ saves the time of bringing cpus down and back up across
+ kexec. Instead of dying before the jump to the new
+ kernel, parked cpus spin in a loop through kexec and
+ jump out of the loop to the new kernel entry in
+ smp_init().
+
source "arch/arm64/Kconfig.platforms"
menu "Kernel Features"
diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index 79909ae..a133889 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -36,6 +36,11 @@
#define CRASH_ADDR_HIGH_MAX MEMBLOCK_ALLOC_ACCESSIBLE
+#ifdef CONFIG_ARM64_CPU_PARK
+/* CPU park state flag: "park" */
+#define PARK_MAGIC 0x7061726b
+#endif
+
#ifndef __ASSEMBLY__
/**
@@ -104,6 +109,7 @@ static inline void crash_post_resume(void) {}
#ifdef CONFIG_KEXEC_CORE
extern void __init reserve_crashkernel(void);
#endif
+void machine_kexec_mask_interrupts(void);
#ifdef CONFIG_KEXEC_FILE
#define ARCH_HAS_KIMAGE_ARCH
diff --git a/arch/arm64/include/asm/smp.h b/arch/arm64/include/asm/smp.h
index 2e7f529..8c5d2d6 100644
--- a/arch/arm64/include/asm/smp.h
+++ b/arch/arm64/include/asm/smp.h
@@ -145,6 +145,21 @@ bool cpus_are_stuck_in_kernel(void);
extern void crash_smp_send_stop(void);
extern bool smp_crash_stop_failed(void);
+#ifdef CONFIG_ARM64_CPU_PARK
+#define PARK_SECTION_SIZE 1024
+struct cpu_park_info {
+ /* Physical address of reserved park memory. */
+ unsigned long start;
+ /* park reserve mem len should be PARK_SECTION_SIZE * NR_CPUS */
+ unsigned long len;
+ /* Virtual address of reserved park memory. */
+ unsigned long start_v;
+};
+extern struct cpu_park_info park_info;
+extern void enter_cpu_park(unsigned long text, unsigned long exit);
+extern void do_cpu_park(unsigned long exit);
+extern int kexec_smp_send_park(void);
+#endif
#endif /* ifndef __ASSEMBLY__ */
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 2621d5c..60478d2 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -54,6 +54,7 @@ obj-$(CONFIG_RANDOMIZE_BASE) += kaslr.o
obj-$(CONFIG_HIBERNATION) += hibernate.o hibernate-asm.o
obj-$(CONFIG_KEXEC_CORE) += machine_kexec.o relocate_kernel.o \
cpu-reset.o
+obj-$(CONFIG_ARM64_CPU_PARK) += cpu-park.o
obj-$(CONFIG_KEXEC_FILE) += machine_kexec_file.o kexec_image.o
obj-$(CONFIG_ARM64_RELOC_TEST) += arm64-reloc-test.o
arm64-reloc-test-y := reloc_test_core.o reloc_test_syms.o
diff --git a/arch/arm64/kernel/cpu-park.S b/arch/arm64/kernel/cpu-park.S
new file mode 100644
index 0000000..10c685c
--- /dev/null
+++ b/arch/arm64/kernel/cpu-park.S
@@ -0,0 +1,59 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * CPU park routines
+ *
+ * Copyright (C) 2020 Huawei Technologies., Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+#include <asm/kexec.h>
+#include <asm/sysreg.h>
+#include <asm/virt.h>
+
+.text
+.pushsection .idmap.text, "awx"
+
+/* cpu park helper in idmap section */
+SYM_CODE_START(enter_cpu_park)
+ /* Clear sctlr_el1 flags. */
+ mrs x12, sctlr_el1
+ mov_q x13, SCTLR_ELx_FLAGS
+ bic x12, x12, x13
+ pre_disable_mmu_workaround
+ msr sctlr_el1, x12 /* disable mmu */
+ isb
+
+ mov x18, x0
+ mov x0, x1 /* secondary_entry addr */
+ br x18 /* call do_cpu_park of each cpu */
+SYM_CODE_END(enter_cpu_park)
+
+.popsection
+
+SYM_CODE_START(do_cpu_park)
+ ldr x18, =PARK_MAGIC /* magic number "park" */
+ add x1, x0, #8
+ str x18, [x1] /* set on-park flag */
+ dc civac, x1 /* flush cache of "park" */
+ dsb nsh
+ isb
+
+.Lloop:
+ wfe
+ isb
+ ldr x19, [x0]
+ cmp x19, #0 /* test secondary_entry */
+ b.eq .Lloop
+
+ ic iallu /* invalidate the local I-cache */
+ dsb nsh
+ isb
+
+ br x19 /* jump to secondary_entry */
+SYM_CODE_END(do_cpu_park)
+
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index a0b144c..f47ce96 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -213,7 +213,7 @@ void machine_kexec(struct kimage *kimage)
BUG(); /* Should never get here. */
}
-static void machine_kexec_mask_interrupts(void)
+void machine_kexec_mask_interrupts(void)
{
unsigned int i;
struct irq_desc *desc;
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 73e3b32..10cffee 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -146,6 +146,10 @@ void arch_cpu_idle_dead(void)
*/
void machine_shutdown(void)
{
+#ifdef CONFIG_ARM64_CPU_PARK
+ if (kexec_smp_send_park() == 0)
+ return;
+#endif
smp_shutdown_nonboot_cpus(reboot_cpu);
}
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 18e9727..dea67d0 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -32,6 +32,7 @@
#include <linux/irq_work.h>
#include <linux/kernel_stat.h>
#include <linux/kexec.h>
+
#include <linux/kvm_host.h>
#include <asm/alternative.h>
@@ -93,6 +94,167 @@ static inline int op_cpu_kill(unsigned int cpu)
}
#endif
+#ifdef CONFIG_ARM64_CPU_PARK
+struct cpu_park_section {
+ unsigned long exit; /* exit address of the park loop */
+ unsigned long magic; /* magic representing the park state */
+ char text[0]; /* text section of park */
+};
+
+static int mmap_cpu_park_mem(void)
+{
+ if (!park_info.start)
+ return -ENOMEM;
+
+ if (park_info.start_v)
+ return 0;
+
+ park_info.start_v = (unsigned long)__ioremap(park_info.start,
+ park_info.len,
+ PAGE_KERNEL_EXEC);
+ if (!park_info.start_v) {
+ pr_warn("map park memory failed.");
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static inline unsigned long cpu_park_section_v(unsigned int cpu)
+{
+ return park_info.start_v + PARK_SECTION_SIZE * (cpu - 1);
+}
+
+static inline unsigned long cpu_park_section_p(unsigned int cpu)
+{
+ return park_info.start + PARK_SECTION_SIZE * (cpu - 1);
+}
+
+/*
+ * Write the secondary_entry to exit section of park state.
+ * Then the secondary cpu will jump straight into the kernel
+ * by the secondary_entry.
+ */
+static int write_park_exit(unsigned int cpu)
+{
+ struct cpu_park_section *park_section;
+ unsigned long *park_exit;
+ unsigned long *park_text;
+
+ if (mmap_cpu_park_mem() != 0)
+ return -EPERM;
+
+ park_section = (struct cpu_park_section *)cpu_park_section_v(cpu);
+ park_exit = &park_section->exit;
+ park_text = (unsigned long *)park_section->text;
+ pr_debug("park_text 0x%lx : 0x%lx, do_cpu_park text 0x%lx : 0x%lx",
+ (unsigned long)park_text, *park_text,
+ (unsigned long)do_cpu_park,
+ *(unsigned long *)do_cpu_park);
+
+ /*
+ * Test first 8 bytes to determine
+ * whether needs to write cpu park exit.
+ */
+ if (*park_text == *(unsigned long *)do_cpu_park) {
+ writeq_relaxed(__pa_symbol(secondary_entry), park_exit);
+ __flush_dcache_area((__force void *)park_exit,
+ sizeof(unsigned long));
+ flush_icache_range((unsigned long)park_exit,
+ (unsigned long)(park_exit + 1));
+ sev();
+ dsb(sy);
+ isb();
+
+ pr_debug("Write cpu %u secondary entry 0x%lx to 0x%lx.",
+ cpu, *park_exit, (unsigned long)park_exit);
+ pr_info("Boot cpu %u from PARK state.", cpu);
+ return 0;
+ }
+
+ return -EPERM;
+}
+
+/* Install cpu park sections for the specific cpu. */
+static int install_cpu_park(unsigned int cpu)
+{
+ struct cpu_park_section *park_section;
+ unsigned long *park_exit;
+ unsigned long *park_magic;
+ unsigned long park_text_len;
+
+ park_section = (struct cpu_park_section *)cpu_park_section_v(cpu);
+ pr_debug("Install cpu park on cpu %u park exit 0x%lx park text 0x%lx",
+ cpu, (unsigned long)park_section,
+ (unsigned long)(park_section->text));
+
+ park_exit = &park_section->exit;
+ park_magic = &park_section->magic;
+ park_text_len = PARK_SECTION_SIZE - sizeof(struct cpu_park_section);
+
+ *park_exit = 0UL;
+ *park_magic = 0UL;
+ memcpy((void *)park_section->text, do_cpu_park, park_text_len);
+ __flush_dcache_area((void *)park_section, PARK_SECTION_SIZE);
+
+ return 0;
+}
+
+static int uninstall_cpu_park(unsigned int cpu)
+{
+ unsigned long park_section;
+
+ if (mmap_cpu_park_mem() != 0)
+ return -EPERM;
+
+ park_section = cpu_park_section_v(cpu);
+ memset((void *)park_section, 0, PARK_SECTION_SIZE);
+ __flush_dcache_area((void *)park_section, PARK_SECTION_SIZE);
+
+ return 0;
+}
+
+static int cpu_wait_park(unsigned int cpu)
+{
+ long timeout;
+ struct cpu_park_section *park_section;
+
+ volatile unsigned long *park_magic;
+
+ park_section = (struct cpu_park_section *)cpu_park_section_v(cpu);
+ park_magic = &park_section->magic;
+
+ timeout = USEC_PER_SEC;
+ while (*park_magic != PARK_MAGIC && timeout--)
+ udelay(1);
+
+ if (timeout > 0)
+ pr_debug("cpu %u park done.", cpu);
+ else
+ pr_err("cpu %u park failed.", cpu);
+
+ return *park_magic == PARK_MAGIC;
+}
+
+static void cpu_park(unsigned int cpu)
+{
+ unsigned long park_section_p;
+ unsigned long park_exit_phy;
+ unsigned long do_park;
+ typeof(enter_cpu_park) *park;
+
+ park_section_p = cpu_park_section_p(cpu);
+ park_exit_phy = park_section_p;
+ pr_debug("Go to park cpu %u exit address 0x%lx", cpu, park_exit_phy);
+
+ do_park = park_section_p + sizeof(struct cpu_park_section);
+ park = (void *)__pa_symbol(enter_cpu_park);
+
+ cpu_install_idmap();
+ park(do_park, park_exit_phy);
+ unreachable();
+}
+#endif
/*
* Boot a secondary CPU, and assign it the specified idle task.
@@ -102,6 +264,10 @@ static int boot_secondary(unsigned int cpu, struct task_struct *idle)
{
const struct cpu_operations *ops = get_cpu_ops(cpu);
+#ifdef CONFIG_ARM64_CPU_PARK
+ if (write_park_exit(cpu) == 0)
+ return 0;
+#endif
if (ops->cpu_boot)
return ops->cpu_boot(cpu);
@@ -131,6 +297,9 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
return ret;
}
+#ifdef CONFIG_ARM64_CPU_PARK
+ uninstall_cpu_park(cpu);
+#endif
/*
* CPU was successfully started, wait for it to come online or
* time out.
@@ -844,10 +1013,32 @@ void arch_irq_work_raise(void)
static void local_cpu_stop(void)
{
+#ifdef CONFIG_ARM64_CPU_PARK
+ int cpu;
+ const struct cpu_operations *ops = NULL;
+#endif
+
set_cpu_online(smp_processor_id(), false);
local_daif_mask();
sdei_mask_local_cpu();
+
+#ifdef CONFIG_ARM64_CPU_PARK
+ /*
+ * Go to cpu park state.
+ * Otherwise go to cpu die.
+ */
+ cpu = smp_processor_id();
+ if (kexec_in_progress && park_info.start_v) {
+ machine_kexec_mask_interrupts();
+ cpu_park(cpu);
+
+ ops = get_cpu_ops(cpu);
+ if (ops && ops->cpu_die)
+ ops->cpu_die(cpu);
+ }
+#endif
+
cpu_park_loop();
}
@@ -1053,6 +1244,45 @@ void smp_send_stop(void)
sdei_mask_local_cpu();
}
+#ifdef CONFIG_ARM64_CPU_PARK
+int kexec_smp_send_park(void)
+{
+ unsigned long cpu;
+
+ if (WARN_ON(!kexec_in_progress)) {
+ pr_crit("%s called not in kexec progress.", __func__);
+ return -EPERM;
+ }
+
+ if (mmap_cpu_park_mem() != 0) {
+ pr_info("no cpuparkmem, goto normal way.");
+ return -EPERM;
+ }
+
+ local_irq_disable();
+
+ if (num_online_cpus() > 1) {
+ cpumask_t mask;
+
+ cpumask_copy(&mask, cpu_online_mask);
+ cpumask_clear_cpu(smp_processor_id(), &mask);
+
+ for_each_cpu(cpu, &mask)
+ install_cpu_park(cpu);
+ smp_cross_call(&mask, IPI_CPU_STOP);
+
+ /* Wait for other CPUs to park */
+ for_each_cpu(cpu, &mask)
+ cpu_wait_park(cpu);
+ pr_info("smp park other cpus done\n");
+ }
+
+ sdei_mask_local_cpu();
+
+ return 0;
+}
+#endif
+
#ifdef CONFIG_KEXEC_CORE
void crash_smp_send_stop(void)
{
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 794f992..d01259c 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -236,6 +236,57 @@ static void __init fdt_enforce_memory_region(void)
memblock_add(usable_rgns[1].base, usable_rgns[1].size);
}
+#ifdef CONFIG_ARM64_CPU_PARK
+struct cpu_park_info park_info = {
+ .start = 0,
+ .len = PARK_SECTION_SIZE * NR_CPUS,
+ .start_v = 0,
+};
+
+static int __init parse_park_mem(char *p)
+{
+ if (!p)
+ return 0;
+
+ park_info.start = PAGE_ALIGN(memparse(p, NULL));
+ if (park_info.start == 0)
+ pr_warn("invalid cpuparkmem parameter [%s]\n", p);
+
+ return 0;
+}
+early_param("cpuparkmem", parse_park_mem);
+
+static int __init reserve_park_mem(void)
+{
+ if (park_info.start == 0 || park_info.len == 0)
+ return 0;
+
+ park_info.start = PAGE_ALIGN(park_info.start);
+ park_info.len = PAGE_ALIGN(park_info.len);
+
+ if (!memblock_is_region_memory(park_info.start, park_info.len)) {
+ pr_warn("cannot reserve park mem: region is not memory!");
+ goto out;
+ }
+
+ if (memblock_is_region_reserved(park_info.start, park_info.len)) {
+ pr_warn("cannot reserve park mem: region overlaps reserved memory!");
+ goto out;
+ }
+
+ memblock_remove(park_info.start, park_info.len);
+ pr_info("cpu park mem reserved: 0x%016lx - 0x%016lx (%ld MB)",
+ park_info.start, park_info.start + park_info.len,
+ park_info.len >> 20);
+
+ return 0;
+out:
+ park_info.start = 0;
+ park_info.len = 0;
+ return -EINVAL;
+}
+#endif
+
void __init arm64_memblock_init(void)
{
const s64 linear_region_size = BIT(vabits_actual - 1);
@@ -357,6 +408,10 @@ void __init arm64_memblock_init(void)
reserve_crashkernel();
+#ifdef CONFIG_ARM64_CPU_PARK
+ reserve_park_mem();
+#endif
+
reserve_elfcorehdr();
high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
--
2.9.5
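Usage note: parse_park_mem() above reads only the base address of the park region from the kernel command line; its length is fixed at PARK_SECTION_SIZE * NR_CPUS. An illustrative boot parameter (the address is hypothetical):

    cpuparkmem=0xa0000000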
1
1
MPAM bugfix @ 20210224
James Morse (10):
arm64/mpam: Add mpam driver discovery phase and kbuild boiler plate
cacheinfo: Provide a helper to find a cacheinfo leaf
arm64/mpam: Probe supported partid/pmg ranges from devices
arm64/mpam: Supplement MPAM MSC register layout definitions
arm64/mpam: Probe the features resctrl supports
arm64/mpam: Reset controls when CPUs come online
arm64/mpam: Summarize feature support during mpam_enable()
arm64/mpam: resctrl: Re-synchronise resctrl's view of online CPUs
drivers: base: cacheinfo: Add helper to search cacheinfo by of_node
arm64/mpam: Enabling registering and logging error interrupts
Wang ShaoBo (55):
arm64/mpam: Preparing for MPAM refactoring
arm64/mpam: Add helper for getting mpam sysprops
arm64/mpam: Allocate mpam component configuration arrays
arm64/mpam: Pick MPAM resources and events for resctrl_res exported
arm64/mpam: Init resctrl resources' info from resctrl_res selected
arm64/mpam: resctrl: Handle cpuhp and resctrl_dom allocation
arm64/mpam: Implement helpers for handling configuration and
monitoring
arm64/mpam: Migrate old MSCs' discovery process to new branch
arm64/mpam: Add helper for getting MSCs' configuration
arm64/mpam: Probe partid,pmg and feature capabilities' ranges from
classes
arm64/mpam: resctrl: Rebuild configuration and monitoring pipeline
arm64/mpam: resctrl: Append schemata CDP definitions
arm64/mpam: resctrl: Supplement cdpl2,cdpl3 for mount options
arm64/mpam: resctrl: Add helpers for init and destroy schemata list
arm64/mpam: resctrl: Use resctrl_group_init_alloc() to init schema
list
arm64/mpam: resctrl: Write and read schemata by schema_list
arm64/mpam: Support cdp in mpam_sched_in()
arm64/mpam: resctrl: Update resources reset process
arm64/mpam: resctrl: Update closid alloc and free process with bitmap
arm64/mpam: resctrl: Move ctrlmon sysfile write/read function to
mpam_ctrlmon.c
arm64/mpam: Support cdp on allocating monitors
arm64/mpam: resctrl: Support cdp on monitoring data
arm64/mpam: Clean up header files and rearrange declarations
arm64/mpam: resctrl: Remove ctrlmon sysfile
arm64/mpam: resctrl: Remove unnecessary CONFIG_ARM64
arm64/mpam: Implement intpartid narrowing process
arm64/mpam: Using software-defined id for rdtgroup instead of 32-bit
integer
arm64/mpam: resctrl: collect child mon group's monitor data
arm64/mpam: resctrl: Support cpus' monitoring for mon group
arm64/mpam: resctrl: Support priority and hardlimit(Memory bandwidth)
configuration
arm64/mpam: Store intpri and dspri for mpam device reset
arm64/mpam: Squash default priority from mpam device to class
arm64/mpam: Restore extend ctrls' max width for checking schemata
input
arm64/mpam: Re-plan intpartid narrowing process
arm64/mpam: Add hook-events id for ctrl features
arm64/mpam: Integrate monitor data for Memory Bandwidth if cdp enabled
arm64/mpam: Fix MPAM_ESR intPARTID_range error
arm64/mpam: Separate internal and downstream priority event
arm64/mpam: Remap reqpartid,pmg to rmid and intpartid to closid
arm64/mpam: Add wait queue for monitor alloc and free
arm64/mpam: Add resctrl_ctrl_feature structure to manage ctrl features
arm64/mpam: resctrl: Export resource's properties to info directory
arm64/mpam: Split header files into suitable location
arm64/mpam: resctrl: Add rmid file in resctrl sysfs
arm64/mpam: Filter schema control type with ctrl features
arm64/mpam: Simplify mpamid cdp mapping process
arm64/mpam: Set per-cpu's closid to none zero for cdp
ACPI/MPAM: Use acpi_map_pxm_to_node() to get node id for memory node
arm64/mpam: Supplement additional useful ctrl features for mount
options
arm64/mpam: resctrl: Add proper error handling to resctrl_mount()
arm64/mpam: resctrl: Use resctrl_group_init_alloc() for default group
arm64/mpam: resctrl: Allow setting register MPAMCFG_MBW_MIN to 0
arm64/mpam: resctrl: Refresh cpu mask for handling cpuhp
arm64/mpam: Sort domains when cpu online
arm64/mpam: Fix compile warning
arch/arm64/include/asm/mpam.h | 324 +---
arch/arm64/include/asm/mpam_resource.h | 129 --
arch/arm64/include/asm/mpam_sched.h | 8 -
arch/arm64/include/asm/resctrl.h | 514 +++++-
arch/arm64/kernel/Makefile | 2 +-
arch/arm64/kernel/mpam.c | 1499 ----------------
arch/arm64/kernel/mpam/Makefile | 3 +
arch/arm64/kernel/mpam/mpam_ctrlmon.c | 961 ++++++++++
arch/arm64/kernel/mpam/mpam_device.c | 1706 ++++++++++++++++++
arch/arm64/kernel/mpam/mpam_device.h | 140 ++
arch/arm64/kernel/mpam/mpam_internal.h | 345 ++++
arch/arm64/kernel/mpam/mpam_mon.c | 334 ++++
arch/arm64/kernel/mpam/mpam_resctrl.c | 2240 ++++++++++++++++++++++++
arch/arm64/kernel/mpam/mpam_resource.h | 228 +++
arch/arm64/kernel/mpam/mpam_setup.c | 608 +++++++
arch/arm64/kernel/mpam_ctrlmon.c | 623 -------
arch/arm64/kernel/mpam_mon.c | 124 --
drivers/acpi/arm64/mpam.c | 87 +-
drivers/base/cacheinfo.c | 38 +
fs/resctrlfs.c | 396 +++--
include/linux/arm_mpam.h | 118 ++
include/linux/cacheinfo.h | 36 +
include/linux/resctrlfs.h | 30 -
23 files changed, 7521 insertions(+), 2972 deletions(-)
delete mode 100644 arch/arm64/include/asm/mpam_resource.h
delete mode 100644 arch/arm64/kernel/mpam.c
create mode 100644 arch/arm64/kernel/mpam/Makefile
create mode 100644 arch/arm64/kernel/mpam/mpam_ctrlmon.c
create mode 100644 arch/arm64/kernel/mpam/mpam_device.c
create mode 100644 arch/arm64/kernel/mpam/mpam_device.h
create mode 100644 arch/arm64/kernel/mpam/mpam_internal.h
create mode 100644 arch/arm64/kernel/mpam/mpam_mon.c
create mode 100644 arch/arm64/kernel/mpam/mpam_resctrl.c
create mode 100644 arch/arm64/kernel/mpam/mpam_resource.h
create mode 100644 arch/arm64/kernel/mpam/mpam_setup.c
delete mode 100644 arch/arm64/kernel/mpam_ctrlmon.c
delete mode 100644 arch/arm64/kernel/mpam_mon.c
create mode 100644 include/linux/arm_mpam.h
--
2.25.1
23 Feb '21
From: Li ZhiGang <lizhigang(a)kylinos.cn>
Nationz Tech TCM chips are used for trusted computing; the chip is attached via SPI or LPC.
We did a brief verify/test of this driver on a KunPeng920 + openEuler system, with an externally compiled module.
Signed-off-by: Li ZhiGang <lizhigang(a)kylinos.cn>
---
drivers/staging/Kconfig | 2 +
drivers/staging/Makefile | 2 +
drivers/staging/gmjstcm/Kconfig | 21 +
drivers/staging/gmjstcm/Makefile | 5 +
drivers/staging/gmjstcm/tcm.c | 949 ++++++++++++++++++++++++++
drivers/staging/gmjstcm/tcm.h | 123 ++++
drivers/staging/gmjstcm/tcm_tis_spi.c | 868 +++++++++++++++++++++++
7 files changed, 1970 insertions(+)
create mode 100644 drivers/staging/gmjstcm/Kconfig
create mode 100644 drivers/staging/gmjstcm/Makefile
create mode 100644 drivers/staging/gmjstcm/tcm.c
create mode 100644 drivers/staging/gmjstcm/tcm.h
create mode 100644 drivers/staging/gmjstcm/tcm_tis_spi.c
diff --git a/drivers/staging/Kconfig b/drivers/staging/Kconfig
index 1abf76be2aa8..d51fa4f4e7ca 100644
--- a/drivers/staging/Kconfig
+++ b/drivers/staging/Kconfig
@@ -126,4 +126,6 @@ source "drivers/staging/axis-fifo/Kconfig"
source "drivers/staging/erofs/Kconfig"
+source "drivers/staging/gmjstcm/Kconfig"
+
endif # STAGING
diff --git a/drivers/staging/Makefile b/drivers/staging/Makefile
index ab0cbe8815b1..6d41915dad5b 100644
--- a/drivers/staging/Makefile
+++ b/drivers/staging/Makefile
@@ -53,3 +53,5 @@ obj-$(CONFIG_SOC_MT7621) += mt7621-dts/
obj-$(CONFIG_STAGING_GASKET_FRAMEWORK) += gasket/
obj-$(CONFIG_XIL_AXIS_FIFO) += axis-fifo/
obj-$(CONFIG_EROFS_FS) += erofs/
+obj-$(CONFIG_GMJS_TCM) += gmjstcm/
+
diff --git a/drivers/staging/gmjstcm/Kconfig b/drivers/staging/gmjstcm/Kconfig
new file mode 100644
index 000000000000..5b5397ae1832
--- /dev/null
+++ b/drivers/staging/gmjstcm/Kconfig
@@ -0,0 +1,21 @@
+menu "GMJS TCM support"
+
+config GMJS_TCM
+ bool
+
+config GMJS_TCM_CORE
+ tristate "GMJS TCM core support"
+ depends on ARM64 || MIPS
+ default m
+ select GMJS_TCM
+ help
+ GMJS TCM core support.
+
+config GMJS_TCM_SPI
+ tristate "GMJS TCM support on SPI interface"
+ depends on GMJS_TCM_CORE && SPI_MASTER
+ default m
+ help
+ GMJS TCM support on SPI interface.
+
+endmenu
diff --git a/drivers/staging/gmjstcm/Makefile b/drivers/staging/gmjstcm/Makefile
new file mode 100644
index 000000000000..601c78e44793
--- /dev/null
+++ b/drivers/staging/gmjstcm/Makefile
@@ -0,0 +1,5 @@
+
+obj-$(CONFIG_GMJS_TCM_CORE) += tcm_core.o
+tcm_core-objs := tcm.o
+obj-$(CONFIG_GMJS_TCM_SPI) += tcm_tis_spi.o
+
diff --git a/drivers/staging/gmjstcm/tcm.c b/drivers/staging/gmjstcm/tcm.c
new file mode 100644
index 000000000000..5c41bfa8b423
--- /dev/null
+++ b/drivers/staging/gmjstcm/tcm.c
@@ -0,0 +1,949 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2009 Nationz Technologies Inc.
+ *
+ * Description: Export symbols for the tcm_tis module
+ *
+ * Major Function: public register write/read functions, etc.
+ *
+ */
+
+#include <linux/sched.h>
+#include <linux/poll.h>
+#include <linux/spinlock.h>
+#include <linux/timer.h>
+#include "tcm.h"
+
+/*
+ * const var
+ */
+enum tcm_const {
+ TCM_MINOR = 224, /* officially assigned */
+ TCM_BUFSIZE = 2048, /* Buffer Size */
+ TCM_NUM_DEVICES = 256, /* Max supporting tcm device number */
+};
+
+/*
+ * CMD duration
+ */
+enum tcm_duration {
+ TCM_SHORT = 0,
+ TCM_MEDIUM = 1,
+ TCM_LONG = 2,
+ TCM_UNDEFINED,
+};
+
+/* Max Total of Command Number */
+#define TCM_MAX_ORDINAL 88 /*243*/
+
+static LIST_HEAD(tcm_chip_list);
+static DEFINE_SPINLOCK(driver_lock); /* spin lock */
+static DECLARE_BITMAP(dev_mask, TCM_NUM_DEVICES);
+
+typedef struct tagTCM_Command {
+ u8 ordinal;
+ u8 DURATION;
+} TCM_Command;
+
+static const TCM_Command TCM_Command_List[TCM_MAX_ORDINAL + 1] = {
+ {/*TCM_ORD_ActivateIdentity, */122, 1},
+ {/*TCM_ORD_CertifyKey, */50, 1},
+ {/*TCM_ORD_CertifyKeyM, */51, 1},
+ {/*TCM_ORD_ChangeAuth, */12, 1},
+ {/*TCM_ORD_ChangeAuthOwner, */16, 0},
+ {/*TCM_ORD_ContinueSelfTeSt, */83, 2},
+ {/*TCM_ORD_CreateCounter, */220, 0},
+ {/*TCM_ORD_CreateWrapKey, */31, 2},
+ {/*TCM_ORD_DiSableForceClear, */94, 0},
+ {/*TCM_ORD_DiSableOwnerClear, */92, 0},
+ {/*TCM_ORD_EStabliShTranSport, */230, 0},
+ {/*TCM_ORD_ExecuteTranSport, */231, 2},
+ {/*TCM_ORD_Extend, */20, 0},
+ {/*TCM_ORD_FieldUpgrade, */170, 2},
+ {/*TCM_ORD_FluShSpecific, */186, 0},
+ {/*TCM_ORD_ForceClear, */93, 0},
+ {/*TCM_ORD_GetAuditDigeSt, */133, 0},
+ {/*TCM_ORD_GetAuditDigeStSigned, */134, 1},
+ {/*TCM_ORD_GetCapability, */101, 0},
+ {/*TCM_ORD_GetPubKey, */33, 0},
+ {/*TCM_ORD_GetRandoM, */70, 0},
+ {/*TCM_ORD_GetTeStReSult, */84, 0},
+ {/*TCM_ORD_GetTickS, */241, 0},
+ {/*TCM_ORD_IncreMentCounter, */221, 0},
+ {/*TCM_ORD_LoadContext, */185, 1},
+ {/*TCM_ORD_MakeIdentity, */121, 2},
+ {/*TCM_ORD_NV_DefineSpace, */204, 0},
+ {/*TCM_ORD_NV_ReadValue, */207, 0},
+ {/*TCM_ORD_NV_ReadValueAuth, */208, 0},
+ {/*TCM_ORD_NV_WriteValue, */205, 0},
+ {/*TCM_ORD_NV_WriteValueAuth, */206, 0},
+ {/*TCM_ORD_OwnerClear, */91, 0},
+ {/*TCM_ORD_OwnerReadInternalPub, */129, 0},
+ {/*TCM_ORD_OwnerSetDiSable, */110, 0},
+ {/*TCM_ORD_PCR_ReSet, */200, 0},
+ {/*TCM_ORD_PcrRead, */21, 0},
+ {/*TCM_ORD_PhySicalDiSable, */112, 0},
+ {/*TCM_ORD_PhySicalEnable, */111, 0},
+ {/*TCM_ORD_PhySicalSetDeactivated, */114, 0},
+ {/*TCM_ORD_Quote, */22, 1},
+ {/*TCM_ORD_QuoteM, */62, 1},
+ {/*TCM_ORD_ReadCounter, */222, 0},
+ {/*TCM_ORD_ReadPubek, */124, 0},
+ {/*TCM_ORD_ReleaSeCounter, */223, 0},
+ {/*TCM_ORD_ReleaSeCounterOwner, */224, 0},
+ {/*TCM_ORD_ReleaSeTranSportSigned, */232, 1},
+ {/*TCM_ORD_ReSetLockValue, */64, 0},
+ {/*TCM_ORD_RevokeTruSt, */128, 0},
+ {/*TCM_ORD_SaveContext, */184, 1},
+ {/*TCM_ORD_SaveState, */152, 1},
+ {/*TCM_ORD_Seal, */23, 1},
+ {/*TCM_ORD_Sealx, */61, 1},
+ {/*TCM_ORD_SelfTeStFull, */80, 2},
+ {/*TCM_ORD_SetCapability, */63, 0},
+ {/*TCM_ORD_SetOperatorAuth, */116, 0},
+ {/*TCM_ORD_SetOrdinalAuditStatuS, */141, 0},
+ {/*TCM_ORD_SetOwnerInStall, */113, 0},
+ {/*TCM_ORD_SetTeMpDeactivated, */115, 0},
+ {/*TCM_ORD_Sign, */60, 1},
+ {/*TCM_ORD_Startup, */153, 0},
+ {/*TCM_ORD_TakeOwnerShip, */13, 1},
+ {/*TCM_ORD_TickStaMpBlob, */242, 1},
+ {/*TCM_ORD_UnSeal, */24, 1},
+ {/*TSC_ORD_PhySicalPreSence, */10, 0},
+ {/*TSC_ORD_ReSetEStabliShMentBit, */11, 0},
+ {/*TCM_ORD_WrapKey, */189, 2},
+ {/*TCM_ORD_APcreate, */191, 0},
+ {/*TCM_ORD_APTerMinate, */192, 0},
+ {/*TCM_ORD_CreateMigratedBlob, */193, 1},
+ {/*TCM_ORD_ConvertMigratedBlob, */194, 1},
+ {/*TCM_ORD_AuthorizeMigrationKey, */195, 0},
+ {/*TCM_ORD_SMS4Encrypt, */197, 1},
+ {/*TCM_ORD_SMS4Decrypt, */198, 1},
+ {/*TCM_ORD_ReadEKCert, */199, 1},
+ {/*TCM_ORD_WriteEKCert, */233, 1},
+ {/*TCM_ORD_SCHStart, */234, 0},
+ {/*TCM_ORD_SCHUpdata, */235, 0},
+ {/*TCM_ORD_SCHCoMplete, */236, 0},
+ {/*TCM_ORD_SCHCoMpleteExtend, */237, 0},
+ {/*TCM_ORD_ECCDecrypt, */238, 1},
+ {/*TCM_ORD_LoadKey, */239, 1},
+ {/*TCM_ORD_CreateEndorSeMentKeyPair, */120, 2},
+ {/*TCM_ORD_CreateRevocableEK, */127, 2},
+ {/*TCM_ORD_ReleaSeECCExchangeSeSSion, */174, 1},
+ {/*TCM_ORD_CreateECCExchangeSeSSion, */175, 1},
+ {/*TCM_ORD_GetKeyECCExchangeSeSSion, */176, 1},
+ {/*TCM_ORD_ActivatePEK, */217, 1},
+ {/*TCM_ORD_ActivatePEKCert, */218, 1},
+ {0, 0}
+};
+
+static void user_reader_timeout(struct timer_list *t)
+{
+ struct tcm_chip *chip = from_timer(chip, t, user_read_timer);
+
+ schedule_work(&chip->work);
+}
+
+static void timeout_work(struct work_struct *work)
+{
+ struct tcm_chip *chip = container_of(work, struct tcm_chip, work);
+
+ mutex_lock(&chip->buffer_mutex);
+ atomic_set(&chip->data_pending, 0);
+ memset(chip->data_buffer, 0, TCM_BUFSIZE);
+ mutex_unlock(&chip->buffer_mutex);
+}
+
+unsigned long tcm_calc_ordinal_duration(struct tcm_chip *chip,
+ u32 ordinal)
+{
+ int duration_idx = TCM_UNDEFINED;
+ int duration = 0;
+ int i = 0;
+
+ for (i = 0; i < TCM_MAX_ORDINAL; i++) {
+ /* the list stores only the low byte of each ordinal */
+ if ((ordinal & 0xFF) == TCM_Command_List[i].ordinal) {
+ duration_idx = TCM_Command_List[i].DURATION;
+ break;
+ }
+ }
+
+ if (duration_idx != TCM_UNDEFINED)
+ duration = chip->vendor.duration[duration_idx];
+ if (duration <= 0)
+ return 2 * 60 * HZ;
+ else
+ return duration;
+}
+EXPORT_SYMBOL_GPL(tcm_calc_ordinal_duration);
+
+/*
+ * Internal kernel interface to transmit TCM commands
+ * buff format: TAG(2 bytes) + Total Size(4 bytes ) +
+ * Command Ordinal(4 bytes ) + ......
+ */
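+/*
+ * Example: the GetCapability blob tcm_cap defined below decodes as
+ * tag 0x00c1 (TCM_TAG_RQU_COMMAND), total size 0x00000016 (22 bytes),
+ * ordinal 0x00008065 (TCM_ORD_GetCapability).
+ */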
+static ssize_t tcm_transmit(struct tcm_chip *chip, const char *buf,
+ size_t bufsiz)
+{
+ ssize_t rc = 0;
+ u32 count = 0, ordinal = 0;
+ unsigned long stop = 0;
+
+ count = be32_to_cpu(*((__be32 *)(buf + 2))); /* buff size */
+ ordinal = be32_to_cpu(*((__be32 *)(buf + 6))); /* command ordinal */
+
+ if (count == 0)
+ return -ENODATA;
+ if (count > bufsiz) { /* buf size error: invalid buf stream */
+ dev_err(chip->dev, "invalid count value %x, %zx\n",
+ count, bufsiz);
+ return -E2BIG;
+ }
+
+ mutex_lock(&chip->tcm_mutex); /* enter mutex */
+
+ rc = chip->vendor.send(chip, (u8 *)buf, count);
+ if (rc < 0) {
+ dev_err(chip->dev, "%s: tcm_send: error %zd\n",
+ __func__, rc);
+ goto out;
+ }
+
+ if (chip->vendor.irq)
+ goto out_recv;
+
+ stop = jiffies + tcm_calc_ordinal_duration(chip,
+ ordinal); /* cmd duration */
+ do {
+ u8 status = chip->vendor.status(chip);
+
+ if ((status & chip->vendor.req_complete_mask) ==
+ chip->vendor.req_complete_val)
+ goto out_recv;
+
+ if (status == chip->vendor.req_canceled) {
+ dev_err(chip->dev, "Operation Canceled\n");
+ rc = -ECANCELED;
+ goto out;
+ }
+
+ msleep(TCM_TIMEOUT); /* CHECK */
+ rmb();
+ } while (time_before(jiffies, stop));
+ /* time out */
+ chip->vendor.cancel(chip);
+ dev_err(chip->dev, "Operation Timed out\n");
+ rc = -ETIME;
+ goto out;
+
+out_recv:
+ rc = chip->vendor.recv(chip, (u8 *)buf, bufsiz);
+ if (rc < 0)
+ dev_err(chip->dev, "%s: tcm_recv: error %zd\n",
+ __func__, rc);
+out:
+ mutex_unlock(&chip->tcm_mutex);
+ return rc;
+}
+
+#define TCM_DIGEST_SIZE 32
+#define TCM_ERROR_SIZE 10
+#define TCM_RET_CODE_IDX 6
+#define TCM_GET_CAP_RET_SIZE_IDX 10
+#define TCM_GET_CAP_RET_UINT32_1_IDX 14
+#define TCM_GET_CAP_RET_UINT32_2_IDX 18
+#define TCM_GET_CAP_RET_UINT32_3_IDX 22
+#define TCM_GET_CAP_RET_UINT32_4_IDX 26
+#define TCM_GET_CAP_PERM_DISABLE_IDX 16
+#define TCM_GET_CAP_PERM_INACTIVE_IDX 18
+#define TCM_GET_CAP_RET_BOOL_1_IDX 14
+#define TCM_GET_CAP_TEMP_INACTIVE_IDX 16
+
+#define TCM_CAP_IDX 13
+#define TCM_CAP_SUBCAP_IDX 21
+
+enum tcm_capabilities {
+ TCM_CAP_FLAG = 4,
+ TCM_CAP_PROP = 5,
+};
+
+enum tcm_sub_capabilities {
+ TCM_CAP_PROP_PCR = 0x1, /* tcm 0x101 */
+ TCM_CAP_PROP_MANUFACTURER = 0x3, /* tcm 0x103 */
+ TCM_CAP_FLAG_PERM = 0x8, /* tcm 0x108 */
+ TCM_CAP_FLAG_VOL = 0x9, /* tcm 0x109 */
+ TCM_CAP_PROP_OWNER = 0x11, /* tcm 0x101 */
+ TCM_CAP_PROP_TIS_TIMEOUT = 0x15, /* tcm 0x115 */
+ TCM_CAP_PROP_TIS_DURATION = 0x20, /* tcm 0x120 */
+};
+
+/*
+ * This is a semi generic GetCapability command for use
+ * with the capability type TCM_CAP_PROP or TCM_CAP_FLAG
+ * and their associated sub_capabilities.
+ */
+
+static const u8 tcm_cap[] = {
+ 0, 193, /* TCM_TAG_RQU_COMMAND 0xc1*/
+ 0, 0, 0, 22, /* length */
+ 0, 0, 128, 101, /* TCM_ORD_GetCapability */
+ 0, 0, 0, 0, /* TCM_CAP_<TYPE> */
+ 0, 0, 0, 4, /* TCM_CAP_SUB_<TYPE> size */
+ 0, 0, 1, 0 /* TCM_CAP_SUB_<TYPE> */
+};
+
+static ssize_t transmit_cmd(struct tcm_chip *chip, u8 *data, int len,
+ char *desc)
+{
+ int err = 0;
+
+ len = tcm_transmit(chip, data, len);
+ if (len < 0)
+ return len;
+ if (len == TCM_ERROR_SIZE) {
+ err = be32_to_cpu(*((__be32 *)(data + TCM_RET_CODE_IDX)));
+ dev_dbg(chip->dev, "A TCM error (%d) occurred %s\n", err, desc);
+ return err;
+ }
+ return 0;
+}
+
+/*
+ * Get default timeout values from the TCM via GetCapability with the
+ * TCM_CAP_PROP_TIS_TIMEOUT property.
+ */
+void tcm_get_timeouts(struct tcm_chip *chip)
+{
+ u8 data[max_t(int, ARRAY_SIZE(tcm_cap), 30)];
+ ssize_t rc = 0;
+ u32 timeout = 0;
+
+ memcpy(data, tcm_cap, sizeof(tcm_cap));
+ data[TCM_CAP_IDX] = TCM_CAP_PROP;
+ data[TCM_CAP_SUBCAP_IDX] = TCM_CAP_PROP_TIS_TIMEOUT;
+
+ rc = transmit_cmd(chip, data, sizeof(data),
+ "attempting to determine the timeouts");
+ if (rc)
+ goto duration;
+
+ if (be32_to_cpu(*((__be32 *)(data + TCM_GET_CAP_RET_SIZE_IDX))) !=
+ 4 * sizeof(u32))
+ goto duration;
+
+ /* Don't overwrite default if value is 0 */
+ timeout = be32_to_cpu(*((__be32 *)(data + TCM_GET_CAP_RET_UINT32_1_IDX)));
+ if (timeout)
+ chip->vendor.timeout_a = msecs_to_jiffies(timeout);
+ timeout = be32_to_cpu(*((__be32 *)(data + TCM_GET_CAP_RET_UINT32_2_IDX)));
+ if (timeout)
+ chip->vendor.timeout_b = msecs_to_jiffies(timeout);
+ timeout = be32_to_cpu(*((__be32 *)(data + TCM_GET_CAP_RET_UINT32_3_IDX)));
+ if (timeout)
+ chip->vendor.timeout_c = msecs_to_jiffies(timeout);
+ timeout = be32_to_cpu(*((__be32 *)(data + TCM_GET_CAP_RET_UINT32_4_IDX)));
+ if (timeout)
+ chip->vendor.timeout_d = msecs_to_jiffies(timeout);
+
+duration:
+ memcpy(data, tcm_cap, sizeof(tcm_cap));
+ data[TCM_CAP_IDX] = TCM_CAP_PROP;
+ data[TCM_CAP_SUBCAP_IDX] = TCM_CAP_PROP_TIS_DURATION;
+
+ rc = transmit_cmd(chip, data, sizeof(data),
+ "attempting to determine the durations");
+ if (rc)
+ return;
+
+ if (be32_to_cpu(*((__be32 *)(data + TCM_GET_CAP_RET_SIZE_IDX))) !=
+ 3 * sizeof(u32))
+ return;
+
+ chip->vendor.duration[TCM_SHORT] =
+ msecs_to_jiffies(be32_to_cpu(*((__be32 *)(data +
+ TCM_GET_CAP_RET_UINT32_1_IDX))));
+ chip->vendor.duration[TCM_MEDIUM] =
+ msecs_to_jiffies(be32_to_cpu(*((__be32 *)(data +
+ TCM_GET_CAP_RET_UINT32_2_IDX))));
+ chip->vendor.duration[TCM_LONG] =
+ msecs_to_jiffies(be32_to_cpu(*((__be32 *)(data +
+ TCM_GET_CAP_RET_UINT32_3_IDX))));
+}
+EXPORT_SYMBOL_GPL(tcm_get_timeouts);
+
+ssize_t tcm_show_enabled(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ u8 data[max_t(int, ARRAY_SIZE(tcm_cap), 35)];
+ ssize_t rc = 0;
+ struct tcm_chip *chip = dev_get_drvdata(dev);
+
+ if (chip == NULL)
+ return -ENODEV;
+
+ memcpy(data, tcm_cap, sizeof(tcm_cap));
+ data[TCM_CAP_IDX] = TCM_CAP_FLAG;
+ data[TCM_CAP_SUBCAP_IDX] = TCM_CAP_FLAG_PERM;
+
+ rc = transmit_cmd(chip, data, sizeof(data),
+ "attemtping to determine the permanent state");
+ if (rc)
+ return 0;
+ if (data[TCM_GET_CAP_PERM_DISABLE_IDX])
+ return sprintf(buf, "disable\n");
+ else
+ return sprintf(buf, "enable\n");
+}
+EXPORT_SYMBOL_GPL(tcm_show_enabled);
+
+ssize_t tcm_show_active(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ u8 data[max_t(int, ARRAY_SIZE(tcm_cap), 35)];
+ ssize_t rc = 0;
+ struct tcm_chip *chip = dev_get_drvdata(dev);
+
+ if (chip == NULL)
+ return -ENODEV;
+
+ memcpy(data, tcm_cap, sizeof(tcm_cap));
+ data[TCM_CAP_IDX] = TCM_CAP_FLAG;
+ data[TCM_CAP_SUBCAP_IDX] = TCM_CAP_FLAG_PERM;
+
+ rc = transmit_cmd(chip, data, sizeof(data),
+ "attemtping to determine the permanent state");
+ if (rc)
+ return 0;
+ if (data[TCM_GET_CAP_PERM_INACTIVE_IDX])
+ return sprintf(buf, "deactivated\n");
+ else
+ return sprintf(buf, "activated\n");
+}
+EXPORT_SYMBOL_GPL(tcm_show_active);
+
+ssize_t tcm_show_owned(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ u8 data[sizeof(tcm_cap)];
+ ssize_t rc = 0;
+ struct tcm_chip *chip = dev_get_drvdata(dev);
+
+ if (chip == NULL)
+ return -ENODEV;
+
+ memcpy(data, tcm_cap, sizeof(tcm_cap));
+ data[TCM_CAP_IDX] = TCM_CAP_PROP;
+ data[TCM_CAP_SUBCAP_IDX] = TCM_CAP_PROP_OWNER;
+
+ rc = transmit_cmd(chip, data, sizeof(data),
+ "attempting to determine the owner state");
+ if (rc)
+ return 0;
+ if (data[TCM_GET_CAP_RET_BOOL_1_IDX])
+ return sprintf(buf, "Owner installed\n");
+ else
+ return sprintf(buf, "Owner have not installed\n");
+}
+EXPORT_SYMBOL_GPL(tcm_show_owned);
+
+ssize_t tcm_show_temp_deactivated(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ u8 data[sizeof(tcm_cap)];
+ ssize_t rc = 0;
+ struct tcm_chip *chip = dev_get_drvdata(dev);
+
+ if (chip == NULL)
+ return -ENODEV;
+
+ memcpy(data, tcm_cap, sizeof(tcm_cap));
+ data[TCM_CAP_IDX] = TCM_CAP_FLAG;
+ data[TCM_CAP_SUBCAP_IDX] = TCM_CAP_FLAG_VOL;
+
+ rc = transmit_cmd(chip, data, sizeof(data),
+ "attempting to determine the temporary state");
+ if (rc)
+ return 0;
+ if (data[TCM_GET_CAP_TEMP_INACTIVE_IDX])
+ return sprintf(buf, "Temp deactivated\n");
+ else
+ return sprintf(buf, "activated\n");
+}
+EXPORT_SYMBOL_GPL(tcm_show_temp_deactivated);
+
+static const u8 pcrread[] = {
+ 0, 193, /* TCM_TAG_RQU_COMMAND */
+ 0, 0, 0, 14, /* length */
+ 0, 0, 128, 21, /* TCM_ORD_PcrRead */
+ 0, 0, 0, 0 /* PCR index */
+};
+
+ssize_t tcm_show_pcrs(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ u8 data[1024];
+ ssize_t rc = 0;
+ int i = 0, j = 0, num_pcrs = 0;
+ __be32 index = 0;
+ char *str = buf;
+ struct tcm_chip *chip = dev_get_drvdata(dev);
+
+ if (chip == NULL)
+ return -ENODEV;
+
+ memcpy(data, tcm_cap, sizeof(tcm_cap));
+ data[TCM_CAP_IDX] = TCM_CAP_PROP;
+ data[TCM_CAP_SUBCAP_IDX] = TCM_CAP_PROP_PCR;
+
+ rc = transmit_cmd(chip, data, sizeof(data),
+ "attempting to determine the number of PCRS");
+ if (rc)
+ return 0;
+
+ num_pcrs = be32_to_cpu(*((__be32 *)(data + 14)));
+ for (i = 0; i < num_pcrs; i++) {
+ memcpy(data, pcrread, sizeof(pcrread));
+ index = cpu_to_be32(i);
+ memcpy(data + 10, &index, 4);
+ rc = transmit_cmd(chip, data, sizeof(data),
+ "attempting to read a PCR");
+ if (rc)
+ goto out;
+ str += sprintf(str, "PCR-%02d: ", i);
+ for (j = 0; j < TCM_DIGEST_SIZE; j++)
+ str += sprintf(str, "%02X ", *(data + 10 + j));
+ str += sprintf(str, "\n");
+ memset(data, 0, 1024);
+ }
+out:
+ return str - buf;
+}
+EXPORT_SYMBOL_GPL(tcm_show_pcrs);
+
+#define READ_PUBEK_RESULT_SIZE 128
+static const u8 readpubek[] = {
+ 0, 193, /* TCM_TAG_RQU_COMMAND */
+ 0, 0, 0, 42, /* length */
+ 0, 0, 128, 124, /* TCM_ORD_ReadPubek */
+ 0, 0, 0, 0, 0, 0, 0, 0, /* NONCE */
+ 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0
+};
+
+ssize_t tcm_show_pubek(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ u8 data[READ_PUBEK_RESULT_SIZE] = {0};
+ ssize_t err = 0;
+ int i = 0, rc = 0;
+ char *str = buf;
+ struct tcm_chip *chip = dev_get_drvdata(dev);
+
+ if (chip == NULL)
+ return -ENODEV;
+
+ memcpy(data, readpubek, sizeof(readpubek));
+
+ err = transmit_cmd(chip, data, sizeof(data),
+ "attempting to read the PUBEK");
+ if (err)
+ goto out;
+
+ str += sprintf(str, "PUBEK:");
+ for (i = 0; i < 65; i++) {
+ if (i % 16 == 0)
+ str += sprintf(str, "\n");
+ str += sprintf(str, "%02X ", data[i+10]);
+ }
+
+ str += sprintf(str, "\n");
+out:
+ rc = str - buf;
+ return rc;
+}
+EXPORT_SYMBOL_GPL(tcm_show_pubek);
+
+#define CAP_VERSION_1_1 6
+#define CAP_VERSION_1_2 0x1A
+#define CAP_VERSION_IDX 13
+static const u8 cap_version[] = {
+ 0, 193, /* TCM_TAG_RQU_COMMAND */
+ 0, 0, 0, 18, /* length */
+ 0, 0, 128, 101, /* TCM_ORD_GetCapability */
+ 0, 0, 0, 0,
+ 0, 0, 0, 0
+};
+
+ssize_t tcm_show_caps(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ u8 data[max_t(int, max(ARRAY_SIZE(tcm_cap), ARRAY_SIZE(cap_version)), 30)];
+ ssize_t rc = 0;
+ char *str = buf;
+ struct tcm_chip *chip = dev_get_drvdata(dev);
+
+ if (chip == NULL)
+ return -ENODEV;
+
+ memcpy(data, tcm_cap, sizeof(tcm_cap));
+ data[TCM_CAP_IDX] = TCM_CAP_PROP;
+ data[TCM_CAP_SUBCAP_IDX] = TCM_CAP_PROP_MANUFACTURER;
+
+ rc = transmit_cmd(chip, data, sizeof(data),
+ "attempting to determine the manufacturer");
+ if (rc)
+ return 0;
+
+ str += sprintf(str, "Manufacturer: 0x%x\n",
+ be32_to_cpu(*((__be32 *)(data + TCM_GET_CAP_RET_UINT32_1_IDX))));
+
+ memcpy(data, cap_version, sizeof(cap_version));
+ data[CAP_VERSION_IDX] = CAP_VERSION_1_1;
+ rc = transmit_cmd(chip, data, sizeof(data),
+ "attempting to determine the 1.1 version");
+ if (rc)
+ goto out;
+
+ str += sprintf(str, "Firmware version: %02X.%02X.%02X.%02X\n",
+ (int)data[14], (int)data[15], (int)data[16],
+ (int)data[17]);
+
+out:
+ return str - buf;
+}
+EXPORT_SYMBOL_GPL(tcm_show_caps);
+
+ssize_t tcm_store_cancel(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct tcm_chip *chip = dev_get_drvdata(dev);
+
+ if (chip == NULL)
+ return 0;
+
+ chip->vendor.cancel(chip);
+ return count;
+}
+EXPORT_SYMBOL_GPL(tcm_store_cancel);
+
+/*
+ * Device file system interface to the TCM:
+ * when an app calls open() in user space, this function responds.
+ */
+int tcm_open(struct inode *inode, struct file *file)
+{
+ int rc = 0, minor = iminor(inode);
+ struct tcm_chip *chip = NULL, *pos = NULL;
+
+ spin_lock(&driver_lock);
+
+ list_for_each_entry(pos, &tcm_chip_list, list) {
+ if (pos->vendor.miscdev.minor == minor) {
+ chip = pos;
+ break;
+ }
+ }
+
+ if (chip == NULL) {
+ rc = -ENODEV;
+ goto err_out;
+ }
+
+ if (chip->num_opens) {
+ dev_dbg(chip->dev, "Another process owns this TCM\n");
+ rc = -EBUSY;
+ goto err_out;
+ }
+
+ chip->num_opens++;
+ get_device(chip->dev);
+
+ spin_unlock(&driver_lock);
+
+ chip->data_buffer = kmalloc(TCM_BUFSIZE * sizeof(u8), GFP_KERNEL);
+ if (chip->data_buffer == NULL) {
+ chip->num_opens--;
+ put_device(chip->dev);
+ return -ENOMEM;
+ }
+
+ atomic_set(&chip->data_pending, 0);
+
+ file->private_data = chip;
+ return 0;
+
+err_out:
+ spin_unlock(&driver_lock);
+ return rc;
+}
+EXPORT_SYMBOL_GPL(tcm_open);
+
+int tcm_release(struct inode *inode, struct file *file)
+{
+ struct tcm_chip *chip = file->private_data;
+
+ spin_lock(&driver_lock);
+ file->private_data = NULL;
+ chip->num_opens--;
+ del_singleshot_timer_sync(&chip->user_read_timer);
+ flush_work(&chip->work);
+ atomic_set(&chip->data_pending, 0);
+ put_device(chip->dev);
+ kfree(chip->data_buffer);
+ spin_unlock(&driver_lock);
+ return 0;
+}
+EXPORT_SYMBOL_GPL(tcm_release);
+
+ssize_t tcm_write(struct file *file, const char __user *buf,
+ size_t size, loff_t *off)
+{
+ struct tcm_chip *chip = file->private_data;
+ int in_size = size, out_size;
+
+ /*
+ * cannot perform a write until the read has cleared
+ * either via tcm_read or a user_read_timer timeout
+ */
+ while (atomic_read(&chip->data_pending) != 0)
+ msleep(TCM_TIMEOUT);
+
+ mutex_lock(&chip->buffer_mutex);
+
+ if (in_size > TCM_BUFSIZE)
+ in_size = TCM_BUFSIZE;
+
+ if (copy_from_user(chip->data_buffer, (void __user *)buf, in_size)) {
+ mutex_unlock(&chip->buffer_mutex);
+ return -EFAULT;
+ }
+
+ /* atomic tcm command send and result receive */
+ out_size = tcm_transmit(chip, chip->data_buffer, TCM_BUFSIZE);
+
+ if (out_size >= 0) {
+ atomic_set(&chip->data_pending, out_size);
+ mutex_unlock(&chip->buffer_mutex);
+
+ /* Set a timeout by which the reader must come claim the result */
+ mod_timer(&chip->user_read_timer, jiffies + (60 * HZ));
+ } else
+ mutex_unlock(&chip->buffer_mutex);
+
+ return in_size;
+}
+EXPORT_SYMBOL_GPL(tcm_write);
+
+ssize_t tcm_read(struct file *file, char __user *buf,
+ size_t size, loff_t *off)
+{
+ struct tcm_chip *chip = file->private_data;
+ int ret_size = 0;
+
+ del_singleshot_timer_sync(&chip->user_read_timer);
+ flush_work(&chip->work);
+ ret_size = atomic_read(&chip->data_pending);
+ atomic_set(&chip->data_pending, 0);
+ if (ret_size > 0) { /* relay data */
+ if (size < ret_size)
+ ret_size = size;
+
+ mutex_lock(&chip->buffer_mutex);
+ if (copy_to_user(buf, chip->data_buffer, ret_size))
+ ret_size = -EFAULT;
+ mutex_unlock(&chip->buffer_mutex);
+ }
+
+ return ret_size;
+}
+EXPORT_SYMBOL_GPL(tcm_read);
+
+void tcm_remove_hardware(struct device *dev)
+{
+ struct tcm_chip *chip = dev_get_drvdata(dev);
+
+ if (chip == NULL) {
+ dev_err(dev, "No device data found\n");
+ return;
+ }
+
+ spin_lock(&driver_lock);
+ list_del(&chip->list);
+ spin_unlock(&driver_lock);
+
+ dev_set_drvdata(dev, NULL);
+ misc_deregister(&chip->vendor.miscdev);
+ kfree(chip->vendor.miscdev.name);
+
+ sysfs_remove_group(&dev->kobj, chip->vendor.attr_group);
+ /* tcm_bios_log_teardown(chip->bios_dir); */
+
+ clear_bit(chip->dev_num, dev_mask);
+ kfree(chip);
+ put_device(dev);
+}
+EXPORT_SYMBOL_GPL(tcm_remove_hardware);
+
+static u8 savestate[] = {
+ 0, 193, /* TCM_TAG_RQU_COMMAND */
+ 0, 0, 0, 10, /* blob length (in bytes) */
+ 0, 0, 128, 152 /* TCM_ORD_SaveState */
+};
+
+/*
+ * We are about to suspend. Save the TCM state
+ * so that it can be restored.
+ */
+int tcm_pm_suspend(struct device *dev, pm_message_t pm_state)
+{
+ struct tcm_chip *chip = dev_get_drvdata(dev);
+
+ if (chip == NULL)
+ return -ENODEV;
+
+ tcm_transmit(chip, savestate, sizeof(savestate));
+ return 0;
+}
+EXPORT_SYMBOL_GPL(tcm_pm_suspend);
+
+int tcm_pm_suspend_p(struct device *dev)
+{
+ struct tcm_chip *chip = dev_get_drvdata(dev);
+
+ if (chip == NULL)
+ return -ENODEV;
+
+ tcm_transmit(chip, savestate, sizeof(savestate));
+ return 0;
+}
+EXPORT_SYMBOL_GPL(tcm_pm_suspend_p);
+
+void tcm_startup(struct tcm_chip *chip)
+{
+ u8 start_up[] = {
+ 0, 193, /* TCM_TAG_RQU_COMMAND */
+ 0, 0, 0, 12, /* blob length (in bytes) */
+ 0, 0, 128, 153, /* TCM_ORD_Startup */
+ 0, 1
+ };
+ if (chip == NULL)
+ return;
+ tcm_transmit(chip, start_up, sizeof(start_up));
+}
+EXPORT_SYMBOL_GPL(tcm_startup);
+
+/*
+ * Resume from a power save. The BIOS already restored
+ * the TCM state.
+ */
+int tcm_pm_resume(struct device *dev)
+{
+ u8 start_up[] = {
+ 0, 193, /* TCM_TAG_RQU_COMMAND */
+ 0, 0, 0, 12, /* blob length (in bytes) */
+ 0, 0, 128, 153, /* TCM_ORD_Startup */
+ 0, 1
+ };
+ struct tcm_chip *chip = dev_get_drvdata(dev);
+ /* dev_info(chip->dev ,"--call tcm_pm_resume\n"); */
+ if (chip == NULL)
+ return -ENODEV;
+
+ tcm_transmit(chip, start_up, sizeof(start_up));
+ return 0;
+}
+EXPORT_SYMBOL_GPL(tcm_pm_resume);
+
+/*
+ * Called from tcm_<specific>.c probe function only for devices
+ * the driver has determined it should claim. Prior to calling
+ * this function the specific probe function has called pci_enable_device
+ * upon errant exit from this function specific probe function should call
+ * pci_disable_device
+ */
+struct tcm_chip *tcm_register_hardware(struct device *dev,
+ const struct tcm_vendor_specific *entry)
+{
+ int rc;
+#define DEVNAME_SIZE 7
+
+ char *devname = NULL;
+ struct tcm_chip *chip = NULL;
+
+ /* Driver specific per-device data */
+ chip = kzalloc(sizeof(*chip), GFP_KERNEL);
+ if (chip == NULL) {
+ dev_err(dev, "chip kzalloc err\n");
+ return NULL;
+ }
+
+ mutex_init(&chip->buffer_mutex);
+ mutex_init(&chip->tcm_mutex);
+ INIT_LIST_HEAD(&chip->list);
+
+ INIT_WORK(&chip->work, timeout_work);
+ timer_setup(&chip->user_read_timer, user_reader_timeout, 0);
+
+ memcpy(&chip->vendor, entry, sizeof(struct tcm_vendor_specific));
+
+ chip->dev_num = find_first_zero_bit(dev_mask, TCM_NUM_DEVICES);
+
+ if (chip->dev_num >= TCM_NUM_DEVICES) {
+ dev_err(dev, "No available tcm device numbers\n");
+ kfree(chip);
+ return NULL;
+ } else if (chip->dev_num == 0)
+ chip->vendor.miscdev.minor = TCM_MINOR;
+ else
+ chip->vendor.miscdev.minor = MISC_DYNAMIC_MINOR;
+
+ set_bit(chip->dev_num, dev_mask);
+
+ devname = kmalloc(DEVNAME_SIZE, GFP_KERNEL);
+ if (devname == NULL) {
+ clear_bit(chip->dev_num, dev_mask);
+ kfree(chip);
+ return NULL;
+ }
+ scnprintf(devname, DEVNAME_SIZE, "%s%d", "tcm", chip->dev_num);
+ chip->vendor.miscdev.name = devname;
+
+ /* chip->vendor.miscdev.dev = dev; */
+
+ chip->dev = get_device(dev);
+
+ if (misc_register(&chip->vendor.miscdev)) {
+ dev_err(chip->dev,
+ "unable to misc_register %s, minor %d\n",
+ chip->vendor.miscdev.name,
+ chip->vendor.miscdev.minor);
+ put_device(dev);
+ clear_bit(chip->dev_num, dev_mask);
+ kfree(chip);
+ kfree(devname);
+ return NULL;
+ }
+
+ spin_lock(&driver_lock);
+ dev_set_drvdata(dev, chip);
+ list_add(&chip->list, &tcm_chip_list);
+ spin_unlock(&driver_lock);
+
+ rc = sysfs_create_group(&dev->kobj, chip->vendor.attr_group);
+ if (rc)
+ dev_err(dev, "failed to create sysfs attributes\n");
+ /* chip->bios_dir = tcm_bios_log_setup(devname); */
+
+ return chip;
+}
+EXPORT_SYMBOL_GPL(tcm_register_hardware);
+
+static int __init tcm_init_module(void)
+{
+ return 0;
+}
+
+static void __exit tcm_exit_module(void)
+{
+}
+
+module_init(tcm_init_module);
+module_exit(tcm_exit_module);
+
+MODULE_AUTHOR("Nationz Technologies Inc.");
+MODULE_DESCRIPTION("TCM Driver");
+MODULE_VERSION("1.1.1.0");
+MODULE_LICENSE("GPL");
diff --git a/drivers/staging/gmjstcm/tcm.h b/drivers/staging/gmjstcm/tcm.h
new file mode 100644
index 000000000000..b8cafe78d590
--- /dev/null
+++ b/drivers/staging/gmjstcm/tcm.h
@@ -0,0 +1,123 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2009 Nationz Technologies Inc.
+ *
+ */
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/pci.h>
+#include <linux/delay.h>
+#include <linux/fs.h>
+#include <linux/miscdevice.h>
+#include <linux/platform_device.h>
+#include <linux/io.h>
+
+struct device;
+struct tcm_chip;
+
+enum tcm_timeout {
+ TCM_TIMEOUT = 5,
+};
+
+/* TCM addresses */
+enum tcm_addr {
+ TCM_SUPERIO_ADDR = 0x2E,
+ TCM_ADDR = 0x4E,
+};
+
+extern ssize_t tcm_show_pubek(struct device *, struct device_attribute *attr,
+ char *);
+extern ssize_t tcm_show_pcrs(struct device *, struct device_attribute *attr,
+ char *);
+extern ssize_t tcm_show_caps(struct device *, struct device_attribute *attr,
+ char *);
+extern ssize_t tcm_store_cancel(struct device *, struct device_attribute *attr,
+ const char *, size_t);
+extern ssize_t tcm_show_enabled(struct device *, struct device_attribute *attr,
+ char *);
+extern ssize_t tcm_show_active(struct device *, struct device_attribute *attr,
+ char *);
+extern ssize_t tcm_show_owned(struct device *, struct device_attribute *attr,
+ char *);
+extern ssize_t tcm_show_temp_deactivated(struct device *,
+ struct device_attribute *attr, char *);
+
+struct tcm_vendor_specific {
+ const u8 req_complete_mask;
+ const u8 req_complete_val;
+ const u8 req_canceled;
+ void __iomem *iobase; /* ioremapped address */
+ void __iomem *iolbc;
+ unsigned long base; /* TCM base address */
+
+ int irq;
+
+ int region_size;
+ int have_region;
+
+ int (*recv)(struct tcm_chip *, u8 *, size_t);
+ int (*send)(struct tcm_chip *, u8 *, size_t);
+ void (*cancel)(struct tcm_chip *);
+ u8 (*status)(struct tcm_chip *);
+ struct miscdevice miscdev;
+ struct attribute_group *attr_group;
+ struct list_head list;
+ int locality;
+ unsigned long timeout_a, timeout_b, timeout_c, timeout_d; /* jiffies */
+ unsigned long duration[3]; /* jiffies */
+
+ wait_queue_head_t read_queue;
+ wait_queue_head_t int_queue;
+};
+
+struct tcm_chip {
+ struct device *dev; /* Device stuff */
+
+ int dev_num; /* /dev/tcm# */
+ int num_opens; /* only one allowed */
+ int time_expired;
+
+ /* Data passed to and from the tcm via the read/write calls */
+ u8 *data_buffer;
+ atomic_t data_pending;
+ struct mutex buffer_mutex;
+
+ struct timer_list user_read_timer; /* user needs to claim result */
+ struct work_struct work;
+ struct mutex tcm_mutex; /* tcm is processing */
+
+ struct tcm_vendor_specific vendor;
+
+ struct dentry **bios_dir;
+
+ struct list_head list;
+};
+
+#define to_tcm_chip(n) container_of(n, struct tcm_chip, vendor)
+
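+/*
+ * SuperIO-style indexed register access over LPC: write the register
+ * index to the base port, then read or write its value at base + 1.
+ */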
+static inline int tcm_read_index(int base, int index)
+{
+ outb(index, base);
+ return inb(base+1) & 0xFF;
+}
+
+static inline void tcm_write_index(int base, int index, int value)
+{
+ outb(index, base);
+ outb(value & 0xFF, base+1);
+}
+extern void tcm_startup(struct tcm_chip *);
+extern void tcm_get_timeouts(struct tcm_chip *);
+extern unsigned long tcm_calc_ordinal_duration(struct tcm_chip *, u32);
+extern struct tcm_chip *tcm_register_hardware(struct device *,
+ const struct tcm_vendor_specific *);
+extern int tcm_open(struct inode *, struct file *);
+extern int tcm_release(struct inode *, struct file *);
+extern ssize_t tcm_write(struct file *, const char __user *, size_t,
+ loff_t *);
+extern ssize_t tcm_read(struct file *, char __user *, size_t, loff_t *);
+extern void tcm_remove_hardware(struct device *);
+extern int tcm_pm_suspend(struct device *, pm_message_t);
+extern int tcm_pm_suspend_p(struct device *);
+extern int tcm_pm_resume(struct device *);
+
diff --git a/drivers/staging/gmjstcm/tcm_tis_spi.c b/drivers/staging/gmjstcm/tcm_tis_spi.c
new file mode 100644
index 000000000000..db30a5b4c47d
--- /dev/null
+++ b/drivers/staging/gmjstcm/tcm_tis_spi.c
@@ -0,0 +1,868 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020 Kylin Tech. Co., Ltd.
+ */
+
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/interrupt.h>
+#include <linux/wait.h>
+#include <linux/acpi.h>
+#include <linux/spi/spi.h>
+
+#include "tcm.h"
+
+#if !defined(CONFIG_KYLINOS_SERVER) && !defined(CONFIG_KYLINOS_DESKTOP)
+static int is_ft_all(void)
+{
+ return 0;
+}
+#endif
+
+#define TCM_HEADER_SIZE 10
+
+static bool tcm_debug;
+module_param_named(debug, tcm_debug, bool, 0600);
+MODULE_PARM_DESC(debug, "Turn TCM debugging mode on and off");
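+/*
+ * With the module built as tcm_tis_spi, this flag should be settable
+ * at load time ("modprobe tcm_tis_spi debug=1") or at runtime via
+ * /sys/module/tcm_tis_spi/parameters/debug (mode 0600).
+ */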
+
+#define tcm_dbg(fmt, args...) \
+do { \
+ if (tcm_debug) \
+ pr_err(fmt, ## args); \
+} while (0)
+
+enum tis_access {
+ TCM_ACCESS_VALID = 0x80,
+ TCM_ACCESS_ACTIVE_LOCALITY = 0x20,
+ TCM_ACCESS_REQUEST_PENDING = 0x04,
+ TCM_ACCESS_REQUEST_USE = 0x02,
+};
+
+enum tis_status {
+ TCM_STS_VALID = 0x80,
+ TCM_STS_COMMAND_READY = 0x40,
+ TCM_STS_GO = 0x20,
+ TCM_STS_DATA_AVAIL = 0x10,
+ TCM_STS_DATA_EXPECT = 0x08,
+};
+
+enum tis_int_flags {
+ TCM_GLOBAL_INT_ENABLE = 0x80000000,
+ TCM_INTF_BURST_COUNT_STATIC = 0x100,
+ TCM_INTF_CMD_READY_INT = 0x080,
+ TCM_INTF_INT_EDGE_FALLING = 0x040,
+ TCM_INTF_INT_EDGE_RISING = 0x020,
+ TCM_INTF_INT_LEVEL_LOW = 0x010,
+ TCM_INTF_INT_LEVEL_HIGH = 0x008,
+ TCM_INTF_LOCALITY_CHANGE_INT = 0x004,
+ TCM_INTF_STS_VALID_INT = 0x002,
+ TCM_INTF_DATA_AVAIL_INT = 0x001,
+};
+
+enum tis_defaults {
+ TIS_SHORT_TIMEOUT = 750, /* ms */
+ TIS_LONG_TIMEOUT = 2000, /* 2 sec */
+};
+
+#define TCM_ACCESS(l) (0x0000 | ((l) << 12))
+#define TCM_INT_ENABLE(l) (0x0008 | ((l) << 12)) /* interrupt enable */
+#define TCM_INT_VECTOR(l) (0x000C | ((l) << 12))
+#define TCM_INT_STATUS(l) (0x0010 | ((l) << 12))
+#define TCM_INTF_CAPS(l) (0x0014 | ((l) << 12))
+#define TCM_STS(l) (0x0018 | ((l) << 12))
+#define TCM_DATA_FIFO(l) (0x0024 | ((l) << 12))
+
+#define TCM_DID_VID(l) (0x0F00 | ((l) << 12))
+#define TCM_RID(l) (0x0F04 | ((l) << 12))
+
+#define TIS_MEM_BASE_huawei 0x3fed40000LL
+
+#define MAX_SPI_FRAMESIZE 64
+
+/* Phytium FT-2000A/4 pad-reuse and GPIO register bases (cs workaround) */
+#define _CPU_FT2000A4
+#define REUSE_CONF_REG_BASE 0x28180208
+#define REUSE_GPIO1_A5_BASE 0x28005000
+
+static void __iomem *reuse_conf_reg;
+static void __iomem *gpio1_a5;
+
+/* registered TCM TIS chips */
+static LIST_HEAD(tis_chips);
+static DEFINE_SPINLOCK(tis_lock);
+
+struct chip_data {
+ u8 cs;
+ u8 tmode;
+ u8 type;
+ u8 poll_mode;
+ u16 clk_div;
+ u32 speed_hz;
+ void (*cs_control)(u32 command);
+};
+
+struct tcm_tis_spi_phy {
+ struct spi_device *spi_device;
+ struct completion ready;
+ u8 *iobuf;
+};
+
+static int tcm_tis_spi_transfer(struct device *dev, u32 addr, u16 len,
+ u8 *in, const u8 *out)
+{
+ struct tcm_tis_spi_phy *phy = dev_get_drvdata(dev);
+ int ret = 0;
+ struct spi_message m;
+ struct spi_transfer spi_xfer;
+ u8 transfer_len;
+
+ tcm_dbg("TCM-dbg: %s, addr: 0x%x, len: %x, %s\n",
+ __func__, addr, len, (in) ? "in" : "out");
+
+ spi_bus_lock(phy->spi_device->master);
+
+ /* set gpio1_a5 to LOW */
+ if (is_ft_all() && (phy->spi_device->chip_select == 0)) {
+ iowrite32(0x0, gpio1_a5);
+ }
+
+ while (len) {
+ transfer_len = min_t(u16, len, MAX_SPI_FRAMESIZE);
+
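+ /*
+ * 4-byte frame header of the TIS-over-SPI framing used here:
+ * byte 0 carries the read flag (0x80) plus transfer length
+ * minus one, bytes 1-3 the 24-bit register address (0xd4
+ * selects the TIS register space).
+ */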
+ phy->iobuf[0] = (in ? 0x80 : 0) | (transfer_len - 1);
+ phy->iobuf[1] = 0xd4;
+ phy->iobuf[2] = addr >> 8;
+ phy->iobuf[3] = addr;
+
+ memset(&spi_xfer, 0, sizeof(spi_xfer));
+ spi_xfer.tx_buf = phy->iobuf;
+ spi_xfer.rx_buf = phy->iobuf;
+ spi_xfer.len = 4;
+ spi_xfer.cs_change = 1;
+
+ spi_message_init(&m);
+ spi_message_add_tail(&spi_xfer, &m);
+ ret = spi_sync_locked(phy->spi_device, &m);
+ if (ret < 0)
+ goto exit;
+
+ spi_xfer.cs_change = 0;
+ spi_xfer.len = transfer_len;
+ spi_xfer.delay_usecs = 5;
+
+ if (in) {
+ spi_xfer.tx_buf = NULL;
+ } else if (out) {
+ spi_xfer.rx_buf = NULL;
+ memcpy(phy->iobuf, out, transfer_len);
+ out += transfer_len;
+ }
+
+ spi_message_init(&m);
+ spi_message_add_tail(&spi_xfer, &m);
+ reinit_completion(&phy->ready);
+ ret = spi_sync_locked(phy->spi_device, &m);
+ if (ret < 0)
+ goto exit;
+
+ if (in) {
+ memcpy(in, phy->iobuf, transfer_len);
+ in += transfer_len;
+ }
+
+ len -= transfer_len;
+ }
+
+exit:
+ /* set gpio1_a5 to HIGH */
+ if (is_ft_all() && (phy->spi_device->chip_select == 0)) {
+ iowrite32(0x20, gpio1_a5);
+ }
+
+ spi_bus_unlock(phy->spi_device->master);
+ tcm_dbg("TCM-dbg: ret: %d\n", ret);
+ return ret;
+}
+
+static int tcm_tis_read8(struct device *dev,
+ u32 addr, u16 len, u8 *result)
+{
+ return tcm_tis_spi_transfer(dev, addr, len, result, NULL);
+}
+
+static int tcm_tis_write8(struct device *dev,
+ u32 addr, u16 len, u8 *value)
+{
+ return tcm_tis_spi_transfer(dev, addr, len, NULL, value);
+}
+
+static int tcm_tis_readb(struct device *dev, u32 addr, u8 *value)
+{
+ return tcm_tis_read8(dev, addr, sizeof(u8), value);
+}
+
+static int tcm_tis_writeb(struct device *dev, u32 addr, u8 value)
+{
+ return tcm_tis_write8(dev, addr, sizeof(u8), &value);
+}
+
+static int tcm_tis_readl(struct device *dev, u32 addr, u32 *result)
+{
+ int rc;
+ __le32 result_le;
+
+ rc = tcm_tis_read8(dev, addr, sizeof(u32), (u8 *)&result_le);
+ tcm_dbg("TCM-dbg: result_le: 0x%x\n", result_le);
+ if (!rc)
+ *result = le32_to_cpu(result_le);
+
+ return rc;
+}
+
+static int tcm_tis_writel(struct device *dev, u32 addr, u32 value)
+{
+ int rc;
+ __le32 value_le;
+
+ value_le = cpu_to_le32(value);
+ rc = tcm_tis_write8(dev, addr, sizeof(u32), (u8 *)&value_le);
+
+ return rc;
+}
+
+static int request_locality(struct tcm_chip *chip, int l);
+static void release_locality(struct tcm_chip *chip, int l, int force);
+static void cleanup_tis(void)
+{
+ int ret;
+ u32 inten;
+ struct tcm_vendor_specific *i, *j;
+ struct tcm_chip *chip;
+
+ spin_lock(&tis_lock);
+ list_for_each_entry_safe(i, j, &tis_chips, list) {
+ chip = to_tcm_chip(i);
+ ret = tcm_tis_readl(chip->dev,
+ TCM_INT_ENABLE(chip->vendor.locality), &inten);
+ if (ret < 0)
+ return;
+
+ tcm_tis_writel(chip->dev, TCM_INT_ENABLE(chip->vendor.locality),
+ ~TCM_GLOBAL_INT_ENABLE & inten);
+ release_locality(chip, chip->vendor.locality, 1);
+ }
+ spin_unlock(&tis_lock);
+}
+
+static void tcm_tis_init(struct tcm_chip *chip)
+{
+ int ret;
+ u8 rid;
+ u32 vendor, intfcaps;
+
+ ret = tcm_tis_readl(chip->dev, TCM_DID_VID(0), &vendor);
+ if (ret < 0)
+ return;
+
+ if ((vendor & 0xffff) != 0x19f5 && (vendor & 0xffff) != 0x1B4E)
+ pr_info("there is no Nationz TCM on your computer\n");
+
+ ret = tcm_tis_readb(chip->dev, TCM_RID(0), &rid);
+ if (ret < 0)
+ return;
+ pr_info("kylin: 2019-09-21 1.2 TCM (device-id 0x%X, rev-id %d)\n",
+ vendor >> 16, rid);
+
+ /* Figure out the capabilities */
+ ret = tcm_tis_readl(chip->dev,
+ TCM_INTF_CAPS(chip->vendor.locality), &intfcaps);
+ if (ret < 0)
+ return;
+
+ if (request_locality(chip, 0) != 0)
+ pr_err("tcm request_locality err\n");
+
+ atomic_set(&chip->data_pending, 0);
+}
+
+static void tcm_handle_err(struct tcm_chip *chip)
+{
+ cleanup_tis();
+ tcm_tis_init(chip);
+}
+
+static bool check_locality(struct tcm_chip *chip, int l)
+{
+ int ret;
+ u8 access;
+
+ ret = tcm_tis_readb(chip->dev, TCM_ACCESS(l), &access);
+ tcm_dbg("TCM-dbg: access: 0x%x\n", access);
+ if (ret < 0)
+ return false;
+
+ if ((access & (TCM_ACCESS_ACTIVE_LOCALITY | TCM_ACCESS_VALID)) ==
+ (TCM_ACCESS_ACTIVE_LOCALITY | TCM_ACCESS_VALID)) {
+ chip->vendor.locality = l;
+ return true;
+ }
+
+ return false;
+}
+
+static int request_locality(struct tcm_chip *chip, int l)
+{
+ unsigned long stop;
+
+ if (check_locality(chip, l))
+ return l;
+
+ tcm_tis_writeb(chip->dev, TCM_ACCESS(l), TCM_ACCESS_REQUEST_USE);
+
+ /* wait for burstcount */
+ stop = jiffies + chip->vendor.timeout_a;
+ do {
+ if (check_locality(chip, l))
+ return l;
+ msleep(TCM_TIMEOUT);
+ } while (time_before(jiffies, stop));
+
+ return -1;
+}
+
+static void release_locality(struct tcm_chip *chip, int l, int force)
+{
+ int ret;
+ u8 access;
+
+ ret = tcm_tis_readb(chip->dev, TCM_ACCESS(l), &access);
+ if (ret < 0)
+ return;
+ if (force || (access & (TCM_ACCESS_REQUEST_PENDING | TCM_ACCESS_VALID)) ==
+ (TCM_ACCESS_REQUEST_PENDING | TCM_ACCESS_VALID))
+ tcm_tis_writeb(chip->dev,
+ TCM_ACCESS(l), TCM_ACCESS_ACTIVE_LOCALITY);
+}
+
+static u8 tcm_tis_status(struct tcm_chip *chip)
+{
+ int ret;
+ u8 status;
+
+ ret = tcm_tis_readb(chip->dev,
+ TCM_STS(chip->vendor.locality), &status);
+ tcm_dbg("TCM-dbg: status: 0x%x\n", status);
+ if (ret < 0)
+ return 0;
+
+ return status;
+}
+
+static void tcm_tis_ready(struct tcm_chip *chip)
+{
+ /* this causes the current command to be aborted */
+ tcm_tis_writeb(chip->dev, TCM_STS(chip->vendor.locality),
+ TCM_STS_COMMAND_READY);
+}
+
+static int get_burstcount(struct tcm_chip *chip)
+{
+ int ret;
+ unsigned long stop;
+ u8 tmp, tmp1;
+ int burstcnt = 0;
+
+ /* wait for burstcount */
+ /* which timeout value, spec has 2 answers (c & d) */
+ stop = jiffies + chip->vendor.timeout_d;
+ do {
+ ret = tcm_tis_readb(chip->dev,
+ TCM_STS(chip->vendor.locality) + 1,
+ &tmp);
+ tcm_dbg("TCM-dbg: burstcnt: 0x%x\n", burstcnt);
+ if (ret < 0)
+ return -EINVAL;
+ ret = tcm_tis_readb(chip->dev,
+ (TCM_STS(chip->vendor.locality) + 2),
+ &tmp1);
+ tcm_dbg("TCM-dbg: burstcnt: 0x%x\n", burstcnt);
+ if (ret < 0)
+ return -EINVAL;
+
+ burstcnt = tmp | (tmp1 << 8);
+ if (burstcnt)
+ return burstcnt;
+ msleep(TCM_TIMEOUT);
+ } while (time_before(jiffies, stop));
+
+ return -EBUSY;
+}
+
+static int wait_for_stat(struct tcm_chip *chip, u8 mask,
+ unsigned long timeout,
+ wait_queue_head_t *queue)
+{
+ unsigned long stop;
+ u8 status;
+
+ /* check current status */
+ status = tcm_tis_status(chip);
+ if ((status & mask) == mask)
+ return 0;
+
+ stop = jiffies + timeout;
+ do {
+ msleep(TCM_TIMEOUT);
+ status = tcm_tis_status(chip);
+ if ((status & mask) == mask)
+ return 0;
+ } while (time_before(jiffies, stop));
+
+ return -ETIME;
+}
+
+static int recv_data(struct tcm_chip *chip, u8 *buf, size_t count)
+{
+ int ret;
+ int size = 0, burstcnt;
+
+ while (size < count && wait_for_stat(chip,
+ TCM_STS_DATA_AVAIL | TCM_STS_VALID,
+ chip->vendor.timeout_c,
+ &chip->vendor.read_queue) == 0) {
+ burstcnt = get_burstcount(chip);
+
+ if (burstcnt < 0) {
+ dev_err(chip->dev, "Unable to read burstcount\n");
+ return burstcnt;
+ }
+
+ for (; burstcnt > 0 && size < count; burstcnt--) {
+ ret = tcm_tis_readb(chip->dev,
+ TCM_DATA_FIFO(chip->vendor.locality),
+ &buf[size]);
+ tcm_dbg("TCM-dbg: buf[%d]: 0x%x\n", size, buf[size]);
+ size++;
+ }
+ }
+
+ return size;
+}
+
+static int tcm_tis_recv(struct tcm_chip *chip, u8 *buf, size_t count)
+{
+ int size = 0;
+ int expected, status;
+ unsigned long stop;
+
+ if (count < TCM_HEADER_SIZE) {
+ dev_err(chip->dev, "read size is to small: %d\n", (u32)(count));
+ size = -EIO;
+ goto out;
+ }
+
+ /* read first 10 bytes, including tag, paramsize, and result */
+ size = recv_data(chip, buf, TCM_HEADER_SIZE);
+ if (size < TCM_HEADER_SIZE) {
+ dev_err(chip->dev, "Unable to read header\n");
+ goto out;
+ }
+
+ expected = be32_to_cpu(*(__be32 *)(buf + 2));
+ if (expected > count) {
+ dev_err(chip->dev, "Expected data count\n");
+ size = -EIO;
+ goto out;
+ }
+
+ size += recv_data(chip, &buf[TCM_HEADER_SIZE],
+ expected - TCM_HEADER_SIZE);
+ if (size < expected) {
+ dev_err(chip->dev, "Unable to read remainder of result\n");
+ size = -ETIME;
+ goto out;
+ }
+
+ wait_for_stat(chip, TCM_STS_VALID, chip->vendor.timeout_c,
+ &chip->vendor.int_queue);
+
+ stop = jiffies + chip->vendor.timeout_c;
+ do {
+ msleep(TCM_TIMEOUT);
+ status = tcm_tis_status(chip);
+ if ((status & TCM_STS_DATA_AVAIL) == 0)
+ break;
+
+ } while (time_before(jiffies, stop));
+
+ status = tcm_tis_status(chip);
+ if (status & TCM_STS_DATA_AVAIL) { /* retry? */
+ dev_err(chip->dev, "Error left over data\n");
+ size = -EIO;
+ goto out;
+ }
+
+out:
+ tcm_tis_ready(chip);
+ release_locality(chip, chip->vendor.locality, 0);
+ if (size < 0)
+ tcm_handle_err(chip);
+ return size;
+}
+
+/*
+ * If interrupts are used (signaled by an irq set in the vendor structure)
+ * tcm.c can skip polling for the data to be available as the interrupt is
+ * waited for here
+ */
+static int tcm_tis_send(struct tcm_chip *chip, u8 *buf, size_t len)
+{
+ int rc, status, burstcnt;
+ size_t count = 0;
+ u32 ordinal;
+ unsigned long stop;
+ int send_again = 0;
+
+tcm_tis_send_again:
+ count = 0;
+ if (request_locality(chip, 0) < 0) {
+ dev_err(chip->dev, "send, tcm is busy\n");
+ return -EBUSY;
+ }
+ status = tcm_tis_status(chip);
+
+ if ((status & TCM_STS_COMMAND_READY) == 0) {
+ tcm_tis_ready(chip);
+ if (wait_for_stat(chip, TCM_STS_COMMAND_READY,
+ chip->vendor.timeout_b, &chip->vendor.int_queue) < 0) {
+ dev_err(chip->dev, "send, tcm wait time out1\n");
+ rc = -ETIME;
+ goto out_err;
+ }
+ }
+
+ while (count < len - 1) {
+ burstcnt = get_burstcount(chip);
+ if (burstcnt < 0) {
+ dev_err(chip->dev, "Unable to read burstcount\n");
+ rc = burstcnt;
+ goto out_err;
+ }
+ for (; burstcnt > 0 && count < len - 1; burstcnt--) {
+ tcm_tis_writeb(chip->dev,
+ TCM_DATA_FIFO(chip->vendor.locality), buf[count]);
+ count++;
+ }
+
+ wait_for_stat(chip, TCM_STS_VALID, chip->vendor.timeout_c,
+ &chip->vendor.int_queue);
+ }
+
+ /* write last byte */
+ tcm_tis_writeb(chip->dev,
+ TCM_DATA_FIFO(chip->vendor.locality), buf[count]);
+
+ wait_for_stat(chip, TCM_STS_VALID,
+ chip->vendor.timeout_c, &chip->vendor.int_queue);
+ stop = jiffies + chip->vendor.timeout_c;
+ do {
+ msleep(TCM_TIMEOUT);
+ status = tcm_tis_status(chip);
+ if ((status & TCM_STS_DATA_EXPECT) == 0)
+ break;
+
+ } while (time_before(jiffies, stop));
+
+ if ((status & TCM_STS_DATA_EXPECT) != 0) {
+ dev_err(chip->dev, "send, tcm expect data\n");
+ rc = -EIO;
+ goto out_err;
+ }
+
+ /* go and do it */
+ tcm_tis_writeb(chip->dev, TCM_STS(chip->vendor.locality), TCM_STS_GO);
+
+ ordinal = be32_to_cpu(*((__be32 *)(buf + 6)));
+ if (wait_for_stat(chip, TCM_STS_DATA_AVAIL | TCM_STS_VALID,
+ tcm_calc_ordinal_duration(chip, ordinal),
+ &chip->vendor.read_queue) < 0) {
+ dev_err(chip->dev, "send, tcm wait time out2\n");
+ rc = -ETIME;
+ goto out_err;
+ }
+
+ return len;
+
+out_err:
+ tcm_tis_ready(chip);
+ release_locality(chip, chip->vendor.locality, 0);
+ tcm_handle_err(chip);
+ if (send_again++ < 3) {
+ goto tcm_tis_send_again;
+ }
+
+ dev_err(chip->dev, "kylin send, err: %d\n", rc);
+ return rc;
+}
+
+static const struct file_operations tis_ops = {
+ .owner = THIS_MODULE,
+ .llseek = no_llseek,
+ .open = tcm_open,
+ .read = tcm_read,
+ .write = tcm_write,
+ .release = tcm_release,
+};
+
+static DEVICE_ATTR(pubek, S_IRUGO, tcm_show_pubek, NULL);
+static DEVICE_ATTR(pcrs, S_IRUGO, tcm_show_pcrs, NULL);
+static DEVICE_ATTR(enabled, S_IRUGO, tcm_show_enabled, NULL);
+static DEVICE_ATTR(active, S_IRUGO, tcm_show_active, NULL);
+static DEVICE_ATTR(owned, S_IRUGO, tcm_show_owned, NULL);
+static DEVICE_ATTR(temp_deactivated, S_IRUGO, tcm_show_temp_deactivated,
+ NULL);
+static DEVICE_ATTR(caps, S_IRUGO, tcm_show_caps, NULL);
+static DEVICE_ATTR(cancel, S_IWUSR | S_IWGRP, NULL, tcm_store_cancel);
+
+static struct attribute *tis_attrs[] = {
+ &dev_attr_pubek.attr,
+ &dev_attr_pcrs.attr,
+ &dev_attr_enabled.attr,
+ &dev_attr_active.attr,
+ &dev_attr_owned.attr,
+ &dev_attr_temp_deactivated.attr,
+ &dev_attr_caps.attr,
+ &dev_attr_cancel.attr, NULL,
+};
+
+static struct attribute_group tis_attr_grp = {
+ .attrs = tis_attrs
+};
+
+static struct tcm_vendor_specific tcm_tis = {
+ .status = tcm_tis_status,
+ .recv = tcm_tis_recv,
+ .send = tcm_tis_send,
+ .cancel = tcm_tis_ready,
+ .req_complete_mask = TCM_STS_DATA_AVAIL | TCM_STS_VALID,
+ .req_complete_val = TCM_STS_DATA_AVAIL | TCM_STS_VALID,
+ .req_canceled = TCM_STS_COMMAND_READY,
+ .attr_group = &tis_attr_grp,
+ .miscdev = {
+ .fops = &tis_ops,
+ },
+};
+
+static struct tcm_chip *chip;
+static int tcm_tis_spi_probe(struct spi_device *spi)
+{
+ int ret;
+ u8 revid;
+ u32 vendor, intfcaps;
+ struct tcm_tis_spi_phy *phy;
+ struct chip_data *spi_chip;
+
+ pr_info("TCM(ky): __func__(v=%d) ..\n",
+ 10);
+
+ tcm_dbg("TCM-dbg: %s/%d, enter\n", __func__, __LINE__);
+ phy = devm_kzalloc(&spi->dev, sizeof(struct tcm_tis_spi_phy),
+ GFP_KERNEL);
+ if (!phy)
+ return -ENOMEM;
+
+ phy->iobuf = devm_kmalloc(&spi->dev, MAX_SPI_FRAMESIZE, GFP_KERNEL);
+ if (!phy->iobuf)
+ return -ENOMEM;
+
+ phy->spi_device = spi;
+ init_completion(&phy->ready);
+
+ tcm_dbg("TCM-dbg: %s/%d\n", __func__, __LINE__);
+ /* init spi dev */
+ spi->chip_select = 0; /* cs0 */
+ spi->mode = SPI_MODE_0;
+ spi->bits_per_word = 8;
+ spi->max_speed_hz = spi->max_speed_hz ? : 24000000;
+ spi_setup(spi);
+
+ spi_chip = spi_get_ctldata(spi);
+ if (!spi_chip) {
+ pr_err("There was wrong in spi master\n");
+ return -ENODEV;
+ }
+ /* tcm does not support interrupt mode, we use poll mode instead. */
+ spi_chip->poll_mode = 1;
+
+ tcm_dbg("TCM-dbg: %s/%d\n", __func__, __LINE__);
+ /* register tcm hw */
+ chip = tcm_register_hardware(&spi->dev, &tcm_tis);
+ if (!chip) {
+ dev_err(chip->dev, "tcm register hardware err\n");
+ return -ENODEV;
+ }
+
+ dev_set_drvdata(chip->dev, phy);
+
+ /*
+ * The Phytium FT-2000A/4 SPI controller's clock level is unstable,
+ * so chip select is driven manually via a GPIO output instead.
+ */
+ if (is_ft_all() && (spi->chip_select == 0)) {
+ /* reuse conf reg base */
+ reuse_conf_reg = ioremap(REUSE_CONF_REG_BASE, 0x10);
+ if (!reuse_conf_reg) {
+ dev_err(&spi->dev, "Failed to ioremap reuse conf reg\n");
+ ret = -ENOMEM;
+ goto out_err;
+ }
+
+ /* gpio1 a5 base addr */
+ gpio1_a5 = ioremap(REUSE_GPIO1_A5_BASE, 0x10);
+ if (!gpio1_a5) {
+ dev_err(&spi->dev, "Failed to ioremap gpio1 a5\n");
+ ret = -ENOMEM;
+ goto out_err;
+ }
+
+ /* reuse cs0 to gpio1_a5 */
+ iowrite32((ioread32(reuse_conf_reg) | 0xFFFF0) & 0xFFF9004F,
+ reuse_conf_reg);
+ /* set gpio1 a5 to output */
+ iowrite32(0x20, gpio1_a5 + 0x4);
+ }
+
+ tcm_dbg("TCM-dbg: %s/%d\n",
+ __func__, __LINE__);
+ ret = tcm_tis_readl(chip->dev, TCM_DID_VID(0), &vendor);
+ if (ret < 0)
+ goto out_err;
+
+ tcm_dbg("TCM-dbg: %s/%d, vendor: 0x%x\n",
+ __func__, __LINE__, vendor);
+ if ((vendor & 0xffff) != 0x19f5 && (vendor & 0xffff) != 0x1B4E) {
+ dev_err(chip->dev, "there is no Nationz TCM on your computer\n");
+ ret = -ENODEV;
+ goto out_err;
+ }
+
+ ret = tcm_tis_readb(chip->dev, TCM_RID(0), &revid);
+ tcm_dbg("TCM-dbg: %s/%d, revid: 0x%x\n",
+ __func__, __LINE__, revid);
+ if (ret < 0)
+ goto out_err;
+ dev_info(chip->dev, "kylin: 2019-09-21 1.2 TCM "
+ "(device-id 0x%X, rev-id %d)\n",
+ vendor >> 16, revid);
+
+ /* Default timeouts */
+ chip->vendor.timeout_a = msecs_to_jiffies(TIS_SHORT_TIMEOUT);
+ chip->vendor.timeout_b = msecs_to_jiffies(TIS_LONG_TIMEOUT);
+ chip->vendor.timeout_c = msecs_to_jiffies(TIS_SHORT_TIMEOUT);
+ chip->vendor.timeout_d = msecs_to_jiffies(TIS_SHORT_TIMEOUT);
+
+ tcm_dbg("TCM-dbg: %s/%d\n",
+ __func__, __LINE__);
+ /* Figure out the capabilities */
+ ret = tcm_tis_readl(chip->dev,
+ TCM_INTF_CAPS(chip->vendor.locality), &intfcaps);
+ if (ret < 0)
+ goto out_err;
+
+ tcm_dbg("TCM-dbg: %s/%d, intfcaps: 0x%x\n",
+ __func__, __LINE__, intfcaps);
+ if (request_locality(chip, 0) != 0) {
+ dev_err(chip->dev, "tcm request_locality err\n");
+ ret = -ENODEV;
+ goto out_err;
+ }
+
+ INIT_LIST_HEAD(&chip->vendor.list);
+ spin_lock(&tis_lock);
+ list_add(&chip->vendor.list, &tis_chips);
+ spin_unlock(&tis_lock);
+
+ tcm_get_timeouts(chip);
+ tcm_startup(chip);
+
+ tcm_dbg("TCM-dbg: %s/%d, exit\n", __func__, __LINE__);
+ return 0;
+
+out_err:
+ if (is_ft_all()) {
+ if (reuse_conf_reg)
+ iounmap(reuse_conf_reg);
+ if (gpio1_a5)
+ iounmap(gpio1_a5);
+ }
+ tcm_dbg("TCM-dbg: %s/%d, error\n", __func__, __LINE__);
+ dev_set_drvdata(chip->dev, chip);
+ tcm_remove_hardware(chip->dev);
+
+ return ret;
+}
+
+static int tcm_tis_spi_remove(struct spi_device *dev)
+{
+ if (is_ft_all()) {
+ if (reuse_conf_reg)
+ iounmap(reuse_conf_reg);
+ if (gpio1_a5)
+ iounmap(gpio1_a5);
+ }
+
+ dev_info(&dev->dev, "%s\n", __func__);
+ dev_set_drvdata(chip->dev, chip);
+ tcm_remove_hardware(&dev->dev);
+
+ return 0;
+}
+
+static const struct acpi_device_id tcm_tis_spi_acpi_match[] = {
+ {"TCMS0001", 0},
+ {"SMO0768", 0},
+ {"ZIC0601", 0},
+ {}
+};
+MODULE_DEVICE_TABLE(acpi, tcm_tis_spi_acpi_match);
+
+static const struct spi_device_id tcm_tis_spi_id_table[] = {
+ {"SMO0768", 0},
+ {"ZIC0601", 0},
+ {}
+};
+MODULE_DEVICE_TABLE(spi, tcm_tis_spi_id_table);
+
+static struct spi_driver tcm_tis_spi_drv = {
+ .driver = {
+ .name = "tcm_tis_spi",
+ .acpi_match_table = ACPI_PTR(tcm_tis_spi_acpi_match),
+ },
+ .id_table = tcm_tis_spi_id_table,
+ .probe = tcm_tis_spi_probe,
+ .remove = tcm_tis_spi_remove,
+};
+
+#if 1
+module_spi_driver(tcm_tis_spi_drv);
+#else/*0*/
+
+static int __init __spi_driver_init(void)
+{
+ pr_info("TCM(ky): __init __func__(ver=%2d)\n",
+ 10);
+ return spi_register_driver(&tcm_tis_spi_drv);
+}
+
+static void __exit __spi_driver_exit(void)
+{
+ pr_info("TCM(ky): __exit __func__\n");
+ spi_unregister_driver(&tcm_tis_spi_drv);
+}
+
+module_init(__spi_driver_init);
+module_exit(__spi_driver_exit);
+#endif/*0*/
+
+MODULE_AUTHOR("xiongxin<xiongxin(a)tj.kylinos.cn>");
+MODULE_DESCRIPTION("TCM Driver Base Spi");
+MODULE_VERSION("6.1.0.2");
+MODULE_LICENSE("GPL");
--
2.23.0
23 Feb '21
From: Sang Yan <sangyan(a)huawei.com>
hulk inclusion
category: feature
bugzilla: 48159
CVE: N/A
Introduce a CPU PARK feature to save the time spent taking cpus
down and bringing them up during kexec, which may cost 250ms per
cpu for the down path and 30ms for the up path.
As a result, for 128 cores, it costs more than 30 seconds to take
cpus down and up during kexec; think about 256 cores and more.
CPU PARK is a state in which a cpu stays powered on, spinning in a
loop and polling for exit chances, such as its exit address being
written.
A block of memory is reserved and filled with the cpu park text
section, plus the exit address and park-magic-flag of each cpu. In
this implementation, one page is reserved per cpu core.
Cpus go to the park state instead of down in machine_shutdown(),
and come out of the park state in smp_init() instead of being
brought up.
Layout of one cpu park section in the pre-reserved memory block:
+--------------+
+ exit address +
+--------------+
+ park magic +
+--------------+
+ park codes +
+ . +
+ . +
+ . +
+--------------+
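In C terms, each per-cpu section corresponds to the layout of
struct cpu_park_section introduced below in smp.c (the comments
here are editorial):

struct cpu_park_section {
	unsigned long exit;	/* exit address, written by the booting cpu */
	unsigned long magic;	/* set to PARK_MAGIC ("park") once parked */
	char text[0];		/* copy of the do_cpu_park() code */
};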
Signed-off-by: Sang Yan <sangyan(a)huawei.com>
---
arch/arm64/Kconfig | 12 ++
arch/arm64/include/asm/kexec.h | 6 +
arch/arm64/include/asm/smp.h | 15 +++
arch/arm64/kernel/Makefile | 1 +
arch/arm64/kernel/cpu-park.S | 59 ++++++++++
arch/arm64/kernel/machine_kexec.c | 2 +-
arch/arm64/kernel/process.c | 4 +
arch/arm64/kernel/smp.c | 229 ++++++++++++++++++++++++++++++++++++++
arch/arm64/mm/init.c | 55 +++++++++
9 files changed, 382 insertions(+), 1 deletion(-)
create mode 100644 arch/arm64/kernel/cpu-park.S
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index b9c5654..0885668 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -345,6 +345,18 @@ config KASAN_SHADOW_OFFSET
default 0xeffffff900000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
default 0xffffffffffffffff
+config ARM64_CPU_PARK
+ bool "Support CPU PARK on kexec"
+ depends on SMP
+ depends on KEXEC_CORE
+ help
+ This enables support for the CPU PARK feature, which
+ saves the time of taking cpus down and bringing them
+ up across kexec. Instead of dying, cpus spin in a park
+ loop before jumping to the new kernel, and jump out of
+ the loop to the new kernel entry in smp_init().
+
source "arch/arm64/Kconfig.platforms"
menu "Kernel Features"
diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index 79909ae..a133889 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -36,6 +36,11 @@
#define CRASH_ADDR_HIGH_MAX MEMBLOCK_ALLOC_ACCESSIBLE
+#ifdef CONFIG_ARM64_CPU_PARK
+/* CPU park state flag: "park" */
+#define PARK_MAGIC 0x7061726b
+#endif
+
#ifndef __ASSEMBLY__
/**
@@ -104,6 +109,7 @@ static inline void crash_post_resume(void) {}
#ifdef CONFIG_KEXEC_CORE
extern void __init reserve_crashkernel(void);
#endif
+void machine_kexec_mask_interrupts(void);
#ifdef CONFIG_KEXEC_FILE
#define ARCH_HAS_KIMAGE_ARCH
diff --git a/arch/arm64/include/asm/smp.h b/arch/arm64/include/asm/smp.h
index 2e7f529..8c5d2d6 100644
--- a/arch/arm64/include/asm/smp.h
+++ b/arch/arm64/include/asm/smp.h
@@ -145,6 +145,21 @@ bool cpus_are_stuck_in_kernel(void);
extern void crash_smp_send_stop(void);
extern bool smp_crash_stop_failed(void);
+#ifdef CONFIG_ARM64_CPU_PARK
+#define PARK_SECTION_SIZE 1024
+struct cpu_park_info {
+ /* Physical address of reserved park memory. */
+ unsigned long start;
+ /* park reserve mem len should be PARK_SECTION_SIZE * NR_CPUS */
+ unsigned long len;
+ /* Virtual address of reserved park memory. */
+ unsigned long start_v;
+};
+extern struct cpu_park_info park_info;
+extern void enter_cpu_park(unsigned long text, unsigned long exit);
+extern void do_cpu_park(unsigned long exit);
+extern int kexec_smp_send_park(void);
+#endif
#endif /* ifndef __ASSEMBLY__ */
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 2621d5c..60478d2 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -54,6 +54,7 @@ obj-$(CONFIG_RANDOMIZE_BASE) += kaslr.o
obj-$(CONFIG_HIBERNATION) += hibernate.o hibernate-asm.o
obj-$(CONFIG_KEXEC_CORE) += machine_kexec.o relocate_kernel.o \
cpu-reset.o
+obj-$(CONFIG_ARM64_CPU_PARK) += cpu-park.o
obj-$(CONFIG_KEXEC_FILE) += machine_kexec_file.o kexec_image.o
obj-$(CONFIG_ARM64_RELOC_TEST) += arm64-reloc-test.o
arm64-reloc-test-y := reloc_test_core.o reloc_test_syms.o
diff --git a/arch/arm64/kernel/cpu-park.S b/arch/arm64/kernel/cpu-park.S
new file mode 100644
index 0000000..10c685c
--- /dev/null
+++ b/arch/arm64/kernel/cpu-park.S
@@ -0,0 +1,59 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * CPU park routines
+ *
+ * Copyright (C) 2020 Huawei Technologies., Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+#include <asm/kexec.h>
+#include <asm/sysreg.h>
+#include <asm/virt.h>
+
+.text
+.pushsection .idmap.text, "awx"
+
+/* cpu park helper in idmap section */
+SYM_CODE_START(enter_cpu_park)
+ /* Clear sctlr_el1 flags. */
+ mrs x12, sctlr_el1
+ mov_q x13, SCTLR_ELx_FLAGS
+ bic x12, x12, x13
+ pre_disable_mmu_workaround
+ msr sctlr_el1, x12 /* disable mmu */
+ isb
+
+ mov x18, x0
+ mov x0, x1 /* secondary_entry addr */
+ br x18 /* call do_cpu_park of each cpu */
+SYM_CODE_END(enter_cpu_park)
+
+.popsection
+
+SYM_CODE_START(do_cpu_park)
+ ldr x18, =PARK_MAGIC /* magic number "park" */
+ add x1, x0, #8
+ str x18, [x1] /* set on-park flag */
+ dc civac, x1 /* flush cache of "park" */
+ dsb nsh
+ isb
+
+.Lloop:
+ wfe
+ isb
+ ldr x19, [x0]
+ cmp x19, #0 /* test secondary_entry */
+ b.eq .Lloop
+
+ ic iallu /* invalidate the local I-cache */
+ dsb nsh
+ isb
+
+ br x19 /* jump to secondary_entry */
+SYM_CODE_END(do_cpu_park)
+
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index a0b144c..f47ce96 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -213,7 +213,7 @@ void machine_kexec(struct kimage *kimage)
BUG(); /* Should never get here. */
}
-static void machine_kexec_mask_interrupts(void)
+void machine_kexec_mask_interrupts(void)
{
unsigned int i;
struct irq_desc *desc;
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 73e3b32..10cffee 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -146,6 +146,10 @@ void arch_cpu_idle_dead(void)
*/
void machine_shutdown(void)
{
+#ifdef CONFIG_ARM64_CPU_PARK
+ if (kexec_smp_send_park() == 0)
+ return;
+#endif
smp_shutdown_nonboot_cpus(reboot_cpu);
}
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 18e9727..bc475d5 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -32,6 +32,8 @@
#include <linux/irq_work.h>
#include <linux/kernel_stat.h>
#include <linux/kexec.h>
+#include <linux/console.h>
+
#include <linux/kvm_host.h>
#include <asm/alternative.h>
@@ -93,6 +95,167 @@ static inline int op_cpu_kill(unsigned int cpu)
}
#endif
+#ifdef CONFIG_ARM64_CPU_PARK
+struct cpu_park_section {
+ unsigned long exit; /* exit address of park loop */
+ unsigned long magic; /* magic value representing park state */
+ char text[0]; /* text section of park */
+};
+
+static int mmap_cpu_park_mem(void)
+{
+ if (!park_info.start)
+ return -ENOMEM;
+
+ if (park_info.start_v)
+ return 0;
+
+ park_info.start_v = (unsigned long)__ioremap(park_info.start,
+ park_info.len,
+ PAGE_KERNEL_EXEC);
+ if (!park_info.start_v) {
+ pr_warn("map park memory failed.");
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static inline unsigned long cpu_park_section_v(unsigned int cpu)
+{
+ return park_info.start_v + PARK_SECTION_SIZE * (cpu - 1);
+}
+
+static inline unsigned long cpu_park_section_p(unsigned int cpu)
+{
+ return park_info.start + PARK_SECTION_SIZE * (cpu - 1);
+}
+
+/*
+ * Write secondary_entry to the exit field of the park section.
+ * The parked secondary cpu will then jump straight into the
+ * kernel via secondary_entry.
+ */
+static int write_park_exit(unsigned int cpu)
+{
+ struct cpu_park_section *park_section;
+ unsigned long *park_exit;
+ unsigned long *park_text;
+
+ if (mmap_cpu_park_mem() != 0)
+ return -EPERM;
+
+ park_section = (struct cpu_park_section *)cpu_park_section_v(cpu);
+ park_exit = &park_section->exit;
+ park_text = (unsigned long *)park_section->text;
+ pr_debug("park_text 0x%lx : 0x%lx, do_cpu_park text 0x%lx : 0x%lx",
+ (unsigned long)park_text, *park_text,
+ (unsigned long)do_cpu_park,
+ *(unsigned long *)do_cpu_park);
+
+ /*
+ * Test the first 8 bytes to determine whether we need
+ * to write the cpu park exit address.
+ */
+ if (*park_text == *(unsigned long *)do_cpu_park) {
+ writeq_relaxed(__pa_symbol(secondary_entry), park_exit);
+ __flush_dcache_area((__force void *)park_exit,
+ sizeof(unsigned long));
+ flush_icache_range((unsigned long)park_exit,
+ (unsigned long)(park_exit + 1));
+ sev();
+ dsb(sy);
+ isb();
+
+ pr_debug("Write cpu %u secondary entry 0x%lx to 0x%lx.",
+ cpu, *park_exit, (unsigned long)park_exit);
+ pr_info("Boot cpu %u from PARK state.", cpu);
+ return 0;
+ }
+
+ return -EPERM;
+}
+
+/* Install cpu park sections for the specific cpu. */
+static int install_cpu_park(unsigned int cpu)
+{
+ struct cpu_park_section *park_section;
+ unsigned long *park_exit;
+ unsigned long *park_magic;
+ unsigned long park_text_len;
+
+ park_section = (struct cpu_park_section *)cpu_park_section_v(cpu);
+ pr_debug("Install cpu park on cpu %u park exit 0x%lx park text 0x%lx",
+ cpu, (unsigned long)park_section,
+ (unsigned long)(park_section->text));
+
+ park_exit = &park_section->exit;
+ park_magic = &park_section->magic;
+ park_text_len = PARK_SECTION_SIZE - sizeof(struct cpu_park_section);
+
+ *park_exit = 0UL;
+ *park_magic = 0UL;
+ memcpy((void *)park_section->text, do_cpu_park, park_text_len);
+ __flush_dcache_area((void *)park_section, PARK_SECTION_SIZE);
+
+ return 0;
+}
+
+static int uninstall_cpu_park(unsigned int cpu)
+{
+ unsigned long park_section;
+
+ if (mmap_cpu_park_mem() != 0)
+ return -EPERM;
+
+ park_section = cpu_park_section_v(cpu);
+ memset((void *)park_section, 0, PARK_SECTION_SIZE);
+ __flush_dcache_area((void *)park_section, PARK_SECTION_SIZE);
+
+ return 0;
+}
+
+static int cpu_wait_park(unsigned int cpu)
+{
+ long timeout;
+ struct cpu_park_section *park_section;
+
+ volatile unsigned long *park_magic;
+
+ park_section = (struct cpu_park_section *)cpu_park_section_v(cpu);
+ park_magic = &park_section->magic;
+
+ timeout = USEC_PER_SEC;
+ while (*park_magic != PARK_MAGIC && timeout--)
+ udelay(1);
+
+ if (timeout > 0)
+ pr_debug("cpu %u park done.", cpu);
+ else
+ pr_err("cpu %u park failed.", cpu);
+
+ return *park_magic == PARK_MAGIC;
+}
+
+static void cpu_park(unsigned int cpu)
+{
+ unsigned long park_section_p;
+ unsigned long park_exit_phy;
+ unsigned long do_park;
+ typeof(enter_cpu_park) *park;
+
+ park_section_p = cpu_park_section_p(cpu);
+ park_exit_phy = park_section_p;
+ pr_debug("Go to park cpu %u exit address 0x%lx", cpu, park_exit_phy);
+
+ do_park = park_section_p + sizeof(struct cpu_park_section);
+ park = (void *)__pa_symbol(enter_cpu_park);
+
+ cpu_install_idmap();
+ park(do_park, park_exit_phy);
+ unreachable();
+}
+#endif
/*
* Boot a secondary CPU, and assign it the specified idle task.
@@ -102,6 +265,10 @@ static int boot_secondary(unsigned int cpu, struct task_struct *idle)
{
const struct cpu_operations *ops = get_cpu_ops(cpu);
+#ifdef CONFIG_ARM64_CPU_PARK
+ if (write_park_exit(cpu) == 0)
+ return 0;
+#endif
if (ops->cpu_boot)
return ops->cpu_boot(cpu);
@@ -131,6 +298,9 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
return ret;
}
+#ifdef CONFIG_ARM64_CPU_PARK
+ uninstall_cpu_park(cpu);
+#endif
/*
* CPU was successfully started, wait for it to come online or
* time out.
@@ -844,10 +1014,30 @@ void arch_irq_work_raise(void)
static void local_cpu_stop(void)
{
+ int cpu;
+ const struct cpu_operations *ops = NULL;
+
set_cpu_online(smp_processor_id(), false);
local_daif_mask();
sdei_mask_local_cpu();
+
+#ifdef CONFIG_ARM64_CPU_PARK
+ /*
+ * Go to the cpu park state if possible;
+ * otherwise fall back to cpu die.
+ */
+ cpu = smp_processor_id();
+ if (kexec_in_progress && park_info.start_v) {
+ machine_kexec_mask_interrupts();
+ cpu_park(cpu);
+
+ ops = get_cpu_ops(cpu);
+ if (ops && ops->cpu_die)
+ ops->cpu_die(cpu);
+ }
+#endif
+
cpu_park_loop();
}
@@ -1053,6 +1243,45 @@ void smp_send_stop(void)
sdei_mask_local_cpu();
}
+#ifdef CONFIG_ARM64_CPU_PARK
+int kexec_smp_send_park(void)
+{
+ unsigned long cpu;
+
+ if (WARN_ON(!kexec_in_progress)) {
+ pr_crit("%s called while kexec is not in progress.", __func__);
+ return -EPERM;
+ }
+
+ if (mmap_cpu_park_mem() != 0) {
+ pr_info("no cpuparkmem, goto normal way.");
+ return -EPERM;
+ }
+
+ local_irq_disable();
+
+ if (num_online_cpus() > 1) {
+ cpumask_t mask;
+
+ cpumask_copy(&mask, cpu_online_mask);
+ cpumask_clear_cpu(smp_processor_id(), &mask);
+
+ for_each_cpu(cpu, &mask)
+ install_cpu_park(cpu);
+ smp_cross_call(&mask, IPI_CPU_STOP);
+
+ /* Wait for other CPUs to park */
+ for_each_cpu(cpu, &mask)
+ cpu_wait_park(cpu);
+ pr_info("smp park other cpus done\n");
+ }
+
+ sdei_mask_local_cpu();
+
+ return 0;
+}
+#endif
+
#ifdef CONFIG_KEXEC_CORE
void crash_smp_send_stop(void)
{
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 794f992..d01259c 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -236,6 +236,57 @@ static void __init fdt_enforce_memory_region(void)
memblock_add(usable_rgns[1].base, usable_rgns[1].size);
}
+#ifdef CONFIG_ARM64_CPU_PARK
+struct cpu_park_info park_info = {
+ .start = 0,
+ .len = PARK_SECTION_SIZE * NR_CPUS,
+ .start_v = 0,
+};
+
+static int __init parse_park_mem(char *p)
+{
+ if (!p)
+ return 0;
+
+ park_info.start = PAGE_ALIGN(memparse(p, NULL));
+ if (park_info.start == 0)
+ pr_info("cpu park mem params[%s]", p);
+
+ return 0;
+}
+early_param("cpuparkmem", parse_park_mem);
+
+static int __init reserve_park_mem(void)
+{
+ if (park_info.start == 0 || park_info.len == 0)
+ return 0;
+
+ park_info.start = PAGE_ALIGN(park_info.start);
+ park_info.len = PAGE_ALIGN(park_info.len);
+
+ if (!memblock_is_region_memory(park_info.start, park_info.len)) {
+ pr_warn("cannot reserve park mem: region is not memory!");
+ goto out;
+ }
+
+ if (memblock_is_region_reserved(park_info.start, park_info.len)) {
+ pr_warn("cannot reserve park mem: region overlaps reserved memory!");
+ goto out;
+ }
+
+ memblock_remove(park_info.start, park_info.len);
+ pr_info("cpu park mem reserved: 0x%016lx - 0x%016lx (%ld MB)",
+ park_info.start, park_info.start + park_info.len,
+ park_info.len >> 20);
+
+ return 0;
+out:
+ park_info.start = 0;
+ park_info.len = 0;
+ return -EINVAL;
+}
+#endif
+
void __init arm64_memblock_init(void)
{
const s64 linear_region_size = BIT(vabits_actual - 1);
@@ -357,6 +408,10 @@ void __init arm64_memblock_init(void)
reserve_crashkernel();
+#ifdef CONFIG_ARM64_CPU_PARK
+ reserve_park_mem();
+#endif
+
reserve_elfcorehdr();
high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
--
2.9.5

From: Sang Yan <sangyan(a)huawei.com>
hulk inclusion
category: feature
bugzilla: 48159
CVE: N/A
In normal kexec, relocating the kernel may take 5 ~ 10 seconds:
all segments are copied from vmalloc'd memory to kernel boot
memory with the MMU disabled.
We introduce quick kexec to save this copying time, in the same
way as kdump (kexec on crash), by using a reserved memory
region "Quick Kexec".
The quick kimage is constructed the same way as the crash
kernel's; all segments of the kimage are then simply copied
into the reserved memory.
We also add this support to the kexec_load syscall via the
KEXEC_QUICK flag.
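For illustration only (not part of this patch), userspace could
request a quick load roughly as follows; load_quick_kernel() is
a hypothetical helper and the segment setup is elided:

#include <linux/kexec.h>	/* struct kexec_segment, KEXEC_* flags */
#include <sys/syscall.h>
#include <unistd.h>

static long load_quick_kernel(unsigned long entry,
			      struct kexec_segment *segs,
			      unsigned long nr_segs)
{
	/* KEXEC_QUICK (0x00000004) is introduced by this patch */
	return syscall(SYS_kexec_load, entry, nr_segs, segs,
		       KEXEC_QUICK | KEXEC_ARCH_DEFAULT);
}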
Signed-off-by: Sang Yan <sangyan(a)huawei.com>
---
arch/Kconfig | 10 ++++++++++
include/linux/ioport.h | 1 +
include/linux/kexec.h | 11 ++++++++++-
include/uapi/linux/kexec.h | 1 +
kernel/kexec.c | 10 ++++++++++
kernel/kexec_core.c | 42 +++++++++++++++++++++++++++++++++---------
6 files changed, 65 insertions(+), 10 deletions(-)
diff --git a/arch/Kconfig b/arch/Kconfig
index 2592b4b..7811eee 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -18,6 +18,16 @@ config KEXEC_CORE
select CRASH_CORE
bool
+config QUICK_KEXEC
+ bool "Support for quick kexec"
+ depends on KEXEC_CORE
+ help
+ Like crash kexec, this uses pre-reserved memory to
+ accelerate kexec: the new kernel and initrd are loaded
+ into the reserved memory and the new kernel boots from
+ there, saving the time of relocating the kernel.
+
config KEXEC_ELF
bool
diff --git a/include/linux/ioport.h b/include/linux/ioport.h
index 5135d4b..84a716f 100644
--- a/include/linux/ioport.h
+++ b/include/linux/ioport.h
@@ -139,6 +139,7 @@ enum {
IORES_DESC_DEVICE_PRIVATE_MEMORY = 6,
IORES_DESC_RESERVED = 7,
IORES_DESC_SOFT_RESERVED = 8,
+ IORES_DESC_QUICK_KEXEC = 9,
};
/*
diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index f301f2f..7fff410 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -269,9 +269,10 @@ struct kimage {
unsigned long control_page;
/* Flags to indicate special processing */
- unsigned int type : 1;
+ unsigned int type : 2;
#define KEXEC_TYPE_DEFAULT 0
#define KEXEC_TYPE_CRASH 1
+#define KEXEC_TYPE_QUICK 2
unsigned int preserve_context : 1;
/* If set, we are using file mode kexec syscall */
unsigned int file_mode:1;
@@ -331,6 +332,11 @@ extern int kexec_load_disabled;
#define KEXEC_FLAGS (KEXEC_ON_CRASH | KEXEC_PRESERVE_CONTEXT)
#endif
+#ifdef CONFIG_QUICK_KEXEC
+#undef KEXEC_FLAGS
+#define KEXEC_FLAGS (KEXEC_ON_CRASH | KEXEC_QUICK)
+#endif
+
/* List of defined/legal kexec file flags */
#define KEXEC_FILE_FLAGS (KEXEC_FILE_UNLOAD | KEXEC_FILE_ON_CRASH | \
KEXEC_FILE_NO_INITRAMFS)
@@ -338,6 +344,9 @@ extern int kexec_load_disabled;
/* Location of a reserved region to hold the crash kernel.
*/
extern note_buf_t __percpu *crash_notes;
+#ifdef CONFIG_QUICK_KEXEC
+extern struct resource quick_kexec_res;
+#endif
/* flag to track if kexec reboot is in progress */
extern bool kexec_in_progress;
diff --git a/include/uapi/linux/kexec.h b/include/uapi/linux/kexec.h
index 05669c8..d891d80 100644
--- a/include/uapi/linux/kexec.h
+++ b/include/uapi/linux/kexec.h
@@ -12,6 +12,7 @@
/* kexec flags for different usage scenarios */
#define KEXEC_ON_CRASH 0x00000001
#define KEXEC_PRESERVE_CONTEXT 0x00000002
+#define KEXEC_QUICK 0x00000004
#define KEXEC_ARCH_MASK 0xffff0000
/*
diff --git a/kernel/kexec.c b/kernel/kexec.c
index c82c6c0..4acc909 100644
--- a/kernel/kexec.c
+++ b/kernel/kexec.c
@@ -44,6 +44,9 @@ static int kimage_alloc_init(struct kimage **rimage, unsigned long entry,
int ret;
struct kimage *image;
bool kexec_on_panic = flags & KEXEC_ON_CRASH;
+#ifdef CONFIG_QUICK_KEXEC
+ bool kexec_on_quick = flags & KEXEC_QUICK;
+#endif
if (kexec_on_panic) {
/* Verify we have a valid entry point */
@@ -69,6 +72,13 @@ static int kimage_alloc_init(struct kimage **rimage, unsigned long entry,
image->type = KEXEC_TYPE_CRASH;
}
+#ifdef CONFIG_QUICK_KEXEC
+ if (kexec_on_quick) {
+ image->control_page = quick_kexec_res.start;
+ image->type = KEXEC_TYPE_QUICK;
+ }
+#endif
+
ret = sanity_check_segment_list(image);
if (ret)
goto out_free_image;
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index 2ca8875..c7e2aa2 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -53,6 +53,17 @@ note_buf_t __percpu *crash_notes;
/* Flag to indicate we are going to kexec a new kernel */
bool kexec_in_progress = false;
+/* Resource for quick kexec */
+#ifdef CONFIG_QUICK_KEXEC
+struct resource quick_kexec_res = {
+ .name = "Quick kexec",
+ .start = 0,
+ .end = 0,
+ .flags = IORESOURCE_BUSY | IORESOURCE_SYSTEM_RAM,
+ .desc = IORES_DESC_QUICK_KEXEC
+};
+#endif
+
int kexec_should_crash(struct task_struct *p)
{
/*
@@ -396,8 +407,9 @@ static struct page *kimage_alloc_normal_control_pages(struct kimage *image,
return pages;
}
-static struct page *kimage_alloc_crash_control_pages(struct kimage *image,
- unsigned int order)
+static struct page *kimage_alloc_special_control_pages(struct kimage *image,
+ unsigned int order,
+ unsigned long end)
{
/* Control pages are special, they are the intermediaries
* that are needed while we copy the rest of the pages
@@ -427,7 +439,7 @@ static struct page *kimage_alloc_crash_control_pages(struct kimage *image,
size = (1 << order) << PAGE_SHIFT;
hole_start = (image->control_page + (size - 1)) & ~(size - 1);
hole_end = hole_start + size - 1;
- while (hole_end <= crashk_res.end) {
+ while (hole_end <= end) {
unsigned long i;
cond_resched();
@@ -462,7 +474,6 @@ static struct page *kimage_alloc_crash_control_pages(struct kimage *image,
return pages;
}
-
struct page *kimage_alloc_control_pages(struct kimage *image,
unsigned int order)
{
@@ -473,8 +484,15 @@ struct page *kimage_alloc_control_pages(struct kimage *image,
pages = kimage_alloc_normal_control_pages(image, order);
break;
case KEXEC_TYPE_CRASH:
- pages = kimage_alloc_crash_control_pages(image, order);
+ pages = kimage_alloc_special_control_pages(image, order,
+ crashk_res.end);
+ break;
+#ifdef CONFIG_QUICK_KEXEC
+ case KEXEC_TYPE_QUICK:
+ pages = kimage_alloc_special_control_pages(image, order,
+ quick_kexec_res.end);
break;
+#endif
}
return pages;
@@ -830,11 +848,12 @@ static int kimage_load_normal_segment(struct kimage *image,
return result;
}
-static int kimage_load_crash_segment(struct kimage *image,
+static int kimage_load_special_segment(struct kimage *image,
struct kexec_segment *segment)
{
- /* For crash dumps kernels we simply copy the data from
- * user space to it's destination.
+ /*
+ * For crash dump kernels and quick kexec kernels
+ * we simply copy the data from user space to its destination.
* We do things a page at a time for the sake of kmap.
*/
unsigned long maddr;
@@ -908,8 +927,13 @@ int kimage_load_segment(struct kimage *image,
result = kimage_load_normal_segment(image, segment);
break;
case KEXEC_TYPE_CRASH:
- result = kimage_load_crash_segment(image, segment);
+ result = kimage_load_special_segment(image, segment);
break;
+#ifdef CONFIG_QUICK_KEXEC
+ case KEXEC_TYPE_QUICK:
+ result = kimage_load_special_segment(image, segment);
+ break;
+#endif
}
return result;
--
2.9.5
bugfix for 20.03 @ 2021/02/22
Aichun Li (7):
netpoll: remove dev argument from netpoll_send_skb_on_dev()
netpoll: move netpoll_send_skb() out of line
netpoll: netpoll_send_skb() returns transmit status
netpoll: accept NULL np argument in netpoll_send_skb()
bonding: add an option to specify a delay between peer notifications
bonding: fix value exported by Netlink for peer_notif_delay
bonding: add documentation for peer_notif_delay
Akilesh Kailash (1):
dm snapshot: flush merged data before committing metadata
Al Viro (2):
don't dump the threads that had been already exiting when zapped.
dump_common_audit_data(): fix racy accesses to ->d_name
Aleksandr Nogikh (1):
netem: fix zero division in tabledist
Alexander Duyck (1):
tcp: Set INET_ECN_xmit configuration in tcp_reinit_congestion_control
Alexander Lobakin (1):
skbuff: back tiny skbs with kmalloc() in __netdev_alloc_skb() too
Alexey Dobriyan (2):
proc: change ->nlink under proc_subdir_lock
proc: fix lookup in /proc/net subdirectories after setns(2)
Alexey Kardashevskiy (1):
serial_core: Check for port state when tty is in error state
Amit Cohen (1):
mlxsw: core: Fix use-after-free in mlxsw_emad_trans_finish()
Andy Shevchenko (2):
device property: Keep secondary firmware node secondary by type
device property: Don't clear secondary pointer for shared primary
firmware node
Antoine Tenart (6):
netfilter: bridge: reset skb->pkt_type after NF_INET_POST_ROUTING
traversal
net: ip6_gre: set dev->hard_header_len when using header_ops
net-sysfs: take the rtnl lock when storing xps_cpus
net-sysfs: take the rtnl lock when accessing xps_cpus_map and num_tc
net-sysfs: take the rtnl lock when storing xps_rxqs
net-sysfs: take the rtnl lock when accessing xps_rxqs_map and num_tc
Ard Biesheuvel (3):
efivarfs: revert "fix memory leak in efivarfs_create()"
arm64, mm, efi: Account for GICv3 LPI tables in static memblock
reserve table
efi/arm: Revert "Defer persistent reservations until after
paging_init()"
Arjun Roy (1):
tcp: Prevent low rmem stalls with SO_RCVLOWAT.
Arnaldo Carvalho de Melo (1):
perf scripting python: Avoid declaring function pointers with a
visibility attribute
Axel Lin (1):
ASoC: msm8916-wcd-digital: Select REGMAP_MMIO to fix build error
Aya Levin (1):
net: ipv6: Validate GSO SKB before finish IPv6 processing
Bart Van Assche (1):
scsi: scsi_transport_spi: Set RQF_PM for domain validation commands
Bharat Gooty (1):
PCI: iproc: Fix out-of-bound array accesses
Bixuan Cui (2):
mmap: fix a compiling error for 'MAP_CHECKNODE'
powerpc: fix a compiling error for 'access_ok'
Bjorn Helgaas (1):
PCI: Bounds-check command-line resource alignment requests
Björn Töpel (1):
ixgbe: avoid premature Rx buffer reuse
Boqun Feng (1):
fcntl: Fix potential deadlock in send_sig{io, urg}()
Boris Protopopov (1):
Convert trailing spaces and periods in path components
Brian Foster (1):
xfs: flush new eof page on truncate to avoid post-eof corruption
Calum Mackay (1):
lockd: don't use interval-based rebinding over TCP
Chen Zhou (1):
selinux: Fix error return code in sel_ib_pkey_sid_slow()
Chenguangli (1):
scsi/hifc:Fix the bug that the system may be oops during unintall hifc
module.
Cheng Lin (1):
nfs_common: need lock during iterate through the list
Christoph Hellwig (2):
nbd: fix a block_device refcount leak in nbd_release
xfs: fix a missing unlock on error in xfs_fs_map_blocks
Chunguang Xu (1):
ext4: fix a memory leak of ext4_free_data
Chunyan Zhang (1):
tick/common: Touch watchdog in tick_unfreeze() on all CPUs
Colin Ian King (1):
PCI: Fix overflow in command-line resource alignment requests
Cong Wang (1):
erspan: fix version 1 check in gre_parse_header()
Damien Le Moal (1):
null_blk: Fix zone size initialization
Dan Carpenter (1):
futex: Don't enable IRQs unconditionally in put_pi_state()
Daniel Scally (1):
Revert "ACPI / resources: Use AE_CTRL_TERMINATE to terminate resources
walks"
Darrick J. Wong (12):
xfs: fix realtime bitmap/summary file truncation when growing rt
volume
xfs: don't free rt blocks when we're doing a REMAP bunmapi call
xfs: set xefi_discard when creating a deferred agfl free log intent
item
xfs: fix scrub flagging rtinherit even if there is no rt device
xfs: fix flags argument to rmap lookup when converting shared file
rmaps
xfs: set the unwritten bit in rmap lookup flags in
xchk_bmap_get_rmapextents
xfs: fix rmap key and record comparison functions
xfs: fix brainos in the refcount scrubber's rmap fragment processor
vfs: remove lockdep bogosity in __sb_start_write
xfs: fix the minrecs logic when dealing with inode root child blocks
xfs: strengthen rmap record flags checking
xfs: revert "xfs: fix rmap key and record comparison functions"
Dave Wysochanski (1):
NFS4: Fix use-after-free in trace_event_raw_event_nfs4_set_lock
Dexuan Cui (1):
ACPI: scan: Harden acpi_device_add() against device ID overflows
Dinghao Liu (4):
ext4: fix error handling code in add_new_gdb
net/mlx5e: Fix memleak in mlx5e_create_l2_table_groups
net/mlx5e: Fix two double free cases
netfilter: nf_nat: Fix memleak in nf_nat_init
Dongdong Wang (1):
lwt: Disable BH too in run_lwt_bpf()
Dongli Zhang (1):
page_frag: Recover from memory pressure
Douglas Gilbert (1):
sgl_alloc_order: fix memory leak
Eddy Wu (1):
fork: fix copy_process(CLONE_PARENT) race with the exiting
->real_parent
Eran Ben Elisha (1):
net/mlx5: Fix wrong address reclaim when command interface is down
Eric Auger (1):
vfio/pci: Move dummy_resources_list init in vfio_pci_probe()
Eric Biggers (1):
ext4: fix leaking sysfs kobject after failed mount
Eric Dumazet (4):
tcp: select sane initial rcvq_space.space for big MSS
net: avoid 32 x truesize under-estimation for tiny skbs
net_sched: avoid shift-out-of-bounds in tcindex_set_parms()
net_sched: reject silly cell_log in qdisc_get_rtab()
Fang Lijun (3):
arm64/ascend: mm: Add MAP_CHECKNODE flag to check node hugetlb
arm64/ascend: mm: Fix arm32 compile warnings
arm64/ascend: mm: Fix hugetlb check node error
Fangrui Song (1):
arm64: Change .weak to SYM_FUNC_START_WEAK_PI for
arch/arm64/lib/mem*.S
Florian Fainelli (1):
net: Have netpoll bring-up DSA management interface
Florian Westphal (4):
netfilter: nf_tables: avoid false-postive lockdep splat
netfilter: xt_RATEEST: reject non-null terminated string from
userspace
net: ip: always refragment ip defragmented packets
net: fix pmtu check in nopmtudisc mode
Gabriel Krisman Bertazi (2):
blk-cgroup: Fix memleak on error path
blk-cgroup: Pre-allocate tree node on blkg_conf_prep
George Spelvin (1):
random32: make prandom_u32() output unpredictable
Gerald Schaefer (1):
mm/userfaultfd: do not access vma->vm_mm after calling
handle_userfault()
Guillaume Nault (4):
ipv4: Fix tos mask in inet_rtm_getroute()
ipv4: Ignore ECN bits for fib lookups in fib_compute_spec_dst()
netfilter: rpfilter: mask ecn bits before fib lookup
udp: mask TOS bits in udp_v4_early_demux()
Hanjun Guo (1):
clocksource/drivers/arch_timer: Fix vdso_fix compile error for arm32
Hannes Reinecke (1):
dm: avoid filesystem lookup in dm_get_dev_t()
Hans de Goede (1):
ACPI: scan: Make acpi_bus_get_device() clear return pointer on error
Heiner Kallweit (1):
net: bridge: add missing counters to ndo_get_stats64 callback
Hoang Le (1):
tipc: fix NULL deref in tipc_link_xmit()
Huang Shijie (1):
lib/genalloc: fix the overflow when size is too big
Huang Ying (1):
mm: fix a race during THP splitting
Hugh Dickins (2):
mlock: fix unevictable_pgs event counts on THP
mm: fix check_move_unevictable_pages() on THP
Hui Wang (1):
ACPI: PNP: compare the string length in the matching_id()
Hyeongseok Kim (1):
dm verity: skip verity work if I/O error when system is shutting down
Ido Schimmel (2):
mlxsw: core: Fix memory leak on module removal
mlxsw: core: Use variable timeout for EMAD retries
Ilya Dryomov (1):
libceph: clear con->out_msg on Policy::stateful_server faults
Jakub Kicinski (1):
net: vlan: avoid leaks on register_vlan_dev() failures
Jamie Iles (1):
bonding: wait for sysfs kobject destruction before freeing struct
slave
Jan Kara (8):
ext4: Detect already used quota file early
ext4: fix bogus warning in ext4_update_dx_flag()
ext4: Protect superblock modifications with a buffer lock
ext4: fix deadlock with fs freezing and EA inodes
ext4: don't remount read-only with errors=continue on reboot
quota: Don't overflow quota file offsets
bfq: Fix computation of shallow depth
ext4: fix superblock checksum failure when setting password salt
Jann Horn (1):
mm, slub: consider rest of partial list if acquire_slab() fails
Jason A. Donenfeld (3):
netfilter: use actual socket sk rather than skb sk when routing harder
net: introduce skb_list_walk_safe for skb segment walking
net: skbuff: disambiguate argument and member for skb_list_walk_safe
helper
Jeff Dike (1):
virtio_net: Fix recursive call to cpus_read_lock()
Jens Axboe (1):
proc: don't allow async path resolution of /proc/self components
Jesper Dangaard Brouer (1):
netfilter: conntrack: fix reading nf_conntrack_buckets
Jessica Yu (1):
module: delay kobject uevent until after module init call
Jiri Olsa (2):
perf python scripting: Fix printable strings in python3 scripts
perf tools: Add missing swap for ino_generation
Johannes Thumshirn (1):
block: factor out requeue handling from dispatch code
Jonathan Cameron (1):
ACPI: Add out of bounds and numa_off protections to pxm_to_node()
Joseph Qi (1):
ext4: unlock xattr_sem properly in ext4_inline_data_truncate()
Jubin Zhong (1):
PCI: Fix pci_slot_release() NULL pointer dereference
Kaixu Xia (1):
ext4: correctly report "not supported" for {usr, grp}jquota when
!CONFIG_QUOTA
Keqian Zhu (1):
clocksource/drivers/arm_arch_timer: Correct fault programming of
CNTKCTL_EL1.EVNTI
Kirill Tkhai (1):
mm: move nr_deactivate accounting to shrink_active_list()
Lang Dai (1):
uio: free uio id after uio file node is freed
Lecopzer Chen (2):
kasan: fix unaligned address is unhandled in kasan_remove_zero_shadow
kasan: fix incorrect arguments passing in kasan_add_zero_shadow
Lee Duncan (1):
scsi: libiscsi: Fix NOP race condition
Lee Jones (1):
Fonts: Replace discarded const qualifier
Leo Yan (1):
perf lock: Don't free "lock_seq_stat" if read_count isn't zero
Leon Romanovsky (1):
net/mlx5: Properly convey driver version to firmware
Lijie (1):
config: enable CONFIG_NVME_MULTIPATH by default
Liu Shixin (3):
config: set default value of CONFIG_TEST_FREE_PAGES
mm: memcontrol: add struct mem_cgroup_extension
mm: fix kabi broken
Lorenzo Pieralisi (1):
asm-generic/io.h: Fix !CONFIG_GENERIC_IOMAP pci_iounmap()
implementation
Lu Jialin (1):
fs: fix files.usage bug when move tasks
Luc Van Oostenryck (1):
xsk: Fix xsk_poll()'s return type
Luo Meng (2):
ext4: fix invalid inode checksum
fail_function: Remove a redundant mutex unlock
Mao Wenan (1):
net: Update window_clamp if SOCK_RCVBUF is set
Marc Zyngier (2):
arm64: Run ARCH_WORKAROUND_1 enabling code on all CPUs
genirq/irqdomain: Don't try to free an interrupt that has no mapping
Mark Rutland (3):
arm64: syscall: exit userspace before unmasking exceptions
arm64: module: rework special section handling
arm64: module/ftrace: intialize PLT at load time
Martin Wilck (1):
scsi: core: Fix VPD LUN ID designator priorities
Mateusz Nosek (1):
futex: Fix incorrect should_fail_futex() handling
Matteo Croce (4):
Revert "kernel/reboot.c: convert simple_strtoul to kstrtoint"
reboot: fix overflow parsing reboot cpu number
ipv6: create multicast route with RTPROT_KERNEL
ipv6: set multicast flag on the multicast route
Matthew Wilcox (Oracle) (1):
mm/page_alloc.c: fix freeing non-compound pages
Maurizio Lombardi (2):
scsi: target: remove boilerplate code
scsi: target: fix hang when multiple threads try to destroy the same
iscsi session
Maxim Mikityanskiy (1):
net/tls: Protect from calling tls_dev_del for TLS RX twice
Miaohe Lin (1):
mm/hugetlb: fix potential missing huge page size info
Michael Schaller (1):
efivarfs: Replace invalid slashes with exclamation marks in dentries.
Mike Christie (1):
scsi: target: iscsi: Fix cmd abort fabric stop race
Mike Galbraith (1):
futex: Handle transient "ownerless" rtmutex state correctly
Miklos Szeredi (1):
fuse: fix page dereference after free
Mikulas Patocka (3):
dm integrity: fix the maximum number of arguments
dm integrity: fix flush with external metadata device
dm integrity: fix a crash if "recalculate" used without
"internal_hash"
Ming Lei (2):
scsi: core: Don't start concurrent async scan on same host
block: fix use-after-free in disk_part_iter_next
Minwoo Im (1):
nvme: free sq/cq dbbuf pointers when dbbuf set fails
Miroslav Benes (1):
module: set MODULE_STATE_GOING state when a module fails to load
Moshe Shemesh (2):
net/mlx4_en: Avoid scheduling restart task if it is already running
net/mlx4_en: Handle TX error CQE
Naoya Horiguchi (1):
mm, hwpoison: double-check page count in __get_any_page()
Naveen N. Rao (1):
ftrace: Fix updating FTRACE_FL_TRAMP
Neal Cardwell (1):
tcp: fix cwnd-limited bug for TSO deferral where we send nothing
NeilBrown (1):
NFS: switch nfsiod to be an UNBOUND workqueue.
Nicholas Piggin (1):
mm: fix exec activate_mm vs TLB shootdown and lazy tlb switching race
Oleg Nesterov (1):
ptrace: fix task_join_group_stop() for the case when current is traced
Oliver Herms (1):
IPv6: Set SIT tunnel hard_header_len to zero
Paul Moore (1):
selinux: fix inode_doinit_with_dentry() LABEL_INVALID error handling
Paulo Alcantara (1):
cifs: fix potential use-after-free in cifs_echo_request()
Peng Liu (1):
sched/deadline: Fix sched_dl_global_validate()
Peter Zijlstra (2):
serial: pl011: Fix lockdep splat when handling magic-sysrq interrupt
perf: Fix get_recursion_context()
Petr Malat (1):
sctp: Fix COMM_LOST/CANT_STR_ASSOC err reporting on big-endian
platforms
Qian Cai (1):
mm/swapfile: do not sleep with a spin lock held
Qiujun Huang (2):
ring-buffer: Return 0 on success from ring_buffer_resize()
tracing: Fix out of bounds write in get_trace_buf
Rafael J. Wysocki (1):
driver core: Extend device_is_dependent()
Randy Dunlap (1):
net: sched: prevent invalid Scell_log shift count
Ritika Srivastava (2):
block: Return blk_status_t instead of errno codes
block: better deal with the delayed not supported case in
blk_cloned_rq_check_limits
Ronnie Sahlberg (1):
cifs: handle -EINTR in cifs_setattr
Ryan Sharpelletti (1):
tcp: only postpone PROBE_RTT if RTT is < current min_rtt estimate
Sami Tolvanen (1):
arm64: lse: fix LSE atomics with LLVM's integrated assembler
Sean Tranchetti (1):
net: ipv6: fib: flush exceptions when purging route
Shakeel Butt (2):
mm: swap: fix vmstats for huge pages
mm: swap: memcg: fix memcg stats for huge pages
Shijie Luo (1):
mm: mempolicy: fix potential pte_unmap_unlock pte error
Shin'ichiro Kawasaki (1):
uio: Fix use-after-free in uio_unregister_device()
Stefano Brivio (1):
netfilter: ipset: Update byte and packet counters regardless of
whether they match
Steven Rostedt (VMware) (4):
ring-buffer: Fix recursion protection transitions between interrupt
context
ftrace: Fix recursion check for NMI test
ftrace: Handle tracing when switching between context
tracing: Fix userstacktrace option for instances
Subash Abhinov Kasiviswanathan (2):
netfilter: x_tables: Switch synchronization to RCU
netfilter: x_tables: Update remaining dereference to RCU
Sven Eckelmann (2):
vxlan: Add needed_headroom for lower device
vxlan: Copy needed_tailroom from lowerdev
Sylwester Dziedziuch (2):
i40e: Fix removing driver while bare-metal VFs pass traffic
i40e: Fix Error I40E_AQ_RC_EINVAL when removing VFs
Takashi Iwai (1):
libata: transport: Use scnprintf() for avoiding potential buffer
overflow
Tariq Toukan (1):
net: Disable NETIF_F_HW_TLS_RX when RXCSUM is disabled
Thomas Gleixner (12):
sched: Reenable interrupts in do_sched_yield()
futex: Move futex exit handling into futex code
futex: Replace PF_EXITPIDONE with a state
exit/exec: Seperate mm_release()
futex: Split futex_mm_release() for exit/exec
futex: Set task::futex_state to DEAD right after handling futex exit
futex: Mark the begin of futex exit explicitly
futex: Sanitize exit state handling
futex: Provide state handling for exec() as well
futex: Add mutex around futex exit
futex: Provide distinct return value when owner is exiting
futex: Prevent exit livelock
Tianyue Ren (1):
selinux: fix error initialization in inode_doinit_with_dentry()
Trond Myklebust (5):
SUNRPC: xprt_load_transport() needs to support the netid "rdma6"
NFSv4: Fix a pNFS layout related use-after-free race when freeing the
inode
pNFS: Mark layout for return if return-on-close was not sent
NFS/pNFS: Fix a leak of the layout 'plh_outstanding' counter
NFS: nfs_igrab_and_active must first reference the superblock
Tung Nguyen (1):
tipc: fix memory leak caused by tipc_buf_append()
Tyler Hicks (1):
tpm: efi: Don't create binary_bios_measurements file for an empty log
Uwe Kleine-König (1):
spi: fix resource leak for drivers without .remove callback
Vadim Fedorenko (1):
net/tls: missing received data after fast remote close
Valentin Schneider (1):
arm64: topology: Stop using MPIDR for topology information
Vamshi K Sthambamkadi (1):
efivarfs: fix memory leak in efivarfs_create()
Vasily Averin (2):
netfilter: ipset: fix shift-out-of-bounds in htable_bits()
net: drop bogus skb with CHECKSUM_PARTIAL and offset beyond end of
trimmed packet
Vincenzo Frascino (1):
arm64: lse: Fix LSE atomics with LLVM
Vladyslav Tarasiuk (1):
net/mlx5: Disable QoS when min_rates on all VFs are zero
Wang Hai (4):
devlink: Add missing genlmsg_cancel() in
devlink_nl_sb_port_pool_fill()
inet_diag: Fix error path to cancel the meseage in
inet_req_diag_fill()
tipc: fix memory leak in tipc_topsrv_start()
ipv6: addrlabel: fix possible memory leak in ip6addrlbl_net_init
Wang Wensheng (1):
sbsa_gwdt: Add WDIOF_PRETIMEOUT flag to watchdog_info at defination
Wei Li (1):
irqchip/gic-v3: Fix compiling error on ARM32 with GICv3
Wei Yang (1):
mm: thp: don't need care deferred split queue in memcg charge move
path
Weilong Chen (1):
hugetlbfs: Add dependency with ascend memory features
Wengang Wang (1):
ocfs2: initialize ip_next_orphan
Will Deacon (3):
arm64: psci: Avoid printing in cpu_psci_cpu_die()
arm64: pgtable: Fix pte_accessible()
arm64: pgtable: Ensure dirty bit is preserved across pte_wrprotect()
Willem de Bruijn (1):
sock: set sk_err to ee_errno on dequeue from errq
Wu Bo (1):
scsi: libiscsi: fix task hung when iscsid deamon exited
Xie XiuQi (1):
cputime: fix undefined reference to get_idle_time when CONFIG_PROC_FS
disabled
Xin Long (1):
sctp: change to hold/put transport for proto_unreach_timer
Xiongfeng Wang (1):
arm64: fix compile error when CONFIG_HOTPLUG_CPU is disabled
Xiubo Li (1):
nbd: make the config put is called before the notifying the waiter
Xu Qiang (4):
NMI: Enable arm-pmu interrupt as NMI in Acensed.
irqchip/gic-v3-its: Unconditionally save/restore the ITS state on
suspend.
irqchip/irq-gic-v3: Add workaround bindings in device tree to init ts
core GICR.
Document: In the binding document, add enable-init-all-GICR field
description.
Yang Shi (6):
mm: list_lru: set shrinker map bit when child nr_items is not zero
mm: thp: extract split_queue_* into a struct
mm: move mem_cgroup_uncharge out of __page_cache_release()
mm: shrinker: make shrinker not depend on memcg kmem
mm: thp: make deferred split shrinker memcg aware
mm: vmscan: protect shrinker idr replace with CONFIG_MEMCG
Yang Yingliang (5):
arm64: arch_timer: only do cntvct workaround on VDSO path on D05
armv7 fix compile error
Kconfig: disable KTASK by default
futex: sched: fix kabi broken in task_struct
futex: sched: fix UAF when free futex_exit_mutex in free_task()
Yi-Hung Wei (1):
ip_tunnels: Set tunnel option flag when tunnel metadata is present
Yicong Yang (1):
libfs: fix error cast of negative value in simple_attr_write()
Yu Kuai (2):
blk-cgroup: prevent rcu_sched detected stalls warnings in
blkg_destroy_all()
blk-throttle: don't check whether or not lower limit is valid if
CONFIG_BLK_DEV_THROTTLING_LOW is off
Yufen Yu (2):
bdi: fix compiler error in bdi_get_dev_name()
scsi: do quiesce for enclosure driver
Yunfeng Ye (1):
workqueue: Kick a worker based on the actual activation of delayed
works
Yunjian Wang (2):
net: hns: fix return value check in __lb_other_process()
vhost_net: fix ubuf refcount incorrectly when sendmsg fails
Yunsheng Lin (1):
net: sch_generic: fix the missing new qdisc assignment bug
Zeng Tao (1):
time: Prevent undefined behaviour in timespec64_to_ns()
Zhang Changzhong (2):
ah6: fix error return code in ah6_input()
net: bridge: vlan: fix error return code in __vlan_add()
Zhao Heming (2):
md/cluster: block reshape with remote resync job
md/cluster: fix deadlock when node is doing resync job
Zhengyuan Liu (3):
arm64/mm: return cpu_all_mask when node is NUMA_NO_NODE
hifc: remove unnecessary __init specifier
mmap: fix a compiling error for 'MAP_PA32BIT'
Zhou Guanghui (2):
memcg/ascend: Check sysctl oom config for memcg oom
memcg/ascend: enable kmem cgroup by default for ascend
Zqiang (1):
kthread_worker: prevent queuing delayed work from timer_fn when it is
being canceled
chenmaodong (1):
fix virtio_gpu use-after-free while creating dumb
j.nixdorf(a)avm.de (1):
net: sunrpc: interpret the return value of kstrtou32 correctly
lijinlin (1):
ext4: add ext3 report error to userspace by netlink
miaoyubo (1):
KVM: Enable PUD huge mappings only on 1620
yangerkun (1):
ext4: fix bug for rename with RENAME_WHITEOUT
zhuoliang zhang (1):
net: xfrm: fix a race condition during allocing spi
.../interrupt-controller/arm,gic-v3.txt | 4 +
Documentation/networking/bonding.txt | 16 +-
arch/Kconfig | 7 +
arch/alpha/include/uapi/asm/mman.h | 2 +
arch/arm/kernel/perf_event_v7.c | 16 +-
arch/arm64/Kconfig | 2 +
arch/arm64/configs/euleros_defconfig | 1 +
arch/arm64/configs/hulk_defconfig | 1 +
arch/arm64/configs/openeuler_defconfig | 3 +-
arch/arm64/include/asm/atomic_lse.h | 76 ++-
arch/arm64/include/asm/cpufeature.h | 4 +
arch/arm64/include/asm/lse.h | 6 +-
arch/arm64/include/asm/memory.h | 11 +
arch/arm64/include/asm/numa.h | 3 +
arch/arm64/include/asm/pgtable.h | 34 +-
arch/arm64/kernel/cpu_errata.c | 8 +
arch/arm64/kernel/cpufeature.c | 2 +-
arch/arm64/kernel/ftrace.c | 50 +-
arch/arm64/kernel/module.c | 47 +-
arch/arm64/kernel/psci.c | 5 +-
arch/arm64/kernel/setup.c | 5 +-
arch/arm64/kernel/syscall.c | 2 +-
arch/arm64/kernel/topology.c | 43 +-
arch/arm64/lib/memcpy.S | 3 +-
arch/arm64/lib/memmove.S | 3 +-
arch/arm64/lib/memset.S | 3 +-
arch/arm64/mm/init.c | 17 +-
arch/arm64/mm/numa.c | 6 +-
arch/mips/include/uapi/asm/mman.h | 2 +
arch/parisc/include/uapi/asm/mman.h | 2 +
arch/powerpc/include/asm/uaccess.h | 4 +-
arch/powerpc/include/uapi/asm/mman.h | 2 +
arch/sparc/include/uapi/asm/mman.h | 2 +
arch/x86/configs/hulk_defconfig | 1 +
arch/x86/configs/openeuler_defconfig | 1 +
arch/xtensa/include/uapi/asm/mman.h | 2 +
block/bfq-iosched.c | 8 +-
block/blk-cgroup.c | 30 +-
block/blk-core.c | 39 +-
block/blk-mq.c | 29 +-
block/blk-throttle.c | 6 +
block/genhd.c | 9 +-
drivers/acpi/acpi_pnp.c | 3 +
drivers/acpi/internal.h | 2 +-
drivers/acpi/numa.c | 2 +-
drivers/acpi/resource.c | 2 +-
drivers/acpi/scan.c | 17 +-
drivers/ata/libata-transport.c | 10 +-
drivers/base/core.c | 21 +-
drivers/block/nbd.c | 3 +-
drivers/block/null_blk_zoned.c | 20 +-
drivers/char/Kconfig | 2 +-
drivers/char/random.c | 1 -
drivers/char/tpm/eventlog/efi.c | 5 +
drivers/clocksource/arm_arch_timer.c | 36 +-
drivers/firmware/efi/efi.c | 4 -
drivers/firmware/efi/libstub/arm-stub.c | 3 -
drivers/gpu/drm/virtio/virtgpu_gem.c | 4 +-
drivers/irqchip/irq-gic-v3-its.c | 30 +-
drivers/irqchip/irq-gic-v3.c | 18 +-
drivers/md/dm-bufio.c | 6 +
drivers/md/dm-integrity.c | 58 ++-
drivers/md/dm-snap.c | 24 +
drivers/md/dm-table.c | 15 +-
drivers/md/dm-verity-target.c | 12 +-
drivers/md/md-cluster.c | 67 +--
drivers/md/md.c | 14 +-
drivers/mtd/hisilicon/sfc/hrd_sfc_driver.c | 4 +-
drivers/net/bonding/bond_main.c | 92 ++--
drivers/net/bonding/bond_netlink.c | 14 +
drivers/net/bonding/bond_options.c | 71 ++-
drivers/net/bonding/bond_procfs.c | 2 +
drivers/net/bonding/bond_sysfs.c | 13 +
drivers/net/bonding/bond_sysfs_slave.c | 18 +-
.../net/ethernet/hisilicon/hns/hns_ethtool.c | 4 +
drivers/net/ethernet/intel/i40e/i40e.h | 4 +
drivers/net/ethernet/intel/i40e/i40e_main.c | 32 +-
.../ethernet/intel/i40e/i40e_virtchnl_pf.c | 30 +-
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 24 +-
.../net/ethernet/mellanox/mlx4/en_netdev.c | 21 +-
drivers/net/ethernet/mellanox/mlx4/en_tx.c | 40 +-
drivers/net/ethernet/mellanox/mlx4/mlx4_en.h | 12 +-
.../net/ethernet/mellanox/mlx5/core/en_fs.c | 3 +
.../net/ethernet/mellanox/mlx5/core/eswitch.c | 15 +-
.../net/ethernet/mellanox/mlx5/core/main.c | 6 +-
.../ethernet/mellanox/mlx5/core/pagealloc.c | 21 +-
drivers/net/ethernet/mellanox/mlxsw/core.c | 8 +-
drivers/net/geneve.c | 3 +-
drivers/net/macvlan.c | 5 +-
drivers/net/virtio_net.c | 12 +-
drivers/net/vxlan.c | 3 +
drivers/nvme/host/pci.c | 15 +
drivers/pci/controller/pcie-iproc.c | 10 +-
drivers/pci/pci.c | 14 +-
drivers/pci/slot.c | 6 +-
drivers/scsi/huawei/hifc/unf_common.h | 2 +-
drivers/scsi/huawei/hifc/unf_scsi.c | 23 +
drivers/scsi/libiscsi.c | 33 +-
drivers/scsi/scsi_lib.c | 126 +++--
drivers/scsi/scsi_scan.c | 11 +-
drivers/scsi/scsi_transport_spi.c | 27 +-
drivers/spi/spi.c | 19 +-
drivers/target/iscsi/iscsi_target.c | 96 ++--
drivers/target/iscsi/iscsi_target.h | 1 -
drivers/target/iscsi/iscsi_target_configfs.c | 5 +-
drivers/target/iscsi/iscsi_target_login.c | 5 +-
drivers/tty/serial/amba-pl011.c | 11 +-
drivers/tty/serial/serial_core.c | 4 +
drivers/uio/uio.c | 12 +-
drivers/vfio/pci/vfio_pci.c | 3 +-
drivers/vhost/net.c | 6 +-
drivers/watchdog/sbsa_gwdt.c | 6 +-
fs/cifs/cifs_unicode.c | 8 +-
fs/cifs/connect.c | 2 +
fs/cifs/inode.c | 13 +-
fs/efivarfs/inode.c | 2 +
fs/efivarfs/super.c | 3 +
fs/exec.c | 17 +-
fs/ext4/ext4.h | 6 +-
fs/ext4/ext4_jbd2.c | 1 -
fs/ext4/file.c | 1 +
fs/ext4/inline.c | 1 +
fs/ext4/inode.c | 31 +-
fs/ext4/ioctl.c | 3 +
fs/ext4/mballoc.c | 1 +
fs/ext4/namei.c | 23 +-
fs/ext4/resize.c | 8 +-
fs/ext4/super.c | 32 +-
fs/ext4/xattr.c | 1 +
fs/fcntl.c | 10 +-
fs/filescontrol.c | 73 +--
fs/fuse/dev.c | 28 +-
fs/hugetlbfs/inode.c | 2 +-
fs/libfs.c | 6 +-
fs/lockd/host.c | 20 +-
fs/nfs/inode.c | 2 +-
fs/nfs/internal.h | 12 +-
fs/nfs/nfs4proc.c | 2 +-
fs/nfs/nfs4super.c | 2 +-
fs/nfs/pnfs.c | 40 +-
fs/nfs/pnfs.h | 5 +
fs/nfs_common/grace.c | 6 +-
fs/ocfs2/super.c | 1 +
fs/proc/generic.c | 55 ++-
fs/proc/internal.h | 7 +
fs/proc/proc_net.c | 16 -
fs/proc/self.c | 7 +
fs/quota/quota_tree.c | 8 +-
fs/super.c | 33 +-
fs/xfs/libxfs/xfs_alloc.c | 1 +
fs/xfs/libxfs/xfs_bmap.c | 19 +-
fs/xfs/libxfs/xfs_bmap.h | 2 +-
fs/xfs/libxfs/xfs_rmap.c | 2 +-
fs/xfs/scrub/bmap.c | 10 +-
fs/xfs/scrub/btree.c | 45 +-
fs/xfs/scrub/inode.c | 3 +-
fs/xfs/scrub/refcount.c | 8 +-
fs/xfs/xfs_iops.c | 10 +
fs/xfs/xfs_pnfs.c | 2 +-
fs/xfs/xfs_rtalloc.c | 10 +-
include/asm-generic/io.h | 39 +-
include/linux/backing-dev.h | 1 +
include/linux/blkdev.h | 1 +
include/linux/compat.h | 2 -
include/linux/dm-bufio.h | 1 +
include/linux/efi.h | 7 -
include/linux/futex.h | 39 +-
include/linux/huge_mm.h | 9 +
include/linux/hugetlb.h | 10 +-
include/linux/if_team.h | 5 +-
include/linux/memblock.h | 3 -
include/linux/memcontrol.h | 32 +-
include/linux/mm.h | 2 +
include/linux/mm_types.h | 1 +
include/linux/mman.h | 15 +
include/linux/mmzone.h | 8 +
include/linux/netfilter/x_tables.h | 5 +-
include/linux/netfilter_ipv4.h | 2 +-
include/linux/netfilter_ipv6.h | 2 +-
include/linux/netpoll.h | 10 +-
include/linux/prandom.h | 36 +-
include/linux/proc_fs.h | 8 +-
include/linux/sched.h | 5 +-
include/linux/sched/cputime.h | 5 +
include/linux/sched/mm.h | 6 +-
include/linux/shrinker.h | 7 +-
include/linux/skbuff.h | 5 +
include/linux/sunrpc/xprt.h | 1 +
include/linux/time64.h | 4 +
include/net/bond_options.h | 1 +
include/net/bonding.h | 14 +-
include/net/ip_tunnels.h | 7 +-
include/net/red.h | 4 +-
include/net/tls.h | 6 +
include/scsi/libiscsi.h | 3 +
include/target/iscsi/iscsi_target_core.h | 2 +-
include/uapi/asm-generic/mman.h | 1 +
include/uapi/linux/if_link.h | 1 +
init/Kconfig | 2 +-
kernel/events/internal.h | 2 +-
kernel/exit.c | 35 +-
kernel/fail_function.c | 5 +-
kernel/fork.c | 61 ++-
kernel/futex.c | 291 +++++++++--
kernel/irq/irqdomain.c | 11 +-
kernel/kthread.c | 3 +-
kernel/module.c | 6 +-
kernel/reboot.c | 28 +-
kernel/sched/core.c | 6 +-
kernel/sched/cputime.c | 6 +
kernel/sched/deadline.c | 5 +-
kernel/sched/sched.h | 42 +-
kernel/signal.c | 19 +-
kernel/time/itimer.c | 4 -
kernel/time/tick-common.c | 2 +
kernel/time/timer.c | 7 -
kernel/trace/ftrace.c | 22 +-
kernel/trace/ring_buffer.c | 66 ++-
kernel/trace/trace.c | 9 +-
kernel/trace/trace.h | 32 +-
kernel/trace/trace_selftest.c | 9 +-
kernel/workqueue.c | 13 +-
lib/Kconfig.debug | 9 +
lib/Makefile | 1 +
lib/fonts/font_10x18.c | 2 +-
lib/fonts/font_6x10.c | 2 +-
lib/fonts/font_6x11.c | 2 +-
lib/fonts/font_7x14.c | 2 +-
lib/fonts/font_8x16.c | 2 +-
lib/fonts/font_8x8.c | 2 +-
lib/fonts/font_acorn_8x8.c | 2 +-
lib/fonts/font_mini_4x6.c | 2 +-
lib/fonts/font_pearl_8x8.c | 2 +-
lib/fonts/font_sun12x22.c | 2 +-
lib/fonts/font_sun8x16.c | 2 +-
lib/genalloc.c | 25 +-
lib/random32.c | 462 +++++++++++-------
lib/scatterlist.c | 2 +-
lib/test_free_pages.c | 42 ++
mm/huge_memory.c | 167 +++++--
mm/hugetlb.c | 5 +-
mm/kasan/kasan_init.c | 23 +-
mm/list_lru.c | 10 +-
mm/memblock.c | 11 +-
mm/memcontrol.c | 34 +-
mm/memory-failure.c | 6 +
mm/mempolicy.c | 6 +-
mm/mlock.c | 25 +-
mm/mmap.c | 27 +-
mm/page_alloc.c | 9 +
mm/slub.c | 2 +-
mm/swap.c | 37 +-
mm/swapfile.c | 4 +-
mm/vmscan.c | 64 +--
net/8021q/vlan.c | 3 +-
net/8021q/vlan_dev.c | 5 +-
net/bridge/br_device.c | 1 +
net/bridge/br_netfilter_hooks.c | 7 +-
net/bridge/br_private.h | 5 +-
net/bridge/br_vlan.c | 4 +-
net/ceph/messenger.c | 5 +
net/core/dev.c | 5 +
net/core/devlink.c | 6 +-
net/core/lwt_bpf.c | 8 +-
net/core/net-sysfs.c | 65 ++-
net/core/netpoll.c | 51 +-
net/core/skbuff.c | 23 +-
net/dsa/slave.c | 5 +-
net/ipv4/fib_frontend.c | 2 +-
net/ipv4/gre_demux.c | 2 +-
net/ipv4/inet_diag.c | 4 +-
net/ipv4/ip_output.c | 2 +-
net/ipv4/ip_tunnel.c | 10 +-
net/ipv4/netfilter.c | 12 +-
net/ipv4/netfilter/arp_tables.c | 16 +-
net/ipv4/netfilter/ip_tables.c | 16 +-
net/ipv4/netfilter/ipt_SYNPROXY.c | 2 +-
net/ipv4/netfilter/ipt_rpfilter.c | 2 +-
net/ipv4/netfilter/iptable_mangle.c | 2 +-
net/ipv4/netfilter/nf_nat_l3proto_ipv4.c | 2 +-
net/ipv4/netfilter/nf_reject_ipv4.c | 2 +-
net/ipv4/netfilter/nft_chain_route_ipv4.c | 2 +-
net/ipv4/route.c | 7 +-
net/ipv4/syncookies.c | 9 +-
net/ipv4/tcp.c | 2 +
net/ipv4/tcp_bbr.c | 2 +-
net/ipv4/tcp_cong.c | 5 +
net/ipv4/tcp_input.c | 6 +-
net/ipv4/tcp_output.c | 9 +-
net/ipv4/udp.c | 3 +-
net/ipv6/addrconf.c | 3 +-
net/ipv6/addrlabel.c | 26 +-
net/ipv6/ah6.c | 3 +-
net/ipv6/ip6_fib.c | 5 +-
net/ipv6/ip6_gre.c | 16 +-
net/ipv6/ip6_output.c | 40 +-
net/ipv6/netfilter.c | 6 +-
net/ipv6/netfilter/ip6_tables.c | 16 +-
net/ipv6/netfilter/ip6table_mangle.c | 2 +-
net/ipv6/netfilter/nf_nat_l3proto_ipv6.c | 2 +-
net/ipv6/netfilter/nft_chain_route_ipv6.c | 2 +-
net/ipv6/sit.c | 2 -
net/ipv6/syncookies.c | 10 +-
net/netfilter/ipset/ip_set_core.c | 3 +-
net/netfilter/ipset/ip_set_hash_gen.h | 20 +-
net/netfilter/ipvs/ip_vs_core.c | 4 +-
net/netfilter/nf_conntrack_standalone.c | 3 +
net/netfilter/nf_nat_core.c | 1 +
net/netfilter/nf_tables_api.c | 3 +-
net/netfilter/x_tables.c | 49 +-
net/netfilter/xt_RATEEST.c | 3 +
net/sched/cls_tcindex.c | 8 +-
net/sched/sch_api.c | 3 +-
net/sched/sch_choke.c | 2 +-
net/sched/sch_generic.c | 3 +
net/sched/sch_gred.c | 2 +-
net/sched/sch_netem.c | 9 +-
net/sched/sch_red.c | 2 +-
net/sched/sch_sfq.c | 2 +-
net/sctp/input.c | 4 +-
net/sctp/sm_sideeffect.c | 8 +-
net/sctp/transport.c | 2 +-
net/sunrpc/addr.c | 2 +-
net/sunrpc/xprt.c | 65 ++-
net/sunrpc/xprtrdma/module.c | 1 +
net/sunrpc/xprtrdma/transport.c | 1 +
net/sunrpc/xprtsock.c | 4 +
net/tipc/link.c | 9 +-
net/tipc/msg.c | 5 +-
net/tipc/topsrv.c | 10 +-
net/tls/tls_device.c | 5 +-
net/tls/tls_sw.c | 6 +
net/xdp/xsk.c | 8 +-
net/xfrm/xfrm_state.c | 8 +-
security/lsm_audit.c | 7 +-
security/selinux/hooks.c | 16 +-
security/selinux/ibpkey.c | 4 +-
sound/soc/codecs/Kconfig | 1 +
tools/include/uapi/linux/if_link.h | 1 +
tools/perf/builtin-lock.c | 2 +-
tools/perf/util/print_binary.c | 2 +-
.../scripting-engines/trace-event-python.c | 7 +-
tools/perf/util/session.c | 1 +
virt/kvm/arm/mmu.c | 7 +
344 files changed, 3424 insertions(+), 1701 deletions(-)
create mode 100644 lib/test_free_pages.c
--
2.25.1

[PATCH 001/316] blk-cgroup: prevent rcu_sched detected stalls warnings in blkg_destroy_all()
by Cheng Jian 22 Feb '21
by Cheng Jian 22 Feb '21
22 Feb '21
From: Yu Kuai <yukuai3(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: 46357
CVE: NA
---------------------------
test procedures:
a. create 20000 cgroups, and echo "8:0 10000" to
blkio.throttle.write_bps_device
b. echo 1 > /sys/block/sda/device/delete
test result:
rcu: INFO: rcu_sched detected stalls on CPUs/tasks: [5/1143]
rcu: 0-...0: (0 ticks this GP) idle=0f2/1/0x4000000000000000 softirq=15507/15507 fq
rcu: (detected by 6, t=60012 jiffies, g=119977, q=27153)
NMI backtrace for cpu 0
CPU: 0 PID: 443 Comm: bash Not tainted 4.19.95-00061-g0bcc83b30eec #63
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS ?-20190727_073836-buildvm-p4
RIP: 0010:blk_throtl_update_limit_valid.isra.0+0x116/0x2a0
Code: 01 00 00 e8 7c dd 74 ff 48 83 bb 78 01 00 00 00 0f 85 54 01 00 00 48 8d bb 88 01 1
RSP: 0018:ffff8881030bf9f0 EFLAGS: 00000046
RAX: 0000000000000000 RBX: ffff8880b4f37080 RCX: ffffffff95da0afe
RDX: dffffc0000000000 RSI: ffff888100373980 RDI: ffff8880b4f37208
RBP: ffff888100deca00 R08: ffffffff9528f951 R09: 0000000000000001
R10: ffffed10159dbf56 R11: ffff8880acedfab3 R12: ffff8880b9fda498
R13: ffff8880b9fda4f4 R14: 0000000000000050 R15: ffffffff98b622c0
FS: 00007feb51c51700(0000) GS:ffff888106200000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000561619547080 CR3: 0000000102bc9000 CR4: 00000000000006f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
throtl_pd_offline+0x98/0x100
blkg_destroy+0x133/0x630
? blkcg_deactivate_policy+0x2c0/0x2c0
? lock_timer_base+0x65/0x110
blkg_destroy_all+0x7f/0x100
blkcg_exit_queue+0x3f/0xa5
blk_exit_queue+0x69/0xa0
blk_cleanup_queue+0x226/0x360
__scsi_remove_device+0xb4/0x3c0
scsi_remove_device+0x38/0x60
sdev_store_delete+0x74/0x100
? dev_driver_string+0xb0/0xb0
dev_attr_store+0x41/0x70
sysfs_kf_write+0x89/0xc0
kernfs_fop_write+0x1b6/0x2e0
? sysfs_kf_bin_read+0x130/0x130
__vfs_write+0xca/0x420
? kernel_read+0xc0/0xc0
? __alloc_fd+0x16f/0x2d0
? __fd_install+0x95/0x1a0
? handle_mm_fault+0x3e0/0x560
vfs_write+0x11a/0x2f0
ksys_write+0xb9/0x1e0
? __x64_sys_read+0x60/0x60
? kasan_check_write+0x20/0x30
? filp_close+0xb5/0xf0
__x64_sys_write+0x46/0x60
do_syscall_64+0xd9/0x1f0
entry_SYSCALL_64_after_hwframe+0x44/0xa9
Having this many blkgs is very rare; however, the problem does
exist in theory. To avoid such warnings, release 'q->queue_lock'
for a while after each batch of blkgs is destroyed.
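As an illustrative userspace analogue of the idiom used below
(hypothetical types and names, not kernel code): do a bounded
batch of work under the lock, then drop it and yield before
continuing.

#include <pthread.h>
#include <sched.h>
#include <stdlib.h>

#define BATCH 4096

struct node {
	struct node *next;
};

static void destroy_all(struct node **list, pthread_mutex_t *lock)
{
	int count;

again:
	count = BATCH;
	pthread_mutex_lock(lock);
	while (*list) {
		struct node *n = *list;

		*list = n->next;
		free(n);
		if (!--count) {
			/* drop the lock periodically so others can run */
			pthread_mutex_unlock(lock);
			sched_yield();
			goto again;
		}
	}
	pthread_mutex_unlock(lock);
}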
Signed-off-by: Yu Kuai <yukuai3(a)huawei.com>
Reviewed-by: Tao Hou <houtao1(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Signed-off-by: Cheng Jian <cj.chengjian(a)huawei.com>
---
block/blk-cgroup.c | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index e592167449aa..c64f0afa27dc 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -364,16 +364,31 @@ static void blkg_destroy(struct blkcg_gq *blkg)
*/
static void blkg_destroy_all(struct request_queue *q)
{
+#define BLKG_DESTROY_BATCH 4096
struct blkcg_gq *blkg, *n;
+ int count;
lockdep_assert_held(q->queue_lock);
+again:
+ count = BLKG_DESTROY_BATCH;
list_for_each_entry_safe(blkg, n, &q->blkg_list, q_node) {
struct blkcg *blkcg = blkg->blkcg;
spin_lock(&blkcg->lock);
blkg_destroy(blkg);
spin_unlock(&blkcg->lock);
+ /*
+ * If the list is too long, the loop can take a long time,
+ * so release the lock for a while after a batch of blkg
+ * have been destroyed.
+ */
+ if (!--count) {
+ spin_unlock_irq(q->queue_lock);
+ cond_resched();
+ spin_lock_irq(q->queue_lock);
+ goto again;
+ }
}
q->root_blkg = NULL;
--
2.25.1
1
13

2
1
Proposed agenda item:
2. Discuss setting up SIG security liaisons and how to shorten the CVE response time for projects under each SIG
魏刚
---Original---
From: "openEuler conference"<public(a)openeuler.io>
Date: Thu, Feb 18, 2021 17:12 PM
To: "dev"<dev(a)openeuler.org>;"kernel"<kernel(a)openeuler.org>;
Subject: [Dev] openEuler kernel sig meeting
Hello!
openEuler Kernel SIG invites you to the Zoom meeting to be held at 2021-02-19 10:00.
Meeting subject: openEuler kernel sig meeting
Meeting content: 1. Discussion of merging the Raspberry Pi patches into the openEuler 5.10 kernel
Meeting link: https://zoom.us/j/92887086207?pwd=emdmSVVFekh5eXRSczNNTngrcWYzUT09
More information: https://openeuler.org/zh/
Hello!
openEuler Kernel SIG invites you to attend the Zoom conference that will be held at 2021-02-19 10:00.
The subject of the conference is openEuler kernel sig meeting.
Summary: 1. Discussion of merging the Raspberry Pi patches into the openEuler 5.10 kernel
You can join the meeting at https://zoom.us/j/92887086207?pwd=emdmSVVFekh5eXRSczNNTngrcWYzUT09.
More information
1
0
Arnold Gozum (1):
platform/x86: intel-vbtn: Support for tablet mode on Dell Inspiron
7352
Brian King (1):
scsi: ibmvfc: Set default timeout to avoid crash during migration
Christian Brauner (1):
sysctl: handle overflow in proc_get_long
Eric Dumazet (1):
net_sched: gen_estimator: support large ewma log
Felix Fietkau (1):
mac80211: fix fast-rx encryption check
Greg Kroah-Hartman (1):
Linux 4.19.174
Hans de Goede (1):
platform/x86: touchscreen_dmi: Add swap-x-y quirk for Goodix
touchscreen on Estar Beauty HD tablet
Javed Hasan (1):
scsi: libfc: Avoid invoking response handler twice if ep is already
completed
Josh Poimboeuf (1):
objtool: Don't fail on missing symbol table
Lijun Pan (1):
ibmvnic: Ensure that CRQ entry read are correctly ordered
Martin Wilck (1):
scsi: scsi_transport_srp: Don't block target in failfast state
Michael Ellerman (1):
selftests/powerpc: Only test lwm/stmw on big endian
Pan Bian (1):
net: dsa: bcm_sf2: put device node before return
Peter Zijlstra (3):
x86: __always_inline __{rd,wr}msr()
kthread: Extract KTHREAD_IS_PER_CPU
workqueue: Restrict affinity change to rescuer
Rafael J. Wysocki (1):
ACPI: thermal: Do not call acpi_thermal_check() directly
Tony Lindgren (1):
phy: cpcap-usb: Fix warning for missing regulator_disable
Makefile | 2 +-
arch/x86/include/asm/msr.h | 4 +-
drivers/acpi/thermal.c | 55 +++++++++++++------
drivers/net/dsa/bcm_sf2.c | 8 ++-
drivers/net/ethernet/ibm/ibmvnic.c | 6 ++
drivers/phy/motorola/phy-cpcap-usb.c | 19 +++++--
drivers/platform/x86/intel-vbtn.c | 6 ++
drivers/platform/x86/touchscreen_dmi.c | 18 ++++++
drivers/scsi/ibmvscsi/ibmvfc.c | 4 +-
drivers/scsi/libfc/fc_exch.c | 16 +++++-
drivers/scsi/scsi_transport_srp.c | 9 ++-
include/linux/kthread.h | 3 +
kernel/kthread.c | 27 ++++++++-
kernel/smpboot.c | 1 +
kernel/sysctl.c | 2 +
kernel/workqueue.c | 9 +--
net/core/gen_estimator.c | 11 ++--
net/mac80211/rx.c | 2 +
tools/objtool/elf.c | 7 ++-
.../powerpc/alignment/alignment_handler.c | 5 +-
20 files changed, 168 insertions(+), 46 deletions(-)
--
2.25.1
1
18

[PATCH kernel-4.19 1/3] mmap: fix a compiling error for 'MAP_CHECKNODE'
by Yang Yingliang 09 Feb '21
by Yang Yingliang 09 Feb '21
09 Feb '21
From: Bixuan Cui <cuibixuan(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: NA
CVE: NA
MAP_CHECKNODE was defined in uapi/asm-generic/mman.h, which is not
automatically included by mm/mmap.c when building on platforms such as
MIPS, resulting in the following compile error:
mm/mmap.c: In function ‘__do_mmap’:
mm/mmap.c:1581:14: error: ‘MAP_CHECKNODE’ undeclared (first use in this function)
if (flags & MAP_CHECKNODE)
^
mm/mmap.c:1581:14: note: each undeclared identifier is reported only once for each function it appears in
scripts/Makefile.build:303: recipe for target 'mm/mmap.o' failed
Fixes: 56a22a261008 ("arm64/ascend: mm: Add MAP_CHECKNODE flag to check node hugetlb")
Signed-off-by: Bixuan Cui <cuibixuan(a)huawei.com>
Reviewed-by: Hanjun Guo <guohanjun(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/alpha/include/uapi/asm/mman.h | 1 +
arch/mips/include/uapi/asm/mman.h | 1 +
arch/parisc/include/uapi/asm/mman.h | 1 +
arch/powerpc/include/uapi/asm/mman.h | 1 +
arch/sparc/include/uapi/asm/mman.h | 1 +
arch/xtensa/include/uapi/asm/mman.h | 1 +
6 files changed, 6 insertions(+)
diff --git a/arch/alpha/include/uapi/asm/mman.h b/arch/alpha/include/uapi/asm/mman.h
index b3acfc00c8ec5..1c7ce2716ad37 100644
--- a/arch/alpha/include/uapi/asm/mman.h
+++ b/arch/alpha/include/uapi/asm/mman.h
@@ -34,6 +34,7 @@
#define MAP_HUGETLB 0x100000 /* create a huge page mapping */
#define MAP_FIXED_NOREPLACE 0x200000/* MAP_FIXED which doesn't unmap underlying mapping */
#define MAP_PA32BIT 0x400000 /* physical address is within 4G */
+#define MAP_CHECKNODE 0x800000 /* hugetlb numa node check */
#define MS_ASYNC 1 /* sync memory asynchronously */
#define MS_SYNC 2 /* synchronous memory sync */
diff --git a/arch/mips/include/uapi/asm/mman.h b/arch/mips/include/uapi/asm/mman.h
index 72a00c746e781..4570a54ac1d90 100644
--- a/arch/mips/include/uapi/asm/mman.h
+++ b/arch/mips/include/uapi/asm/mman.h
@@ -52,6 +52,7 @@
#define MAP_HUGETLB 0x80000 /* create a huge page mapping */
#define MAP_FIXED_NOREPLACE 0x100000 /* MAP_FIXED which doesn't unmap underlying mapping */
#define MAP_PA32BIT 0x400000 /* physical address is within 4G */
+#define MAP_CHECKNODE 0x800000 /* hugetlb numa node check */
/*
* Flags for msync
diff --git a/arch/parisc/include/uapi/asm/mman.h b/arch/parisc/include/uapi/asm/mman.h
index 9e989d649e854..06857eb1bee8f 100644
--- a/arch/parisc/include/uapi/asm/mman.h
+++ b/arch/parisc/include/uapi/asm/mman.h
@@ -28,6 +28,7 @@
#define MAP_HUGETLB 0x80000 /* create a huge page mapping */
#define MAP_FIXED_NOREPLACE 0x100000 /* MAP_FIXED which doesn't unmap underlying mapping */
#define MAP_PA32BIT 0x400000 /* physical address is within 4G */
+#define MAP_CHECKNODE 0x800000 /* hugetlb numa node check */
#define MS_SYNC 1 /* synchronous memory sync */
#define MS_ASYNC 2 /* sync memory asynchronously */
diff --git a/arch/powerpc/include/uapi/asm/mman.h b/arch/powerpc/include/uapi/asm/mman.h
index 95f884ada96f1..24354f792b00d 100644
--- a/arch/powerpc/include/uapi/asm/mman.h
+++ b/arch/powerpc/include/uapi/asm/mman.h
@@ -30,6 +30,7 @@
#define MAP_STACK 0x20000 /* give out an address that is best suited for process/thread stacks */
#define MAP_HUGETLB 0x40000 /* create a huge page mapping */
#define MAP_PA32BIT 0x400000 /* physical address is within 4G */
+#define MAP_CHECKNODE 0x800000 /* hugetlb numa node check */
/* Override any generic PKEY permission defines */
#define PKEY_DISABLE_EXECUTE 0x4
diff --git a/arch/sparc/include/uapi/asm/mman.h b/arch/sparc/include/uapi/asm/mman.h
index 0d1881b8f30d1..214abe17f44a0 100644
--- a/arch/sparc/include/uapi/asm/mman.h
+++ b/arch/sparc/include/uapi/asm/mman.h
@@ -27,6 +27,7 @@
#define MAP_STACK 0x20000 /* give out an address that is best suited for process/thread stacks */
#define MAP_HUGETLB 0x40000 /* create a huge page mapping */
#define MAP_PA32BIT 0x400000 /* physical address is within 4G */
+#define MAP_CHECKNODE 0x800000 /* hugetlb numa node check */
#endif /* _UAPI__SPARC_MMAN_H__ */
diff --git a/arch/xtensa/include/uapi/asm/mman.h b/arch/xtensa/include/uapi/asm/mman.h
index f584a590bb001..2c9d705602382 100644
--- a/arch/xtensa/include/uapi/asm/mman.h
+++ b/arch/xtensa/include/uapi/asm/mman.h
@@ -59,6 +59,7 @@
#define MAP_HUGETLB 0x80000 /* create a huge page mapping */
#define MAP_FIXED_NOREPLACE 0x100000 /* MAP_FIXED which doesn't unmap underlying mapping */
#define MAP_PA32BIT 0x400000 /* physical address is within 4G */
+#define MAP_CHECKNODE 0x800000 /* hugetlb numa node check */
#ifdef CONFIG_MMAP_ALLOW_UNINITIALIZED
# define MAP_UNINITIALIZED 0x4000000 /* For anonymous mmap, memory could be
* uninitialized */
--
2.25.1
1
2
Andrea Righi (1):
leds: trigger: fix potential deadlock with libata
Baoquan He (1):
kernel: kexec: remove the lock operation of system_transition_mutex
Bartosz Golaszewski (1):
iommu/vt-d: Don't dereference iommu_device if IOMMU_API is not built
Claudiu Beznea (1):
drivers: soc: atmel: add null entry at the end of
at91_soc_allowed_list[]
Dan Carpenter (1):
can: dev: prevent potential information leak in can_fill_info()
David Woodhouse (2):
xen: Fix XenStore initialisation for XS_LOCAL
iommu/vt-d: Gracefully handle DMAR units with no supported address
widths
Eyal Birger (1):
xfrm: fix disable_xfrm sysctl when used on xfrm interfaces
Giacinto Cifelli (1):
net: usb: qmi_wwan: added support for Thales Cinterion PLSx3 modem
family
Greg Kroah-Hartman (1):
Linux 4.19.173
Ivan Vecera (1):
team: protect features update by RCU to avoid deadlock
Jay Zhou (1):
KVM: x86: get smi pending status correctly
Johannes Berg (4):
wext: fix NULL-ptr-dereference with cfg80211's lack of commit()
iwlwifi: pcie: use jiffies for memory read spin time limit
iwlwifi: pcie: reschedule in long-running memory reads
mac80211: pause TX while changing interface type
Josef Bacik (1):
nbd: freeze the queue while we're adding connections
Kai-Heng Feng (1):
ACPI: sysfs: Prefer "compatible" modalias
Kamal Heib (1):
RDMA/cxgb4: Fix the reported max_recv_sge value
Koen Vandeputte (1):
ARM: dts: imx6qdl-gw52xx: fix duplicate regulator naming
Laurent Badel (1):
PM: hibernate: flush swap writer after marking
Like Xu (1):
KVM: x86/pmu: Fix HW_REF_CPU_CYCLES event pseudo-encoding in
intel_arch_events[]
Lorenzo Bianconi (2):
mt7601u: fix kernel crash unplugging the device
mt7601u: fix rx buffer refcounting
Max Krummenacher (1):
ARM: imx: build suspend-imx6.S with arm instruction set
Pablo Neira Ayuso (1):
netfilter: nft_dynset: add timeout extension to template
Pan Bian (2):
NFC: fix resource leak when target index is invalid
NFC: fix possible resource leak
Pengcheng Yang (1):
tcp: fix TLP timer not set when CA_STATE changes from DISORDER to OPEN
Roger Pau Monne (2):
xen/privcmd: allow fetching resource sizes
xen-blkfront: allow discard-* nodes to be optional
Roi Dayan (1):
net/mlx5: Fix memory leak on flow table creation error flow
Sean Young (1):
media: rc: ensure that uevent can be read directly after rc device
register
Shmulik Ladkani (1):
xfrm: Fix oops in xfrm_replay_advance_bmp
Sudeep Holla (1):
drivers: soc: atmel: Avoid calling at91_soc_init on non AT91 SoCs
Takashi Iwai (1):
ALSA: hda/via: Apply the workaround generically for Clevo machines
Takeshi Misawa (1):
rxrpc: Fix memory leak in rxrpc_lookup_local
Trond Myklebust (1):
pNFS/NFSv4: Fix a layout segment leak in pnfs_layout_process()
Makefile | 2 +-
arch/arm/boot/dts/imx6qdl-gw52xx.dtsi | 2 +-
arch/arm/mach-imx/suspend-imx6.S | 1 +
arch/x86/kvm/pmu_intel.c | 2 +-
arch/x86/kvm/x86.c | 5 +++
drivers/acpi/device_sysfs.c | 20 +++------
drivers/block/nbd.c | 8 ++++
drivers/block/xen-blkfront.c | 20 +++------
drivers/infiniband/hw/cxgb4/qp.c | 2 +-
drivers/iommu/dmar.c | 45 +++++++++++++------
drivers/leds/led-triggers.c | 10 +++--
drivers/media/rc/rc-main.c | 4 +-
drivers/net/can/dev.c | 2 +-
.../net/ethernet/mellanox/mlx5/core/fs_core.c | 1 +
drivers/net/team/team.c | 6 +--
drivers/net/usb/qmi_wwan.c | 1 +
.../net/wireless/intel/iwlwifi/pcie/trans.c | 14 +++---
drivers/net/wireless/mediatek/mt7601u/dma.c | 5 +--
drivers/soc/atmel/soc.c | 13 ++++++
drivers/xen/privcmd.c | 25 ++++++++---
drivers/xen/xenbus/xenbus_probe.c | 31 +++++++++++++
fs/nfs/pnfs.c | 1 +
include/linux/intel-iommu.h | 2 +
include/net/tcp.h | 2 +-
kernel/kexec_core.c | 2 -
kernel/power/swap.c | 2 +-
net/ipv4/tcp_input.c | 10 +++--
net/ipv4/tcp_recovery.c | 5 ++-
net/mac80211/ieee80211_i.h | 1 +
net/mac80211/iface.c | 6 +++
net/netfilter/nft_dynset.c | 4 +-
net/nfc/netlink.c | 1 +
net/nfc/rawsock.c | 2 +-
net/rxrpc/call_accept.c | 1 +
net/wireless/wext-core.c | 5 ++-
net/xfrm/xfrm_input.c | 2 +-
net/xfrm/xfrm_policy.c | 4 +-
sound/pci/hda/patch_via.c | 2 +-
38 files changed, 183 insertions(+), 88 deletions(-)
--
2.25.1
1
38

07 Feb '21
From: Zheng Bin <zhengbin13(a)huawei.com>
stable inclusion
from linux-4.19.133
commit 306290bad932950adccfbf99c27b70c105c2c183
--------------------------------
[ Upstream commit 579dd91ab3a5446b148e7f179b6596b270dace46 ]
When adding the first socket to nbd, if nsock's allocation failed, the
data structure member "config->socks" was reallocated, but the data
structure member "config->num_connections" was not updated. A memory
leak will then occur, because the function "nbd_config_put" frees
"config->socks" only when "config->num_connections" is not zero.
Fixes: 03bf73c315ed ("nbd: prevent memory leak")
Reported-by: syzbot+934037347002901b8d2a(a)syzkaller.appspotmail.com
Signed-off-by: Zheng Bin <zhengbin13(a)huawei.com>
Reviewed-by: Eric Biggers <ebiggers(a)google.com>
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/block/nbd.c | 25 +++++++++++++++----------
1 file changed, 15 insertions(+), 10 deletions(-)
diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 49f98b9b2777..97fcb040056c 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -993,25 +993,26 @@ static int nbd_add_socket(struct nbd_device *nbd, unsigned long arg,
test_bit(NBD_RT_BOUND, &config->runtime_flags))) {
dev_err(disk_to_dev(nbd->disk),
"Device being setup by another task");
- sockfd_put(sock);
- return -EBUSY;
+ err = -EBUSY;
+ goto put_socket;
+ }
+
+ nsock = kzalloc(sizeof(*nsock), GFP_KERNEL);
+ if (!nsock) {
+ err = -ENOMEM;
+ goto put_socket;
}
socks = krealloc(config->socks, (config->num_connections + 1) *
sizeof(struct nbd_sock *), GFP_KERNEL);
if (!socks) {
- sockfd_put(sock);
- return -ENOMEM;
+ kfree(nsock);
+ err = -ENOMEM;
+ goto put_socket;
}
config->socks = socks;
- nsock = kzalloc(sizeof(struct nbd_sock), GFP_KERNEL);
- if (!nsock) {
- sockfd_put(sock);
- return -ENOMEM;
- }
-
nsock->fallback_index = -1;
nsock->dead = false;
mutex_init(&nsock->tx_lock);
@@ -1023,6 +1024,10 @@ static int nbd_add_socket(struct nbd_device *nbd, unsigned long arg,
atomic_inc(&config->live_connections);
return 0;
+
+put_socket:
+ sockfd_put(sock);
+ return err;
}
static int nbd_reconnect_socket(struct nbd_device *nbd, unsigned long arg)
--
2.25.1
1
1

[PATCH openEuler-1.0-LTS 1/7] futex: Ensure the correct return value from futex_lock_pi()
by Yang Yingliang 04 Feb '21
by Yang Yingliang 04 Feb '21
04 Feb '21
From: Thomas Gleixner <tglx(a)linutronix.de>
stable inclusion
from linux-4.19.172
commit 72f38fffa4758b878f819f8a47761b3f03443f36
CVE: CVE-2021-3347
--------------------------------
commit 12bb3f7f1b03d5913b3f9d4236a488aa7774dfe9 upstream
In case that futex_lock_pi() was aborted by a signal or a timeout and the
task returned without acquiring the rtmutex, but is the designated owner of
the futex due to a concurrent futex_unlock_pi(), fixup_owner() is invoked to
establish consistent state. In that case it invokes fixup_pi_state_owner()
which in turn tries to acquire the rtmutex again. If that succeeds then it
does not propagate this success to fixup_owner() and futex_lock_pi()
returns -EINTR or -ETIMEOUT despite having the futex locked.
Return success from fixup_pi_state_owner() in all cases where the current
task owns the rtmutex and therefore the futex and propagate it correctly
through fixup_owner(). Fixup the other callsite which does not expect a
positive return value.
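A condensed view of the return-value contract after this change,
paraphrased from the patch below (not a verbatim excerpt):

/*
 * fixup_pi_state_owner():
 *    1  - current owns the rtmutex and therefore the futex
 *    0  - current does not own it
 *   <0  - fault/error while fixing up the pi_state owner
 *
 * fixup_owner() propagates this:
 *    1  - the caller holds the futex, so futex_lock_pi() must report
 *         success even if it was interrupted or timed out
 *    0  - the caller does not hold the futex
 *   <0  - error
 *
 * futex_wait_requeue_pi() maps the positive "success" value back to 0,
 * which is what its callers expect.
 */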
Fixes: c1e2f0eaf015 ("futex: Avoid violating the 10th rule of futex")
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz(a)infradead.org>
Cc: stable(a)vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng(a)huawei.com>
Reviewed-by: Wei Li <liwei391(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
kernel/futex.c | 32 ++++++++++++++++----------------
1 file changed, 16 insertions(+), 16 deletions(-)
diff --git a/kernel/futex.c b/kernel/futex.c
index 28b321e23053..ca9389bd110f 100644
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -2414,8 +2414,8 @@ static int fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q,
}
if (__rt_mutex_futex_trylock(&pi_state->pi_mutex)) {
- /* We got the lock after all, nothing to fix. */
- ret = 0;
+ /* We got the lock. pi_state is correct. Tell caller. */
+ ret = 1;
goto out_unlock;
}
@@ -2431,7 +2431,7 @@ static int fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q,
* We raced against a concurrent self; things are
* already fixed up. Nothing to do.
*/
- ret = 0;
+ ret = 1;
goto out_unlock;
}
newowner = argowner;
@@ -2477,7 +2477,7 @@ static int fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q,
raw_spin_unlock(&newowner->pi_lock);
raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
- return 0;
+ return argowner == current;
/*
* In order to reschedule or handle a page fault, we need to drop the
@@ -2519,7 +2519,7 @@ static int fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q,
* Check if someone else fixed it for us:
*/
if (pi_state->owner != oldowner) {
- ret = 0;
+ ret = argowner == current;
goto out_unlock;
}
@@ -2552,8 +2552,6 @@ static long futex_wait_restart(struct restart_block *restart);
*/
static int fixup_owner(u32 __user *uaddr, struct futex_q *q, int locked)
{
- int ret = 0;
-
if (locked) {
/*
* Got the lock. We might not be the anticipated owner if we
@@ -2564,8 +2562,8 @@ static int fixup_owner(u32 __user *uaddr, struct futex_q *q, int locked)
* stable state, anything else needs more attention.
*/
if (q->pi_state->owner != current)
- ret = fixup_pi_state_owner(uaddr, q, current);
- goto out;
+ return fixup_pi_state_owner(uaddr, q, current);
+ return 1;
}
/*
@@ -2576,10 +2574,8 @@ static int fixup_owner(u32 __user *uaddr, struct futex_q *q, int locked)
* Another speculative read; pi_state->owner == current is unstable
* but needs our attention.
*/
- if (q->pi_state->owner == current) {
- ret = fixup_pi_state_owner(uaddr, q, NULL);
- goto out;
- }
+ if (q->pi_state->owner == current)
+ return fixup_pi_state_owner(uaddr, q, NULL);
/*
* Paranoia check. If we did not take the lock, then we should not be
@@ -2592,8 +2588,7 @@ static int fixup_owner(u32 __user *uaddr, struct futex_q *q, int locked)
q->pi_state->owner);
}
-out:
- return ret ? ret : locked;
+ return 0;
}
/**
@@ -3315,7 +3310,7 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
if (q.pi_state && (q.pi_state->owner != current)) {
spin_lock(q.lock_ptr);
ret = fixup_pi_state_owner(uaddr2, &q, current);
- if (ret && rt_mutex_owner(&q.pi_state->pi_mutex) == current) {
+ if (ret < 0 && rt_mutex_owner(&q.pi_state->pi_mutex) == current) {
pi_state = q.pi_state;
get_pi_state(pi_state);
}
@@ -3325,6 +3320,11 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
*/
put_pi_state(q.pi_state);
spin_unlock(q.lock_ptr);
+ /*
+ * Adjust the return value. It's either -EFAULT or
+ * success (1) but the caller expects 0 for success.
+ */
+ ret = ret < 0 ? ret : 0;
}
} else {
struct rt_mutex *pi_mutex;
--
2.25.1
1
6

[PATCH openEuler-1.0-LTS 1/3] netfilter: clear skb->next in NF_HOOK_LIST()
by Yang Yingliang 04 Feb '21
by Yang Yingliang 04 Feb '21
04 Feb '21
From: Cong Wang <cong.wang(a)bytedance.com>
stable inclusion
from linux-4.19.161
commit 5460d62d661c0fc53bfe83493821b1dc3dc969f4
--------------------------------
NF_HOOK_LIST() uses list_del() to remove the skb from the linked list;
however, that is not sufficient, as skb->next still points to another
skb. We should just call skb_list_del_init() to clear skb->next, like
the other places that use skb lists.
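For reference, skb_list_del_init() is roughly equivalent to the
following (a simplified rendering, not a verbatim copy of
include/linux/skbuff.h):

static inline void skb_list_del_init(struct sk_buff *skb)
{
	/* Unlink from the list head... */
	__list_del_entry(&skb->list);
	/* ...and, unlike plain list_del(), also clear the skb->next
	 * pointer that segment-walking code may still follow. */
	skb->next = NULL;
}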
This has been fixed in upstream by commit ca58fbe06c54
("netfilter: add and use nf_hook_slow_list()").
Fixes: 9f17dbf04ddf ("netfilter: fix use-after-free in NF_HOOK_LIST")
Reported-by: liuzx(a)knownsec.com
Tested-by: liuzx(a)knownsec.com
Cc: Florian Westphal <fw(a)strlen.de>
Cc: Edward Cree <ecree(a)solarflare.com>
Cc: stable(a)vger.kernel.org # between 4.19 and 5.4
Signed-off-by: Cong Wang <cong.wang(a)bytedance.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
include/linux/netfilter.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/netfilter.h b/include/linux/netfilter.h
index 72cb19c3db6a..9460a5635c90 100644
--- a/include/linux/netfilter.h
+++ b/include/linux/netfilter.h
@@ -300,7 +300,7 @@ NF_HOOK_LIST(uint8_t pf, unsigned int hook, struct net *net, struct sock *sk,
INIT_LIST_HEAD(&sublist);
list_for_each_entry_safe(skb, next, head, list) {
- list_del(&skb->list);
+ skb_list_del_init(skb);
if (nf_hook(pf, hook, net, sk, skb, in, out, okfn) == 1)
list_add_tail(&skb->list, &sublist);
}
--
2.25.1
1
2

04 Feb '21
From: Mark Rutland <mark.rutland(a)arm.com>
mainline inclusion
from mainline-v5.5-rc1
commit bd8b21d3dd661658addc1cd4cc869bab11d28596
category: bugfix
bugzilla: 25285
CVE: NA
--------------------------------
When we load a module, we have to perform some special work for a couple
of named sections. To do this, we iterate over all of the module's
sections, and perform work for each section we recognize.
To make it easier to handle the unexpected absence of a section, and to
make the section-specific logic easier to read, let's factor the section
search into a helper. Similar is already done in the core module loader,
and other architectures (and ideally we'd unify these in future).
If we expect a module to have an ftrace trampoline section, but it
doesn't have one, we'll now reject loading the module. When
ARM64_MODULE_PLTS is selected, any correctly built module should have
one (and this is assumed by arm64's ftrace PLT code) and the absence of
such a section implies something has gone wrong at build time.
Subsequent patches will make use of the new helper.
Signed-off-by: Mark Rutland <mark.rutland(a)arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel(a)linaro.org>
Reviewed-by: Torsten Duwe <duwe(a)suse.de>
Tested-by: Amit Daniel Kachhap <amit.kachhap(a)arm.com>
Tested-by: Torsten Duwe <duwe(a)suse.de>
Cc: Catalin Marinas <catalin.marinas(a)arm.com>
Cc: James Morse <james.morse(a)arm.com>
Cc: Will Deacon <will(a)kernel.org>
Signed-off-by: Pu Lehui <pulehui(a)huawei.com>
Reviewed-by: Jian Cheng <cj.chengjian(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/arm64/kernel/module.c | 35 ++++++++++++++++++++++++++---------
1 file changed, 26 insertions(+), 9 deletions(-)
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index 834ecf108b53..55b00bfb704f 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -484,22 +484,39 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
return -ENOEXEC;
}
-int module_finalize(const Elf_Ehdr *hdr,
- const Elf_Shdr *sechdrs,
- struct module *me)
+static const Elf_Shdr *find_section(const Elf_Ehdr *hdr,
+ const Elf_Shdr *sechdrs,
+ const char *name)
{
const Elf_Shdr *s, *se;
const char *secstrs = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;
for (s = sechdrs, se = sechdrs + hdr->e_shnum; s < se; s++) {
- if (strcmp(".altinstructions", secstrs + s->sh_name) == 0)
- apply_alternatives_module((void *)s->sh_addr, s->sh_size);
+ if (strcmp(name, secstrs + s->sh_name) == 0)
+ return s;
+ }
+
+ return NULL;
+}
+
+int module_finalize(const Elf_Ehdr *hdr,
+ const Elf_Shdr *sechdrs,
+ struct module *me)
+{
+ const Elf_Shdr *s;
+
+ s = find_section(hdr, sechdrs, ".altinstructions");
+ if (s)
+ apply_alternatives_module((void *)s->sh_addr, s->sh_size);
+
#ifdef CONFIG_ARM64_MODULE_PLTS
- if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE) &&
- !strcmp(".text.ftrace_trampoline", secstrs + s->sh_name))
- me->arch.ftrace_trampoline = (void *)s->sh_addr;
-#endif
+ if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE)) {
+ s = find_section(hdr, sechdrs, ".text.ftrace_trampoline");
+ if (!s)
+ return -ENOEXEC;
+ me->arch.ftrace_trampoline = (void *)s->sh_addr;
}
+#endif
return 0;
}
--
2.25.1
1
3
Baruch Siach (1):
gpio: mvebu: fix pwm .get_state period calculation
Eric Biggers (1):
fs: fix lazytime expiration handling in __writeback_single_inode()
Gaurav Kohli (1):
tracing: Fix race in trace_open and buffer resize call
Greg Kroah-Hartman (1):
Linux 4.19.172
Jan Kara (1):
writeback: Drop I_DIRTY_TIME_EXPIRE
Jason Gerecke (1):
HID: wacom: Correct NULL dereference on AES pen proximity
Jean-Philippe Brucker (1):
tools: Factor HOSTCC, HOSTLD, HOSTAR definitions
Mikulas Patocka (1):
dm integrity: conditionally disable "recalculate" feature
Thomas Gleixner (18):
futex: Move futex exit handling into futex code
futex: Replace PF_EXITPIDONE with a state
exit/exec: Seperate mm_release()
futex: Split futex_mm_release() for exit/exec
futex: Set task::futex_state to DEAD right after handling futex exit
futex: Mark the begin of futex exit explicitly
futex: Sanitize exit state handling
futex: Provide state handling for exec() as well
futex: Add mutex around futex exit
futex: Provide distinct return value when owner is exiting
futex: Prevent exit livelock
futex: Ensure the correct return value from futex_lock_pi()
futex: Replace pointless printk in fixup_owner()
futex: Provide and use pi_state_update_owner()
rtmutex: Remove unused argument from rt_mutex_proxy_unlock()
futex: Use pi_state_update_owner() in put_pi_state()
futex: Simplify fixup_pi_state_owner()
futex: Handle faults correctly for PI futexes
Wang Hai (1):
Revert "mm/slub: fix a memory leak in sysfs_slab_add()"
Documentation/device-mapper/dm-integrity.txt | 7 +
Makefile | 2 +-
drivers/gpio/gpio-mvebu.c | 25 +-
drivers/hid/wacom_sys.c | 7 +-
drivers/hid/wacom_wac.h | 2 +-
drivers/md/dm-integrity.c | 24 +-
fs/exec.c | 2 +-
fs/ext4/inode.c | 2 +-
fs/fs-writeback.c | 36 +-
fs/xfs/xfs_trans_inode.c | 4 +-
include/linux/compat.h | 2 -
include/linux/fs.h | 1 -
include/linux/futex.h | 40 +-
include/linux/sched.h | 3 +-
include/linux/sched/mm.h | 6 +-
include/trace/events/writeback.h | 1 -
kernel/exit.c | 30 +-
kernel/fork.c | 40 +-
kernel/futex.c | 485 +++++++++++++------
kernel/locking/rtmutex.c | 3 +-
kernel/locking/rtmutex_common.h | 3 +-
kernel/trace/ring_buffer.c | 4 +
mm/slub.c | 4 +-
tools/build/Makefile | 4 -
tools/objtool/Makefile | 9 -
tools/perf/Makefile.perf | 4 -
tools/power/acpi/Makefile.config | 1 -
tools/scripts/Makefile.include | 10 +
28 files changed, 459 insertions(+), 302 deletions(-)
--
2.25.1
1
27

04 Feb '21
From: Florian Westphal <fw(a)strlen.de>
mainline inclusion
from mainline-v5.5-rc1
commit ca58fbe06c54795f00db79e447f94c2028d30124
category: bugfix
bugzilla: NA
CVE: CVE-2021-20177
--------------------------------
At this time, the NF_HOOK_LIST() macro iterates the list and then calls
nf_hook() for each individual skb.
This makes it so the entire list is passed into the netfilter core.
The advantage is that we only need to fetch the rule blob once per list
instead of per-skb.
NF_HOOK_LIST now only works for ipv4 and ipv6, as those are the only
callers.
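For context, the ipv4 call site looks roughly like this in this era's
net/ipv4/ip_input.c (paraphrased, details elided):

/*
 * ip_sublist_rcv() hands the whole per-NAPI sublist to the PRE_ROUTING
 * hook in a single call; with this patch the netfilter core then
 * fetches the rule blob once for the entire list instead of once per
 * skb.
 */
static void ip_sublist_rcv(struct list_head *head, struct net_device *dev,
			   struct net *net)
{
	NF_HOOK_LIST(NFPROTO_IPV4, NF_INET_PRE_ROUTING, net, NULL,
		     head, dev, NULL, ip_rcv_finish);
	ip_list_rcv_finish(net, NULL, head);
}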
v2: use skb_list_del_init() instead of list_del (Edward Cree)
Signed-off-by: Florian Westphal <fw(a)strlen.de>
Acked-by: Edward Cree <ecree(a)solarflare.com>
Signed-off-by: Pablo Neira Ayuso <pablo(a)netfilter.org>
Conflicts:
include/linux/netfilter.h
net/netfilter/core.c
[yyl: adjust context]
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Yue Haibing <yuehaibing(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
include/linux/netfilter.h | 41 +++++++++++++++++++++++++++++----------
net/netfilter/core.c | 20 +++++++++++++++++++
2 files changed, 51 insertions(+), 10 deletions(-)
diff --git a/include/linux/netfilter.h b/include/linux/netfilter.h
index 9460a5635c90..8bf57abee9e4 100644
--- a/include/linux/netfilter.h
+++ b/include/linux/netfilter.h
@@ -183,6 +183,8 @@ extern struct static_key nf_hooks_needed[NFPROTO_NUMPROTO][NF_MAX_HOOKS];
int nf_hook_slow(struct sk_buff *skb, struct nf_hook_state *state,
const struct nf_hook_entries *e, unsigned int i);
+void nf_hook_slow_list(struct list_head *head, struct nf_hook_state *state,
+ const struct nf_hook_entries *e);
/**
* nf_hook - call a netfilter hook
*
@@ -295,17 +297,36 @@ NF_HOOK_LIST(uint8_t pf, unsigned int hook, struct net *net, struct sock *sk,
struct list_head *head, struct net_device *in, struct net_device *out,
int (*okfn)(struct net *, struct sock *, struct sk_buff *))
{
- struct sk_buff *skb, *next;
- struct list_head sublist;
-
- INIT_LIST_HEAD(&sublist);
- list_for_each_entry_safe(skb, next, head, list) {
- skb_list_del_init(skb);
- if (nf_hook(pf, hook, net, sk, skb, in, out, okfn) == 1)
- list_add_tail(&skb->list, &sublist);
+ struct nf_hook_entries *hook_head = NULL;
+
+#ifdef CONFIG_JUMP_LABEL
+ if (__builtin_constant_p(pf) &&
+ __builtin_constant_p(hook) &&
+ !static_key_false(&nf_hooks_needed[pf][hook]))
+ return;
+#endif
+
+ rcu_read_lock();
+ switch (pf) {
+ case NFPROTO_IPV4:
+ hook_head = rcu_dereference(net->nf.hooks_ipv4[hook]);
+ break;
+ case NFPROTO_IPV6:
+ hook_head = rcu_dereference(net->nf.hooks_ipv6[hook]);
+ break;
+ default:
+ WARN_ON_ONCE(1);
+ break;
}
- /* Put passed packets back on main list */
- list_splice(&sublist, head);
+
+ if (hook_head) {
+ struct nf_hook_state state;
+
+ nf_hook_state_init(&state, hook, pf, in, out, sk, net, okfn);
+
+ nf_hook_slow_list(head, &state, hook_head);
+ }
+ rcu_read_unlock();
}
/* Call setsockopt() */
diff --git a/net/netfilter/core.c b/net/netfilter/core.c
index 93aaec3a54ec..3f0bdc728f89 100644
--- a/net/netfilter/core.c
+++ b/net/netfilter/core.c
@@ -536,6 +536,26 @@ int nf_hook_slow(struct sk_buff *skb, struct nf_hook_state *state,
EXPORT_SYMBOL(nf_hook_slow);
+void nf_hook_slow_list(struct list_head *head, struct nf_hook_state *state,
+ const struct nf_hook_entries *e)
+{
+ struct sk_buff *skb, *next;
+ struct list_head sublist;
+ int ret;
+
+ INIT_LIST_HEAD(&sublist);
+
+ list_for_each_entry_safe(skb, next, head, list) {
+ skb_list_del_init(skb);
+ ret = nf_hook_slow(skb, state, e, 0);
+ if (ret == 1)
+ list_add_tail(&skb->list, &sublist);
+ }
+ /* Put passed packets back on main list */
+ list_splice(&sublist, head);
+}
+EXPORT_SYMBOL(nf_hook_slow_list);
+
int skb_make_writable(struct sk_buff *skb, unsigned int writable_len)
{
if (writable_len > skb->len)
--
2.25.1
1
1

[PATCH openEuler-1.0-LTS] scsi: target: Fix XCOPY NAA identifier lookup
by Yang Yingliang 04 Feb '21
by Yang Yingliang 04 Feb '21
04 Feb '21
From: David Disseldorp <ddiss(a)suse.de>
stable inclusion
from linux-4.19.167
commit fff1180d24e68d697f98642d71444316036a81ff
CVE: CVE-2020-28374
--------------------------------
commit 2896c93811e39d63a4d9b63ccf12a8fbc226e5e4 upstream.
When attempting to match EXTENDED COPY CSCD descriptors with corresponding
se_devices, target_xcopy_locate_se_dev_e4() currently iterates over LIO's
global devices list which includes all configured backstores.
This change ensures that only initiator-accessible backstores are
considered during CSCD descriptor lookup, according to the session's
se_node_acl LUN list.
To avoid LUN removal race conditions, device pinning is changed from being
configfs based to instead using the se_node_acl lun_ref.
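The pinning rule that replaces the configfs dependency can be sketched
as follows (xcopy_pin_lun()/xcopy_unpin_lun() are illustrative names,
not functions from the patch; the percpu_ref calls are the kernel's
real API):

#include <linux/percpu-refcount.h>

/*
 * Pin a LUN for the duration of an XCOPY: percpu_ref_tryget_live()
 * fails once the ref has started dying, i.e. once LUN removal has
 * begun, so a successful get guarantees the backing device stays
 * valid until the matching put.
 */
static bool xcopy_pin_lun(struct se_lun *lun, struct percpu_ref **ref_out)
{
	if (!percpu_ref_tryget_live(&lun->lun_ref))
		return false;	/* LUN going away; skip this candidate */

	*ref_out = &lun->lun_ref;
	return true;
}

static void xcopy_unpin_lun(struct percpu_ref *ref)
{
	percpu_ref_put(ref);	/* drop the pin once the copy completes */
}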
Reference: CVE-2020-28374
Fixes: cbf031f425fd ("target: Add support for EXTENDED_COPY copy offload emulation")
Reviewed-by: Lee Duncan <lduncan(a)suse.com>
Signed-off-by: David Disseldorp <ddiss(a)suse.de>
Signed-off-by: Mike Christie <michael.christie(a)oracle.com>
Signed-off-by: Martin K. Petersen <martin.petersen(a)oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/target/target_core_xcopy.c | 119 +++++++++++++++++------------
drivers/target/target_core_xcopy.h | 1 +
2 files changed, 71 insertions(+), 49 deletions(-)
diff --git a/drivers/target/target_core_xcopy.c b/drivers/target/target_core_xcopy.c
index 7cdb5d7f6538..1709b8a99bd7 100644
--- a/drivers/target/target_core_xcopy.c
+++ b/drivers/target/target_core_xcopy.c
@@ -55,60 +55,83 @@ static int target_xcopy_gen_naa_ieee(struct se_device *dev, unsigned char *buf)
return 0;
}
-struct xcopy_dev_search_info {
- const unsigned char *dev_wwn;
- struct se_device *found_dev;
-};
-
+/**
+ * target_xcopy_locate_se_dev_e4_iter - compare XCOPY NAA device identifiers
+ *
+ * @se_dev: device being considered for match
+ * @dev_wwn: XCOPY requested NAA dev_wwn
+ * @return: 1 on match, 0 on no-match
+ */
static int target_xcopy_locate_se_dev_e4_iter(struct se_device *se_dev,
- void *data)
+ const unsigned char *dev_wwn)
{
- struct xcopy_dev_search_info *info = data;
unsigned char tmp_dev_wwn[XCOPY_NAA_IEEE_REGEX_LEN];
int rc;
- if (!se_dev->dev_attrib.emulate_3pc)
+ if (!se_dev->dev_attrib.emulate_3pc) {
+ pr_debug("XCOPY: emulate_3pc disabled on se_dev %p\n", se_dev);
return 0;
+ }
memset(&tmp_dev_wwn[0], 0, XCOPY_NAA_IEEE_REGEX_LEN);
target_xcopy_gen_naa_ieee(se_dev, &tmp_dev_wwn[0]);
- rc = memcmp(&tmp_dev_wwn[0], info->dev_wwn, XCOPY_NAA_IEEE_REGEX_LEN);
- if (rc != 0)
- return 0;
-
- info->found_dev = se_dev;
- pr_debug("XCOPY 0xe4: located se_dev: %p\n", se_dev);
-
- rc = target_depend_item(&se_dev->dev_group.cg_item);
+ rc = memcmp(&tmp_dev_wwn[0], dev_wwn, XCOPY_NAA_IEEE_REGEX_LEN);
if (rc != 0) {
- pr_err("configfs_depend_item attempt failed: %d for se_dev: %p\n",
- rc, se_dev);
- return rc;
+ pr_debug("XCOPY: skip non-matching: %*ph\n",
+ XCOPY_NAA_IEEE_REGEX_LEN, tmp_dev_wwn);
+ return 0;
}
+ pr_debug("XCOPY 0xe4: located se_dev: %p\n", se_dev);
- pr_debug("Called configfs_depend_item for se_dev: %p se_dev->se_dev_group: %p\n",
- se_dev, &se_dev->dev_group);
return 1;
}
-static int target_xcopy_locate_se_dev_e4(const unsigned char *dev_wwn,
- struct se_device **found_dev)
+static int target_xcopy_locate_se_dev_e4(struct se_session *sess,
+ const unsigned char *dev_wwn,
+ struct se_device **_found_dev,
+ struct percpu_ref **_found_lun_ref)
{
- struct xcopy_dev_search_info info;
- int ret;
-
- memset(&info, 0, sizeof(info));
- info.dev_wwn = dev_wwn;
-
- ret = target_for_each_device(target_xcopy_locate_se_dev_e4_iter, &info);
- if (ret == 1) {
- *found_dev = info.found_dev;
- return 0;
- } else {
- pr_debug_ratelimited("Unable to locate 0xe4 descriptor for EXTENDED_COPY\n");
- return -EINVAL;
+ struct se_dev_entry *deve;
+ struct se_node_acl *nacl;
+ struct se_lun *this_lun = NULL;
+ struct se_device *found_dev = NULL;
+
+ /* cmd with NULL sess indicates no associated $FABRIC_MOD */
+ if (!sess)
+ goto err_out;
+
+ pr_debug("XCOPY 0xe4: searching for: %*ph\n",
+ XCOPY_NAA_IEEE_REGEX_LEN, dev_wwn);
+
+ nacl = sess->se_node_acl;
+ rcu_read_lock();
+ hlist_for_each_entry_rcu(deve, &nacl->lun_entry_hlist, link) {
+ struct se_device *this_dev;
+ int rc;
+
+ this_lun = rcu_dereference(deve->se_lun);
+ this_dev = rcu_dereference_raw(this_lun->lun_se_dev);
+
+ rc = target_xcopy_locate_se_dev_e4_iter(this_dev, dev_wwn);
+ if (rc) {
+ if (percpu_ref_tryget_live(&this_lun->lun_ref))
+ found_dev = this_dev;
+ break;
+ }
}
+ rcu_read_unlock();
+ if (found_dev == NULL)
+ goto err_out;
+
+ pr_debug("lun_ref held for se_dev: %p se_dev->se_dev_group: %p\n",
+ found_dev, &found_dev->dev_group);
+ *_found_dev = found_dev;
+ *_found_lun_ref = &this_lun->lun_ref;
+ return 0;
+err_out:
+ pr_debug_ratelimited("Unable to locate 0xe4 descriptor for EXTENDED_COPY\n");
+ return -EINVAL;
}
static int target_xcopy_parse_tiddesc_e4(struct se_cmd *se_cmd, struct xcopy_op *xop,
@@ -255,12 +278,16 @@ static int target_xcopy_parse_target_descriptors(struct se_cmd *se_cmd,
switch (xop->op_origin) {
case XCOL_SOURCE_RECV_OP:
- rc = target_xcopy_locate_se_dev_e4(xop->dst_tid_wwn,
- &xop->dst_dev);
+ rc = target_xcopy_locate_se_dev_e4(se_cmd->se_sess,
+ xop->dst_tid_wwn,
+ &xop->dst_dev,
+ &xop->remote_lun_ref);
break;
case XCOL_DEST_RECV_OP:
- rc = target_xcopy_locate_se_dev_e4(xop->src_tid_wwn,
- &xop->src_dev);
+ rc = target_xcopy_locate_se_dev_e4(se_cmd->se_sess,
+ xop->src_tid_wwn,
+ &xop->src_dev,
+ &xop->remote_lun_ref);
break;
default:
pr_err("XCOPY CSCD descriptor IDs not found in CSCD list - "
@@ -412,18 +439,12 @@ static int xcopy_pt_get_cmd_state(struct se_cmd *se_cmd)
static void xcopy_pt_undepend_remotedev(struct xcopy_op *xop)
{
- struct se_device *remote_dev;
-
if (xop->op_origin == XCOL_SOURCE_RECV_OP)
- remote_dev = xop->dst_dev;
+ pr_debug("putting dst lun_ref for %p\n", xop->dst_dev);
else
- remote_dev = xop->src_dev;
-
- pr_debug("Calling configfs_undepend_item for"
- " remote_dev: %p remote_dev->dev_group: %p\n",
- remote_dev, &remote_dev->dev_group.cg_item);
+ pr_debug("putting src lun_ref for %p\n", xop->src_dev);
- target_undepend_item(&remote_dev->dev_group.cg_item);
+ percpu_ref_put(xop->remote_lun_ref);
}
static void xcopy_pt_release_cmd(struct se_cmd *se_cmd)
diff --git a/drivers/target/target_core_xcopy.h b/drivers/target/target_core_xcopy.h
index 26ba4c3c9cff..974bc1e19ff2 100644
--- a/drivers/target/target_core_xcopy.h
+++ b/drivers/target/target_core_xcopy.h
@@ -29,6 +29,7 @@ struct xcopy_op {
struct se_device *dst_dev;
unsigned char dst_tid_wwn[XCOPY_NAA_IEEE_REGEX_LEN];
unsigned char local_dev_wwn[XCOPY_NAA_IEEE_REGEX_LEN];
+ struct percpu_ref *remote_lun_ref;
sector_t src_lba;
sector_t dst_lba;
--
2.25.1
1
0

Alex Leibovich (1):
mmc: sdhci-xenon: fix 1.8v regulator stabilization
Alexander Lobakin (1):
skbuff: back tiny skbs with kmalloc() in __netdev_alloc_skb() too
Alexander Shishkin (1):
intel_th: pci: Add Alder Lake-P support
Arnd Bergmann (1):
scsi: megaraid_sas: Fix MEGASAS_IOC_FIRMWARE regression
Ben Skeggs (5):
drm/nouveau/bios: fix issue shadowing expansion ROMs
drm/nouveau/privring: ack interrupts the same way as RM
drm/nouveau/i2c/gm200: increase width of aux semaphore owner fields
drm/nouveau/mmu: fix vram heap sizing
drm/nouveau/kms/nv50-: fix case where notifier buffer is at offset 0
Can Guo (1):
scsi: ufs: Correct the LUN used in eh_device_reset_handler() callback
Cezary Rojewski (1):
ASoC: Intel: haswell: Add missing pm_ops
Damien Le Moal (1):
riscv: Fix kernel time_init()
Dan Carpenter (1):
net: dsa: b53: fix an off by one in checking "vlan->vid"
David Woodhouse (1):
xen: Fix event channel callback via INTX/GSI
Eric Dumazet (2):
net_sched: avoid shift-out-of-bounds in tcindex_set_parms()
net_sched: reject silly cell_log in qdisc_get_rtab()
Eugene Korenevsky (1):
ehci: fix EHCI host controller initialization sequence
Geert Uytterhoeven (1):
sh_eth: Fix power down vs. is_opened flag ordering
Greg Kroah-Hartman (1):
Linux 4.19.171
Guillaume Nault (2):
netfilter: rpfilter: mask ecn bits before fib lookup
udp: mask TOS bits in udp_v4_early_demux()
Hangbin Liu (1):
selftests: net: fib_tests: remove duplicate log test
Hannes Reinecke (1):
dm: avoid filesystem lookup in dm_get_dev_t()
Hans de Goede (2):
ACPI: scan: Make acpi_bus_get_device() clear return pointer on error
platform/x86: intel-vbtn: Drop HP Stream x360 Convertible PC 11 from
allow-list
JC Kuo (1):
xhci: tegra: Delay for disabling LFPS detector
Josef Bacik (1):
btrfs: fix lockdep splat in btrfs_recover_relocation
Lars-Peter Clausen (1):
iio: ad5504: Fix setting power-down state
Lecopzer Chen (2):
kasan: fix unaligned address is unhandled in kasan_remove_zero_shadow
kasan: fix incorrect arguments passing in kasan_add_zero_shadow
Longfang Liu (1):
USB: ehci: fix an interrupt calltrace error
Mathias Kresin (1):
irqchip/mips-cpu: Set IPI domain parent chip
Mathias Nyman (1):
xhci: make sure TRB is fully written before giving it to the
controller
Matteo Croce (2):
ipv6: create multicast route with RTPROT_KERNEL
ipv6: set multicast flag on the multicast route
Mikko Perttunen (1):
i2c: bpmp-tegra: Ignore unknown I2C_M flags
Mikulas Patocka (1):
dm integrity: fix a crash if "recalculate" used without
"internal_hash"
Necip Fazil Yildiran (1):
sh: dma: fix kconfig dependency for G2_DMA
Nilesh Javali (1):
scsi: qedi: Correct max length of CHAP secret
Pali Rohár (1):
serial: mvebu-uart: fix tx lost characters at power off
Pan Bian (1):
drm/atomic: put state on error path
Patrik Jakobsson (1):
usb: bdc: Make bdc pci driver depend on BROKEN
Peter Collingbourne (1):
mmc: core: don't initialize block size from ext_csd if not present
Peter Geis (1):
clk: tegra30: Add hda clock default rates to clock driver
Rafael J. Wysocki (1):
driver core: Extend device_is_dependent()
Ryan Chen (1):
usb: gadget: aspeed: fix stop dma register setting.
Seth Miller (1):
HID: Ignore battery for Elan touchscreen on ASUS UX550
Takashi Iwai (2):
ALSA: seq: oss: Fix missing error check in
snd_seq_oss_synth_make_info()
ALSA: hda/via: Add minimum mute flag
Tariq Toukan (1):
net: Disable NETIF_F_HW_TLS_RX when RXCSUM is disabled
Thinh Nguyen (1):
usb: udc: core: Use lock when write to soft_connect
Vincent Mailhol (3):
can: dev: can_restart: fix use after free bug
can: vxcan: vxcan_xmit: fix use after free bug
can: peak_usb: fix use after free bugs
Vladimir Oltean (1):
net: mscc: ocelot: allow offloading of bridge on top of LAG
Wang Hui (1):
stm class: Fix module init return on allocation failure
Wolfram Sang (1):
i2c: octeon: check correct size of maximum RECV_LEN packet
Makefile | 2 +-
arch/arm/xen/enlighten.c | 2 +-
arch/riscv/kernel/time.c | 3 +
arch/sh/drivers/dma/Kconfig | 3 +-
drivers/acpi/scan.c | 2 +
drivers/base/core.c | 17 +++-
drivers/clk/tegra/clk-tegra30.c | 2 +
drivers/gpu/drm/drm_atomic_helper.c | 2 +-
drivers/gpu/drm/nouveau/dispnv50/disp.c | 4 +-
drivers/gpu/drm/nouveau/dispnv50/disp.h | 2 +-
drivers/gpu/drm/nouveau/dispnv50/wimmc37b.c | 2 +-
.../gpu/drm/nouveau/nvkm/subdev/bios/shadow.c | 2 +-
.../drm/nouveau/nvkm/subdev/i2c/auxgm200.c | 8 +-
.../gpu/drm/nouveau/nvkm/subdev/ibus/gf100.c | 10 ++-
.../gpu/drm/nouveau/nvkm/subdev/ibus/gk104.c | 10 ++-
.../gpu/drm/nouveau/nvkm/subdev/mmu/base.c | 6 +-
drivers/hid/hid-ids.h | 1 +
drivers/hid/hid-input.c | 2 +
drivers/hwtracing/intel_th/pci.c | 5 ++
drivers/hwtracing/stm/heartbeat.c | 6 +-
drivers/i2c/busses/i2c-octeon-core.c | 2 +-
drivers/i2c/busses/i2c-tegra-bpmp.c | 2 +-
drivers/iio/dac/ad5504.c | 4 +-
drivers/irqchip/irq-mips-cpu.c | 7 ++
drivers/md/dm-integrity.c | 6 ++
drivers/md/dm-table.c | 15 +++-
drivers/mmc/core/queue.c | 4 +-
drivers/mmc/host/sdhci-xenon.c | 7 +-
drivers/net/can/dev.c | 4 +-
drivers/net/can/usb/peak_usb/pcan_usb_fd.c | 8 +-
drivers/net/can/vxcan.c | 6 +-
drivers/net/dsa/b53/b53_common.c | 2 +-
drivers/net/ethernet/mscc/ocelot.c | 4 +-
drivers/net/ethernet/renesas/sh_eth.c | 4 +-
drivers/platform/x86/intel-vbtn.c | 6 --
drivers/scsi/megaraid/megaraid_sas_base.c | 6 +-
drivers/scsi/qedi/qedi_main.c | 4 +-
drivers/scsi/ufs/ufshcd.c | 11 +--
drivers/tty/serial/mvebu-uart.c | 10 ++-
drivers/usb/gadget/udc/aspeed-vhub/epn.c | 5 +-
drivers/usb/gadget/udc/bdc/Kconfig | 2 +-
drivers/usb/gadget/udc/core.c | 13 ++-
drivers/usb/host/ehci-hcd.c | 12 +++
drivers/usb/host/ehci-hub.c | 3 +
drivers/usb/host/xhci-ring.c | 2 +
drivers/usb/host/xhci-tegra.c | 7 ++
drivers/xen/events/events_base.c | 10 ---
drivers/xen/platform-pci.c | 1 -
drivers/xen/xenbus/xenbus.h | 1 +
drivers/xen/xenbus/xenbus_comms.c | 8 --
drivers/xen/xenbus/xenbus_probe.c | 81 +++++++++++++++----
fs/btrfs/volumes.c | 2 +
include/xen/xenbus.h | 2 +-
mm/kasan/kasan_init.c | 23 +++---
net/core/dev.c | 5 ++
net/core/skbuff.c | 6 +-
net/ipv4/netfilter/ipt_rpfilter.c | 2 +-
net/ipv4/udp.c | 3 +-
net/ipv6/addrconf.c | 3 +-
net/sched/cls_tcindex.c | 8 +-
net/sched/sch_api.c | 3 +-
sound/core/seq/oss/seq_oss_synth.c | 3 +-
sound/pci/hda/patch_via.c | 1 +
sound/soc/intel/boards/haswell.c | 1 +
tools/testing/selftests/net/fib_tests.sh | 1 -
65 files changed, 284 insertions(+), 127 deletions(-)
--
2.25.1
1
57
Hi,
In Linux 5.8, Nick submitted a patch set to fix some errors in symbol search when profiling a JIT-compiled OpenJDK application with perf.
The patchset is:
1e4bd2ae4564 perf jit: Fix inaccurate DWARF line table
7d7e503cac31 perf jvmti: Remove redundant jitdump line table entries
0bdf31811be0 perf jvmti: Fix demangling Java symbols
525c821de0a6 perf tests: Add test for the java demangler
959f8ed4c1a8 perf jvmti: Do not report error when missing debug information
953e92402a52 perf jvmti: Fix jitdump for methods without debug info
Because making perf work is very useful for performance analysis of JITed Java applications, I'd like to support this patchset in the openEuler release.
The coming 21.03 should be the first target, as that release is upgraded to a 5.8+ kernel.
We can probably backport this patchset to 20.03 LTS if needed.
To introduce the fix into 21.03, kernel.spec should be changed to add the lines that generate the JVMTI shared library.
There are some things I am not sure:
1. kernel.spec is in the src-openeuler project; do I submit the change as a PR or by sending a patch directly to the mailing list?
2. Is there a rolling release of openEuler, like Tumbleweed? Then we could run some tests with the daily ISO before the stable 21.03.
And one more question: has a build project been created for version 21.03 in the openEuler OBS?
Is it this one?
https://build.openeuler.org/package/show/openEuler:Mainline/kernel
Thanks,
Erik
3
5
Andrey Zhizhikin (1):
rndis_host: set proper input size for OID_GEN_PHYSICAL_MEDIUM request
Arnd Bergmann (1):
crypto: x86/crc32c - fix building with clang ias
Aya Levin (1):
net: ipv6: Validate GSO SKB before finish IPv6 processing
Baptiste Lepers (2):
udp: Prevent reuseport_select_sock from reading uninitialized socks
rxrpc: Call state should be read with READ_ONCE() under some
circumstances
David Howells (1):
rxrpc: Fix handling of an unsupported token type in rxrpc_read()
David Wu (1):
net: stmmac: Fixed mtu channged by cache aligned
Eric Dumazet (1):
net: avoid 32 x truesize under-estimation for tiny skbs
Greg Kroah-Hartman (1):
Linux 4.19.170
Hamish Martin (1):
usb: ohci: Make distrust_firmware param default to false
Hoang Le (1):
tipc: fix NULL deref in tipc_link_xmit()
Jakub Kicinski (1):
net: sit: unregister_netdevice on newlink's error path
Jason A. Donenfeld (2):
net: introduce skb_list_walk_safe for skb segment walking
net: skbuff: disambiguate argument and member for skb_list_walk_safe
helper
Manish Chopra (1):
netxen_nic: fix MSI/MSI-x interrupts
Michael Hennerich (1):
spi: cadence: cache reference clock rate during probe
Mikulas Patocka (1):
dm integrity: fix flush with external metadata device
Petr Machata (2):
net: dcb: Validate netlink message in DCB handler
net: dcb: Accept RTM_GETDCB messages carrying set-like DCB commands
Stefan Chulski (1):
net: mvpp2: Remove Pause and Asym_Pause support
Will Deacon (1):
compiler.h: Raise minimum version of GCC to 5.1 for arm64
Willem de Bruijn (1):
esp: avoid unneeded kmap_atomic call
Makefile | 2 +-
arch/x86/crypto/crc32c-pcl-intel-asm_64.S | 2 +-
drivers/md/dm-bufio.c | 6 +++
drivers/md/dm-integrity.c | 50 +++++++++++++++++--
.../net/ethernet/marvell/mvpp2/mvpp2_main.c | 2 -
.../ethernet/qlogic/netxen/netxen_nic_main.c | 7 +--
.../net/ethernet/stmicro/stmmac/stmmac_main.c | 3 +-
drivers/net/usb/rndis_host.c | 2 +-
drivers/spi/spi-cadence.c | 6 ++-
drivers/usb/host/ohci-hcd.c | 2 +-
include/linux/compiler-gcc.h | 6 +++
include/linux/dm-bufio.h | 1 +
include/linux/skbuff.h | 5 ++
net/core/skbuff.c | 9 +++-
net/core/sock_reuseport.c | 2 +-
net/dcb/dcbnl.c | 2 +
net/ipv4/esp4.c | 7 +--
net/ipv6/esp6.c | 7 +--
net/ipv6/ip6_output.c | 40 ++++++++++++++-
net/ipv6/sit.c | 5 +-
net/rxrpc/input.c | 2 +-
net/rxrpc/key.c | 6 ++-
net/tipc/link.c | 9 +++-
23 files changed, 141 insertions(+), 42 deletions(-)
--
2.25.1
1
23

28 Jan '21
> I think you should move the VM_CHECKNODE check into set_vm_checknode
> set_vm_checknode(vm_flags_t *vm_flags, unsigned long flags)
> {
> if (flags & VM_CHECKNODE && is_set_cdmmask())
> *vm_flags |= VM_CHECKNODE | ((((flags >> MAP_HUGE_SHIFT) &
> MAP_HUGE_MASK) << CHECKNODE_BITS) & CHECKNODE_MASK);
> }
MAP_CHECKNODE has been checked before the set_vm_checknode() function is called.
Maybe we can move the check into set_vm_checknode(). What do you think?
mm/mmap.c
if (flags & MAP_CHECKNODE)
set_vm_checknode(&vm_flags, flags);
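A minimal sketch of that suggestion, with the flag test folded into
the helper (is_set_cdmmask(), CHECKNODE_BITS and CHECKNODE_MASK come
from the openEuler-specific series; this is an illustration, not the
final patch):

/* Hypothetical shape of the helper with the check moved inside, so
 * __do_mmap() can call it unconditionally. */
static void set_vm_checknode(vm_flags_t *vm_flags, unsigned long flags)
{
	if (!(flags & MAP_CHECKNODE) || !is_set_cdmmask())
		return;

	*vm_flags |= VM_CHECKNODE |
		     ((((flags >> MAP_HUGE_SHIFT) & MAP_HUGE_MASK)
		       << CHECKNODE_BITS) & CHECKNODE_MASK);
}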
Thanks,
Bixuan Cui
1
0

[PATCH kernel-4.19] clocksource/drivers/arch_timer: Fix vdso_fix compile error for arm32
by Yang Yingliang 28 Jan '21
by Yang Yingliang 28 Jan '21
28 Jan '21
From: Hanjun Guo <guohanjun(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: 47461
CVE: NA
-------------------------------------------------
We got a compile error:
drivers/clocksource/arm_arch_timer.c: In function
'arch_counter_register':
drivers/clocksource/arm_arch_timer.c:1009:31: error: 'struct
arch_clocksource_data' has no member named 'vdso_fix'
1009 | clocksource_counter.archdata.vdso_fix = vdso_fix;
| ^
make[3]: ***
[/builds/1mzfdQzleCy69KZFb5qHNSEgabZ/scripts/Makefile.build:303:
drivers/clocksource/arm_arch_timer.o] Error 1
make[3]: Target '__build' not remade because of errors.
Fix it by guarding vdso_fix with #ifdef CONFIG_ARM64 .. #endif
Signed-off-by: Hanjun Guo <guohanjun(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/clocksource/arm_arch_timer.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c
index d72ade5c70801..6847a5fe13fde 100644
--- a/drivers/clocksource/arm_arch_timer.c
+++ b/drivers/clocksource/arm_arch_timer.c
@@ -1010,7 +1010,9 @@ static void __init arch_counter_register(unsigned type)
arch_timer_read_counter = arch_counter_get_cntpct;
clocksource_counter.archdata.vdso_direct = vdso_default;
+#ifdef CONFIG_ARM64
clocksource_counter.archdata.vdso_fix = vdso_fix;
+#endif
} else {
arch_timer_read_counter = arch_counter_get_cntvct_mem;
}
--
2.25.1

28 Jan '21
From: miaoyubo <miaoyubo(a)huawei.com>
virt inclusion
category: bugfix
bugzilla: 47428
CVE: NA
Disable PUD_SIZE huge mapping on 161x series boards and only
enable it on 1620CS, due to the low performance of cache maintenance
and the complex implementation required to pre-set up the stage2
page table with PUD huge pages, which could even lead to a hard lockup.
Signed-off-by: Zenghui Yu <yuzenghui(a)huawei.com>
Signed-off-by: miaoyubo <miaoyubo(a)huawei.com>
Reviewed-by: zhanghailiang <zhang.zhanghailiang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
virt/kvm/arm/mmu.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 3a74cf59f5c85..b030bd3dce52c 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1763,6 +1763,13 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
if (vma_pagesize == PMD_SIZE ||
(vma_pagesize == PUD_SIZE && kvm_stage2_has_pmd(kvm)))
gfn = (fault_ipa & huge_page_mask(hstate_vma(vma))) >> PAGE_SHIFT;
+
+ /* Only enable PUD_SIZE huge mapping on 1620 series boards */
+ if (vma_pagesize == PUD_SIZE && !kvm_ncsnp_support) {
+ vma_pagesize = PMD_SIZE;
+ gfn = (fault_ipa & PMD_MASK) >> PAGE_SHIFT;
+ }
+
up_read(&current->mm->mmap_sem);
/* We need minimum second+third level pages */
--
2.25.1
Akilesh Kailash (1):
dm snapshot: flush merged data before committing metadata
Al Viro (1):
dump_common_audit_data(): fix racy accesses to ->d_name
Alexander Lobakin (1):
MIPS: relocatable: fix possible boot hangup with KASLR enabled
Arnd Bergmann (2):
misdn: dsp: select CONFIG_BITREVERSE
ARM: picoxcell: fix missing interrupt-parent properties
Craig Tatlor (1):
drm/msm: Call msm_init_vram before binding the gpu
Dan Carpenter (1):
ASoC: Intel: fix error code cnl_set_dsp_D0()
Dave Wysochanski (1):
NFS4: Fix use-after-free in trace_event_raw_event_nfs4_set_lock
Dexuan Cui (1):
ACPI: scan: Harden acpi_device_add() against device ID overflows
Dinghao Liu (2):
RDMA/usnic: Fix memleak in find_free_vf_and_create_qp_grp
netfilter: nf_nat: Fix memleak in nf_nat_init
Filipe Manana (1):
btrfs: fix transaction leak and crash after RO remount caused by
qgroup rescan
Geert Uytterhoeven (2):
ALSA: firewire-tascam: Fix integer overflow in midi_port_work()
ALSA: fireface: Fix integer overflow in transmit_midi_msg()
Greg Kroah-Hartman (1):
Linux 4.19.169
Jan Kara (2):
bfq: Fix computation of shallow depth
ext4: fix superblock checksum failure when setting password salt
Jann Horn (1):
mm, slub: consider rest of partial list if acquire_slab() fails
Jerome Brunet (1):
ASoC: meson: axg-tdm-interface: fix loopback
Jesper Dangaard Brouer (1):
netfilter: conntrack: fix reading nf_conntrack_buckets
Leon Schuermann (1):
r8152: Add Lenovo Powered USB-C Travel Hub
Mark Bloch (1):
RDMA/mlx5: Fix wrong free of blue flame register on error
Masahiro Yamada (3):
ARC: build: remove non-existing bootpImage from KBUILD_IMAGE
ARC: build: add uImage.lzma to the top-level target
ARC: build: add boot_targets to PHONY
Masami Hiramatsu (1):
tracing/kprobes: Do the notrace functions check without kprobes on
ftrace
Miaohe Lin (1):
mm/hugetlb: fix potential missing huge page size info
Michael Ellerman (1):
net: ethernet: fs_enet: Add missing MODULE_LICENSE
Mike Snitzer (1):
dm: eliminate potential source of excessive kernel log noise
Mikulas Patocka (1):
dm integrity: fix the maximum number of arguments
Olaf Hering (1):
kbuild: enforce -Werror=return-type
Paul Cercueil (1):
MIPS: boot: Fix unaligned access with CONFIG_MIPS_RAW_APPENDED_DTB
Randy Dunlap (1):
arch/arc: add copy_user_page() to <asm/page.h> to fix build error on
ARC
Rasmus Villemoes (1):
ethernet: ucc_geth: fix definition and size of ucc_geth_tx_global_pram
Roberto Sassu (1):
ima: Remove __init annotation from ima_pcrread()
Shawn Guo (1):
ACPI: scan: add stub acpi_create_platform_device() for !CONFIG_ACPI
Thomas Hebb (1):
ASoC: dapm: remove widget from dirty list on free
Trond Myklebust (3):
pNFS: Mark layout for return if return-on-close was not sent
NFS/pNFS: Fix a leak of the layout 'plh_outstanding' counter
NFS: nfs_igrab_and_active must first reference the superblock
Wei Liu (1):
x86/hyperv: check cpu mask after interrupt has been disabled
Yang Yingliang (1):
config: set config CONFIG_KPROBE_EVENTS_ON_NOTRACE default value
j.nixdorf(a)avm.de (1):
net: sunrpc: interpret the return value of kstrtou32 correctly
Makefile | 4 ++--
arch/arc/Makefile | 9 ++-----
arch/arc/include/asm/page.h | 1 +
arch/arm/boot/dts/picoxcell-pc3x2.dtsi | 4 ++++
arch/arm64/configs/hulk_defconfig | 1 +
arch/mips/boot/compressed/decompress.c | 3 ++-
arch/mips/kernel/relocate.c | 10 ++++++--
arch/x86/hyperv/mmu.c | 12 +++++++---
block/bfq-iosched.c | 8 +++----
drivers/acpi/internal.h | 2 +-
drivers/acpi/scan.c | 15 +++++++++++-
drivers/gpu/drm/msm/msm_drv.c | 8 +++----
drivers/infiniband/hw/mlx5/main.c | 2 +-
drivers/infiniband/hw/usnic/usnic_ib_verbs.c | 3 +++
drivers/isdn/mISDN/Kconfig | 1 +
drivers/md/dm-integrity.c | 2 +-
drivers/md/dm-snap.c | 24 +++++++++++++++++++
drivers/md/dm.c | 2 +-
.../ethernet/freescale/fs_enet/mii-bitbang.c | 1 +
.../net/ethernet/freescale/fs_enet/mii-fec.c | 1 +
drivers/net/ethernet/freescale/ucc_geth.h | 9 ++++++-
drivers/net/usb/cdc_ether.c | 7 ++++++
drivers/net/usb/r8152.c | 1 +
fs/btrfs/qgroup.c | 13 +++++++---
fs/btrfs/super.c | 8 +++++++
fs/ext4/ioctl.c | 3 +++
fs/nfs/internal.h | 12 ++++++----
fs/nfs/nfs4proc.c | 2 +-
fs/nfs/pnfs.c | 7 ++++++
include/linux/acpi.h | 7 ++++++
kernel/trace/Kconfig | 2 +-
kernel/trace/trace_kprobe.c | 2 +-
mm/hugetlb.c | 2 +-
mm/slub.c | 2 +-
net/netfilter/nf_conntrack_standalone.c | 3 +++
net/netfilter/nf_nat_core.c | 1 +
net/sunrpc/addr.c | 2 +-
security/integrity/ima/ima_crypto.c | 2 +-
security/lsm_audit.c | 7 ++++--
sound/firewire/fireface/ff-transaction.c | 2 +-
sound/firewire/tascam/tascam-transaction.c | 2 +-
sound/soc/intel/skylake/cnl-sst.c | 1 +
sound/soc/meson/axg-tdm-interface.c | 14 ++++++++++-
sound/soc/soc-dapm.c | 1 +
44 files changed, 176 insertions(+), 49 deletions(-)
--
2.25.1

27 Jan '21
hulk inclusion
category: bugfix
bugzilla: NA
CVE: NA
MAP_CHECKNODE was defined in uapi/asm-generic/mman.h, which is not
automatically included by mm/mmap.c when building on platforms such as
mips, resulting in the following compile error:
mm/mmap.c: In function ‘__do_mmap’:
mm/mmap.c:1581:14: error: ‘MAP_CHECKNODE’ undeclared (first use in this function)
if (flags & MAP_CHECKNODE)
^
mm/mmap.c:1581:14: note: each undeclared identifier is reported only once for each function it appears in
scripts/Makefile.build:303: recipe for target 'mm/mmap.o' failed
Fixes: 56a22a261008 ("arm64/ascend: mm: Add MAP_CHECKNODE flag to check node hugetlb")
Signed-off-by: Bixuan Cui <cuibixuan(a)huawei.com>
---
arch/alpha/include/uapi/asm/mman.h | 1 +
arch/mips/include/uapi/asm/mman.h | 1 +
arch/parisc/include/uapi/asm/mman.h | 1 +
arch/powerpc/include/uapi/asm/mman.h | 1 +
arch/sparc/include/uapi/asm/mman.h | 1 +
arch/xtensa/include/uapi/asm/mman.h | 1 +
6 files changed, 6 insertions(+)
diff --git a/arch/alpha/include/uapi/asm/mman.h b/arch/alpha/include/uapi/asm/mman.h
index b3acfc00..1c7ce27 100644
--- a/arch/alpha/include/uapi/asm/mman.h
+++ b/arch/alpha/include/uapi/asm/mman.h
@@ -34,6 +34,7 @@
#define MAP_HUGETLB 0x100000 /* create a huge page mapping */
#define MAP_FIXED_NOREPLACE 0x200000/* MAP_FIXED which doesn't unmap underlying mapping */
#define MAP_PA32BIT 0x400000 /* physical address is within 4G */
+#define MAP_CHECKNODE 0x800000 /* hugetlb numa node check */
#define MS_ASYNC 1 /* sync memory asynchronously */
#define MS_SYNC 2 /* synchronous memory sync */
diff --git a/arch/mips/include/uapi/asm/mman.h b/arch/mips/include/uapi/asm/mman.h
index 72a00c7..4570a54 100644
--- a/arch/mips/include/uapi/asm/mman.h
+++ b/arch/mips/include/uapi/asm/mman.h
@@ -52,6 +52,7 @@
#define MAP_HUGETLB 0x80000 /* create a huge page mapping */
#define MAP_FIXED_NOREPLACE 0x100000 /* MAP_FIXED which doesn't unmap underlying mapping */
#define MAP_PA32BIT 0x400000 /* physical address is within 4G */
+#define MAP_CHECKNODE 0x800000 /* hugetlb numa node check */
/*
* Flags for msync
diff --git a/arch/parisc/include/uapi/asm/mman.h b/arch/parisc/include/uapi/asm/mman.h
index 9e989d6..06857eb 100644
--- a/arch/parisc/include/uapi/asm/mman.h
+++ b/arch/parisc/include/uapi/asm/mman.h
@@ -28,6 +28,7 @@
#define MAP_HUGETLB 0x80000 /* create a huge page mapping */
#define MAP_FIXED_NOREPLACE 0x100000 /* MAP_FIXED which doesn't unmap underlying mapping */
#define MAP_PA32BIT 0x400000 /* physical address is within 4G */
+#define MAP_CHECKNODE 0x800000 /* hugetlb numa node check */
#define MS_SYNC 1 /* synchronous memory sync */
#define MS_ASYNC 2 /* sync memory asynchronously */
diff --git a/arch/powerpc/include/uapi/asm/mman.h b/arch/powerpc/include/uapi/asm/mman.h
index 95f884a..24354f7 100644
--- a/arch/powerpc/include/uapi/asm/mman.h
+++ b/arch/powerpc/include/uapi/asm/mman.h
@@ -30,6 +30,7 @@
#define MAP_STACK 0x20000 /* give out an address that is best suited for process/thread stacks */
#define MAP_HUGETLB 0x40000 /* create a huge page mapping */
#define MAP_PA32BIT 0x400000 /* physical address is within 4G */
+#define MAP_CHECKNODE 0x800000 /* hugetlb numa node check */
/* Override any generic PKEY permission defines */
#define PKEY_DISABLE_EXECUTE 0x4
diff --git a/arch/sparc/include/uapi/asm/mman.h b/arch/sparc/include/uapi/asm/mman.h
index 0d1881b..214abe1 100644
--- a/arch/sparc/include/uapi/asm/mman.h
+++ b/arch/sparc/include/uapi/asm/mman.h
@@ -27,6 +27,7 @@
#define MAP_STACK 0x20000 /* give out an address that is best suited for process/thread stacks */
#define MAP_HUGETLB 0x40000 /* create a huge page mapping */
#define MAP_PA32BIT 0x400000 /* physical address is within 4G */
+#define MAP_CHECKNODE 0x800000 /* hugetlb numa node check */
#endif /* _UAPI__SPARC_MMAN_H__ */
diff --git a/arch/xtensa/include/uapi/asm/mman.h b/arch/xtensa/include/uapi/asm/mman.h
index f584a59..2c9d705 100644
--- a/arch/xtensa/include/uapi/asm/mman.h
+++ b/arch/xtensa/include/uapi/asm/mman.h
@@ -59,6 +59,7 @@
#define MAP_HUGETLB 0x80000 /* create a huge page mapping */
#define MAP_FIXED_NOREPLACE 0x100000 /* MAP_FIXED which doesn't unmap underlying mapping */
#define MAP_PA32BIT 0x400000 /* physical address is within 4G */
+#define MAP_CHECKNODE 0x800000 /* hugetlb numa node check */
#ifdef CONFIG_MMAP_ALLOW_UNINITIALIZED
# define MAP_UNINITIALIZED 0x4000000 /* For anonymous mmap, memory could be
* uninitialized */
--
2.7.4
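For readers outside the internal trees: a hypothetical userspace sketch of how MAP_CHECKNODE might be used, assuming the flag value from this patch and that the target NUMA node travels in the MAP_HUGE_SHIFT bit field, as the set_vm_checknode() helper discussed earlier in this archive suggests (none of this is upstream API):
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#ifndef MAP_CHECKNODE
#define MAP_CHECKNODE 0x800000	/* value added by this patch */
#endif
#ifndef MAP_HUGE_SHIFT
#define MAP_HUGE_SHIFT 26
#endif

int main(void)
{
	size_t len = 2UL << 20;	/* one 2 MiB huge page */
	int node = 1;		/* NUMA node to check (assumed encoding) */
	void *p;

	p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB |
		 MAP_CHECKNODE | (node << MAP_HUGE_SHIFT), -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	munmap(p, len);
	return 0;
}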
From: zhoukang <zhoukang7(a)huawei.com>
euleros inclusion
category: bugfix
bugzilla: 46918
--------------------------------
EulerOS is KABI-compatible with openEuler and uses the same config,
so the arch/arm64/configs/euleros_defconfig file is no longer used.
Signed-off-by: zhoukang <zhoukang7(a)huawei.com>
Acked-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/arm64/configs/euleros_defconfig | 5680 --------------------------
1 file changed, 5680 deletions(-)
delete mode 100644 arch/arm64/configs/euleros_defconfig
diff --git a/arch/arm64/configs/euleros_defconfig b/arch/arm64/configs/euleros_defconfig
deleted file mode 100644
index 53af3aa696386..0000000000000
--- a/arch/arm64/configs/euleros_defconfig
+++ /dev/null
@@ -1,5680 +0,0 @@
-#
-# Automatically generated file; DO NOT EDIT.
-# Linux/arm64 4.19.30 Kernel Configuration
-#
-
-#
-# Compiler: gcc (GCC) 7.3.0
-#
-CONFIG_CC_IS_GCC=y
-CONFIG_GCC_VERSION=70300
-CONFIG_CLANG_VERSION=0
-CONFIG_IRQ_WORK=y
-CONFIG_BUILDTIME_EXTABLE_SORT=y
-CONFIG_THREAD_INFO_IN_TASK=y
-
-#
-# General setup
-#
-CONFIG_INIT_ENV_ARG_LIMIT=32
-# CONFIG_COMPILE_TEST is not set
-CONFIG_LOCALVERSION=""
-# CONFIG_LOCALVERSION_AUTO is not set
-CONFIG_BUILD_SALT=""
-CONFIG_DEFAULT_HOSTNAME="(none)"
-CONFIG_SWAP=y
-CONFIG_SYSVIPC=y
-CONFIG_SYSVIPC_SYSCTL=y
-CONFIG_POSIX_MQUEUE=y
-CONFIG_POSIX_MQUEUE_SYSCTL=y
-CONFIG_CROSS_MEMORY_ATTACH=y
-# CONFIG_USELIB is not set
-CONFIG_AUDIT=y
-CONFIG_HAVE_ARCH_AUDITSYSCALL=y
-CONFIG_AUDITSYSCALL=y
-CONFIG_AUDIT_WATCH=y
-CONFIG_AUDIT_TREE=y
-
-#
-# IRQ subsystem
-#
-CONFIG_GENERIC_IRQ_PROBE=y
-CONFIG_GENERIC_IRQ_SHOW=y
-CONFIG_GENERIC_IRQ_SHOW_LEVEL=y
-CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK=y
-CONFIG_GENERIC_IRQ_MIGRATION=y
-CONFIG_HARDIRQS_SW_RESEND=y
-CONFIG_IRQ_DOMAIN=y
-CONFIG_IRQ_DOMAIN_HIERARCHY=y
-CONFIG_GENERIC_MSI_IRQ=y
-CONFIG_GENERIC_MSI_IRQ_DOMAIN=y
-CONFIG_HANDLE_DOMAIN_IRQ=y
-CONFIG_IRQ_FORCED_THREADING=y
-CONFIG_SPARSE_IRQ=y
-# CONFIG_GENERIC_IRQ_DEBUGFS is not set
-CONFIG_GENERIC_IRQ_MULTI_HANDLER=y
-CONFIG_ARCH_CLOCKSOURCE_DATA=y
-CONFIG_GENERIC_TIME_VSYSCALL=y
-CONFIG_GENERIC_CLOCKEVENTS=y
-CONFIG_ARCH_HAS_TICK_BROADCAST=y
-CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
-
-#
-# Timers subsystem
-#
-CONFIG_TICK_ONESHOT=y
-CONFIG_NO_HZ_COMMON=y
-# CONFIG_HZ_PERIODIC is not set
-# CONFIG_NO_HZ_IDLE is not set
-CONFIG_NO_HZ_FULL=y
-CONFIG_NO_HZ=y
-CONFIG_HIGH_RES_TIMERS=y
-# CONFIG_PREEMPT_NONE is not set
-CONFIG_PREEMPT_VOLUNTARY=y
-# CONFIG_PREEMPT is not set
-
-#
-# CPU/Task time and stats accounting
-#
-CONFIG_VIRT_CPU_ACCOUNTING=y
-CONFIG_VIRT_CPU_ACCOUNTING_GEN=y
-CONFIG_IRQ_TIME_ACCOUNTING=y
-CONFIG_HAVE_SCHED_AVG_IRQ=y
-CONFIG_BSD_PROCESS_ACCT=y
-CONFIG_BSD_PROCESS_ACCT_V3=y
-CONFIG_TASKSTATS=y
-CONFIG_TASK_DELAY_ACCT=y
-CONFIG_TASK_XACCT=y
-CONFIG_TASK_IO_ACCOUNTING=y
-CONFIG_CPU_ISOLATION=y
-
-#
-# RCU Subsystem
-#
-CONFIG_TREE_RCU=y
-# CONFIG_RCU_EXPERT is not set
-CONFIG_SRCU=y
-CONFIG_TREE_SRCU=y
-CONFIG_RCU_STALL_COMMON=y
-CONFIG_RCU_NEED_SEGCBLIST=y
-CONFIG_CONTEXT_TRACKING=y
-# CONFIG_CONTEXT_TRACKING_FORCE is not set
-CONFIG_RCU_NOCB_CPU=y
-CONFIG_BUILD_BIN2C=y
-CONFIG_IKCONFIG=y
-CONFIG_IKCONFIG_PROC=y
-CONFIG_LOG_BUF_SHIFT=20
-CONFIG_LOG_CPU_MAX_BUF_SHIFT=12
-CONFIG_PRINTK_SAFE_LOG_BUF_SHIFT=13
-CONFIG_GENERIC_SCHED_CLOCK=y
-CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y
-CONFIG_ARCH_SUPPORTS_INT128=y
-CONFIG_NUMA_BALANCING=y
-CONFIG_NUMA_BALANCING_DEFAULT_ENABLED=y
-CONFIG_CGROUPS=y
-CONFIG_PAGE_COUNTER=y
-CONFIG_MEMCG=y
-CONFIG_MEMCG_SWAP=y
-CONFIG_MEMCG_SWAP_ENABLED=y
-CONFIG_MEMCG_KMEM=y
-CONFIG_BLK_CGROUP=y
-# CONFIG_DEBUG_BLK_CGROUP is not set
-CONFIG_CGROUP_WRITEBACK=y
-CONFIG_CGROUP_SCHED=y
-CONFIG_FAIR_GROUP_SCHED=y
-CONFIG_CFS_BANDWIDTH=y
-CONFIG_RT_GROUP_SCHED=y
-CONFIG_CGROUP_PIDS=y
-CONFIG_CGROUP_RDMA=y
-CONFIG_CGROUP_FREEZER=y
-CONFIG_CGROUP_HUGETLB=y
-CONFIG_CPUSETS=y
-CONFIG_PROC_PID_CPUSET=y
-CONFIG_CGROUP_DEVICE=y
-CONFIG_CGROUP_CPUACCT=y
-CONFIG_CGROUP_PERF=y
-CONFIG_CGROUP_BPF=y
-# CONFIG_CGROUP_DEBUG is not set
-CONFIG_SOCK_CGROUP_DATA=y
-CONFIG_CGROUP_FILES=y
-CONFIG_NAMESPACES=y
-CONFIG_UTS_NS=y
-CONFIG_IPC_NS=y
-CONFIG_USER_NS=y
-CONFIG_PID_NS=y
-CONFIG_NET_NS=y
-CONFIG_CHECKPOINT_RESTORE=y
-CONFIG_SCHED_STEAL=y
-CONFIG_SCHED_AUTOGROUP=y
-# CONFIG_SYSFS_DEPRECATED is not set
-CONFIG_RELAY=y
-CONFIG_BLK_DEV_INITRD=y
-CONFIG_INITRAMFS_SOURCE=""
-CONFIG_RD_GZIP=y
-CONFIG_RD_BZIP2=y
-CONFIG_RD_LZMA=y
-CONFIG_RD_XZ=y
-CONFIG_RD_LZO=y
-CONFIG_RD_LZ4=y
-CONFIG_INITRAMFS_FILE_METADATA=""
-CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE=y
-# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_SYSCTL=y
-CONFIG_ANON_INODES=y
-CONFIG_HAVE_UID16=y
-CONFIG_SYSCTL_EXCEPTION_TRACE=y
-CONFIG_BPF=y
-CONFIG_EXPERT=y
-CONFIG_UID16=y
-CONFIG_MULTIUSER=y
-# CONFIG_SGETMASK_SYSCALL is not set
-CONFIG_SYSFS_SYSCALL=y
-# CONFIG_SYSCTL_SYSCALL is not set
-CONFIG_FHANDLE=y
-CONFIG_POSIX_TIMERS=y
-CONFIG_PRINTK=y
-CONFIG_PRINTK_NMI=y
-CONFIG_BUG=y
-CONFIG_ELF_CORE=y
-CONFIG_BASE_FULL=y
-CONFIG_FUTEX=y
-CONFIG_FUTEX_PI=y
-CONFIG_EPOLL=y
-CONFIG_SIGNALFD=y
-CONFIG_TIMERFD=y
-CONFIG_EVENTFD=y
-CONFIG_SHMEM=y
-CONFIG_AIO=y
-CONFIG_ADVISE_SYSCALLS=y
-CONFIG_MEMBARRIER=y
-CONFIG_KALLSYMS=y
-CONFIG_KALLSYMS_ALL=y
-CONFIG_KALLSYMS_BASE_RELATIVE=y
-CONFIG_BPF_SYSCALL=y
-CONFIG_BPF_JIT_ALWAYS_ON=y
-CONFIG_USERFAULTFD=y
-CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE=y
-CONFIG_RSEQ=y
-# CONFIG_DEBUG_RSEQ is not set
-# CONFIG_EMBEDDED is not set
-CONFIG_HAVE_PERF_EVENTS=y
-CONFIG_PERF_USE_VMALLOC=y
-# CONFIG_PC104 is not set
-
-#
-# Kernel Performance Events And Counters
-#
-CONFIG_PERF_EVENTS=y
-CONFIG_DEBUG_PERF_USE_VMALLOC=y
-CONFIG_VM_EVENT_COUNTERS=y
-CONFIG_SLUB_DEBUG=y
-# CONFIG_SLUB_MEMCG_SYSFS_ON is not set
-# CONFIG_COMPAT_BRK is not set
-# CONFIG_SLAB is not set
-CONFIG_SLUB=y
-# CONFIG_SLOB is not set
-CONFIG_SLAB_MERGE_DEFAULT=y
-CONFIG_SLAB_FREELIST_RANDOM=y
-# CONFIG_SLAB_FREELIST_HARDENED is not set
-CONFIG_SLUB_CPU_PARTIAL=y
-CONFIG_SYSTEM_DATA_VERIFICATION=y
-CONFIG_PROFILING=y
-CONFIG_TRACEPOINTS=y
-CONFIG_ARM64=y
-CONFIG_64BIT=y
-CONFIG_MMU=y
-CONFIG_ARM64_PAGE_SHIFT=12
-CONFIG_ARM64_CONT_SHIFT=4
-CONFIG_ARCH_MMAP_RND_BITS_MIN=18
-CONFIG_ARCH_MMAP_RND_BITS_MAX=33
-CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MIN=11
-CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MAX=16
-CONFIG_STACKTRACE_SUPPORT=y
-CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000
-CONFIG_LOCKDEP_SUPPORT=y
-CONFIG_TRACE_IRQFLAGS_SUPPORT=y
-CONFIG_RWSEM_XCHGADD_ALGORITHM=y
-CONFIG_GENERIC_BUG=y
-CONFIG_GENERIC_BUG_RELATIVE_POINTERS=y
-CONFIG_GENERIC_HWEIGHT=y
-CONFIG_GENERIC_CSUM=y
-CONFIG_GENERIC_CALIBRATE_DELAY=y
-CONFIG_ZONE_DMA32=y
-CONFIG_HAVE_GENERIC_GUP=y
-CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
-CONFIG_SMP=y
-CONFIG_KERNEL_MODE_NEON=y
-CONFIG_FIX_EARLYCON_MEM=y
-CONFIG_PGTABLE_LEVELS=4
-CONFIG_ARCH_SUPPORTS_UPROBES=y
-CONFIG_ARCH_PROC_KCORE_TEXT=y
-
-#
-# Platform selection
-#
-# CONFIG_ARCH_ACTIONS is not set
-# CONFIG_ARCH_SUNXI is not set
-# CONFIG_ARCH_ALPINE is not set
-# CONFIG_ARCH_BCM2835 is not set
-# CONFIG_ARCH_BCM_IPROC is not set
-# CONFIG_ARCH_BERLIN is not set
-# CONFIG_ARCH_BRCMSTB is not set
-# CONFIG_ARCH_EXYNOS is not set
-# CONFIG_ARCH_K3 is not set
-# CONFIG_ARCH_LAYERSCAPE is not set
-# CONFIG_ARCH_LG1K is not set
-CONFIG_ARCH_HISI=y
-# CONFIG_ARCH_MEDIATEK is not set
-# CONFIG_ARCH_MESON is not set
-# CONFIG_ARCH_MVEBU is not set
-CONFIG_ARCH_QCOM=y
-# CONFIG_ARCH_REALTEK is not set
-# CONFIG_ARCH_ROCKCHIP is not set
-CONFIG_ARCH_SEATTLE=y
-# CONFIG_ARCH_SYNQUACER is not set
-# CONFIG_ARCH_RENESAS is not set
-# CONFIG_ARCH_STRATIX10 is not set
-# CONFIG_ARCH_TEGRA is not set
-# CONFIG_ARCH_SPRD is not set
-CONFIG_ARCH_THUNDER=y
-CONFIG_ARCH_THUNDER2=y
-# CONFIG_ARCH_UNIPHIER is not set
-CONFIG_ARCH_VEXPRESS=y
-CONFIG_ARCH_XGENE=y
-# CONFIG_ARCH_ZX is not set
-# CONFIG_ARCH_ZYNQMP is not set
-CONFIG_HAVE_LIVEPATCH_WO_FTRACE=y
-
-#
-# Enable Livepatch
-#
-CONFIG_LIVEPATCH=y
-CONFIG_LIVEPATCH_WO_FTRACE=y
-CONFIG_LIVEPATCH_STOP_MACHINE_CONSISTENCY=y
-# CONFIG_LIVEPATCH_STACK is not set
-CONFIG_LIVEPATCH_RESTRICT_KPROBE=y
-
-#
-# Bus support
-#
-CONFIG_PCI=y
-CONFIG_PCI_DOMAINS=y
-CONFIG_PCI_DOMAINS_GENERIC=y
-CONFIG_PCI_SYSCALL=y
-CONFIG_PCIEPORTBUS=y
-CONFIG_HOTPLUG_PCI_PCIE=y
-CONFIG_PCIEAER=y
-CONFIG_PCIEAER_INJECT=m
-CONFIG_PCIE_ECRC=y
-CONFIG_PCIEASPM=y
-# CONFIG_PCIEASPM_DEBUG is not set
-CONFIG_PCIEASPM_DEFAULT=y
-# CONFIG_PCIEASPM_POWERSAVE is not set
-# CONFIG_PCIEASPM_POWER_SUPERSAVE is not set
-# CONFIG_PCIEASPM_PERFORMANCE is not set
-CONFIG_PCIE_PME=y
-CONFIG_PCIE_DPC=y
-# CONFIG_PCIE_PTM is not set
-CONFIG_PCI_MSI=y
-CONFIG_PCI_MSI_IRQ_DOMAIN=y
-CONFIG_PCI_QUIRKS=y
-# CONFIG_PCI_DEBUG is not set
-# CONFIG_PCI_REALLOC_ENABLE_AUTO is not set
-CONFIG_PCI_STUB=y
-# CONFIG_PCI_PF_STUB is not set
-CONFIG_PCI_ATS=y
-CONFIG_PCI_ECAM=y
-CONFIG_PCI_IOV=y
-CONFIG_PCI_PRI=y
-CONFIG_PCI_PASID=y
-CONFIG_PCI_LABEL=y
-CONFIG_HOTPLUG_PCI=y
-CONFIG_HOTPLUG_PCI_ACPI=y
-CONFIG_HOTPLUG_PCI_ACPI_IBM=m
-# CONFIG_HOTPLUG_PCI_CPCI is not set
-CONFIG_HOTPLUG_PCI_SHPC=y
-
-#
-# PCI controller drivers
-#
-
-#
-# Cadence PCIe controllers support
-#
-# CONFIG_PCIE_CADENCE_HOST is not set
-# CONFIG_PCI_FTPCI100 is not set
-CONFIG_PCI_HOST_COMMON=y
-CONFIG_PCI_HOST_GENERIC=y
-# CONFIG_PCIE_XILINX is not set
-CONFIG_PCI_XGENE=y
-CONFIG_PCI_XGENE_MSI=y
-CONFIG_PCI_HOST_THUNDER_PEM=y
-CONFIG_PCI_HOST_THUNDER_ECAM=y
-
-#
-# DesignWare PCI Core Support
-#
-CONFIG_PCIE_DW=y
-CONFIG_PCIE_DW_HOST=y
-# CONFIG_PCIE_DW_PLAT_HOST is not set
-CONFIG_PCI_HISI=y
-# CONFIG_PCIE_QCOM is not set
-# CONFIG_PCIE_KIRIN is not set
-# CONFIG_PCIE_HISI_STB is not set
-
-#
-# PCI Endpoint
-#
-# CONFIG_PCI_ENDPOINT is not set
-
-#
-# PCI switch controller drivers
-#
-# CONFIG_PCI_SW_SWITCHTEC is not set
-
-#
-# Kernel Features
-#
-
-#
-# ARM errata workarounds via the alternatives framework
-#
-CONFIG_ARM64_ERRATUM_826319=y
-CONFIG_ARM64_ERRATUM_827319=y
-CONFIG_ARM64_ERRATUM_824069=y
-CONFIG_ARM64_ERRATUM_819472=y
-CONFIG_ARM64_ERRATUM_832075=y
-CONFIG_ARM64_ERRATUM_834220=y
-CONFIG_ARM64_ERRATUM_845719=y
-CONFIG_ARM64_ERRATUM_843419=y
-CONFIG_ARM64_ERRATUM_1024718=y
-# CONFIG_ARM64_ERRATUM_1463225 is not set
-# CONFIG_ARM64_ERRATUM_1542419 is not set
-CONFIG_CAVIUM_ERRATUM_22375=y
-CONFIG_CAVIUM_ERRATUM_23144=y
-CONFIG_CAVIUM_ERRATUM_23154=y
-CONFIG_CAVIUM_ERRATUM_27456=y
-CONFIG_CAVIUM_ERRATUM_30115=y
-CONFIG_QCOM_FALKOR_ERRATUM_1003=y
-CONFIG_QCOM_FALKOR_ERRATUM_1009=y
-CONFIG_QCOM_QDF2400_ERRATUM_0065=y
-CONFIG_SOCIONEXT_SYNQUACER_PREITS=y
-CONFIG_HISILICON_ERRATUM_161600802=y
-CONFIG_QCOM_FALKOR_ERRATUM_E1041=y
-CONFIG_ARM64_4K_PAGES=y
-# CONFIG_ARM64_16K_PAGES is not set
-# CONFIG_ARM64_64K_PAGES is not set
-# CONFIG_ARM64_VA_BITS_39 is not set
-CONFIG_ARM64_VA_BITS_48=y
-CONFIG_ARM64_VA_BITS=48
-CONFIG_ARM64_PA_BITS_48=y
-CONFIG_ARM64_PA_BITS=48
-# CONFIG_CPU_BIG_ENDIAN is not set
-CONFIG_SCHED_MC=y
-# CONFIG_SCHED_SMT is not set
-CONFIG_NR_CPUS=1024
-CONFIG_HOTPLUG_CPU=y
-CONFIG_ARM64_ERR_RECOV=y
-CONFIG_MPAM=y
-CONFIG_NUMA=y
-CONFIG_NODES_SHIFT=3
-CONFIG_NUMA_AWARE_SPINLOCKS=y
-CONFIG_USE_PERCPU_NUMA_NODE_ID=y
-CONFIG_HAVE_SETUP_PER_CPU_AREA=y
-CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y
-CONFIG_HOLES_IN_ZONE=y
-# CONFIG_HZ_100 is not set
-CONFIG_HZ_250=y
-# CONFIG_HZ_300 is not set
-# CONFIG_HZ_1000 is not set
-CONFIG_HZ=250
-CONFIG_SCHED_HRTICK=y
-CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y
-CONFIG_ARCH_HAS_HOLES_MEMORYMODEL=y
-CONFIG_ARCH_SPARSEMEM_ENABLE=y
-CONFIG_ARCH_SPARSEMEM_DEFAULT=y
-CONFIG_ARCH_SELECT_MEMORY_MODEL=y
-CONFIG_HAVE_ARCH_PFN_VALID=y
-CONFIG_HW_PERF_EVENTS=y
-CONFIG_SYS_SUPPORTS_HUGETLBFS=y
-CONFIG_ARCH_WANT_HUGE_PMD_SHARE=y
-CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y
-CONFIG_SECCOMP=y
-CONFIG_PARAVIRT=y
-CONFIG_PARAVIRT_SPINLOCKS=y
-CONFIG_PARAVIRT_TIME_ACCOUNTING=y
-CONFIG_KEXEC=y
-CONFIG_CRASH_DUMP=y
-# CONFIG_XEN is not set
-CONFIG_FORCE_MAX_ZONEORDER=11
-CONFIG_UNMAP_KERNEL_AT_EL0=y
-CONFIG_HARDEN_BRANCH_PREDICTOR=y
-CONFIG_HARDEN_EL2_VECTORS=y
-CONFIG_ARM64_SSBD=y
-# CONFIG_ARMV8_DEPRECATED is not set
-# CONFIG_ARM64_SW_TTBR0_PAN is not set
-
-#
-# ARMv8.1 architectural features
-#
-CONFIG_ARM64_HW_AFDBM=y
-CONFIG_ARM64_PAN=y
-CONFIG_ARM64_LSE_ATOMICS=y
-CONFIG_ARM64_VHE=y
-
-#
-# ARMv8.2 architectural features
-#
-CONFIG_ARM64_UAO=y
-CONFIG_ARM64_PMEM=y
-CONFIG_ARM64_RAS_EXTN=y
-CONFIG_ARM64_SVE=y
-CONFIG_ARM64_MODULE_PLTS=y
-CONFIG_ARM64_PSEUDO_NMI=y
-CONFIG_RELOCATABLE=y
-CONFIG_RANDOMIZE_BASE=y
-CONFIG_RANDOMIZE_MODULE_REGION_FULL=y
-CONFIG_ARM64_CNP=n
-
-#
-# ARMv8.4 architectural features
-#
-# CONFIG_ARM64_TLB_RANGE is not set
-
-#
-# ARMv8.5 architectural features
-#
-# CONFIG_ARCH_RANDOM is not set
-# CONFIG_ARM64_E0PD is not set
-
-#
-# Boot options
-#
-CONFIG_ARM64_ACPI_PARKING_PROTOCOL=y
-CONFIG_CMDLINE="console=ttyAMA0"
-# CONFIG_CMDLINE_FORCE is not set
-CONFIG_EFI_STUB=y
-CONFIG_EFI=y
-CONFIG_DMI=y
-CONFIG_COMPAT=y
-CONFIG_SYSVIPC_COMPAT=y
-
-#
-# Power management options
-#
-CONFIG_SUSPEND=y
-CONFIG_SUSPEND_FREEZER=y
-# CONFIG_SUSPEND_SKIP_SYNC is not set
-CONFIG_HIBERNATE_CALLBACKS=y
-CONFIG_HIBERNATION=y
-CONFIG_PM_STD_PARTITION=""
-CONFIG_PM_SLEEP=y
-CONFIG_PM_SLEEP_SMP=y
-# CONFIG_PM_AUTOSLEEP is not set
-# CONFIG_PM_WAKELOCKS is not set
-CONFIG_PM=y
-CONFIG_PM_DEBUG=y
-# CONFIG_PM_ADVANCED_DEBUG is not set
-# CONFIG_PM_TEST_SUSPEND is not set
-CONFIG_PM_SLEEP_DEBUG=y
-# CONFIG_DPM_WATCHDOG is not set
-CONFIG_PM_CLK=y
-CONFIG_PM_GENERIC_DOMAINS=y
-# CONFIG_WQ_POWER_EFFICIENT_DEFAULT is not set
-CONFIG_PM_GENERIC_DOMAINS_SLEEP=y
-CONFIG_PM_GENERIC_DOMAINS_OF=y
-CONFIG_CPU_PM=y
-CONFIG_ARCH_HIBERNATION_POSSIBLE=y
-CONFIG_ARCH_HIBERNATION_HEADER=y
-CONFIG_ARCH_SUSPEND_POSSIBLE=y
-
-#
-# CPU Power Management
-#
-
-#
-# CPU Idle
-#
-CONFIG_CPU_IDLE=y
-# CONFIG_CPU_IDLE_GOV_LADDER is not set
-CONFIG_CPU_IDLE_GOV_MENU=y
-# CONFIG_CPU_IDLE_GOV_TEO is not set
-# CONFIG_CPU_IDLE_GOV_HALTPOLL is not set
-
-#
-# ARM CPU Idle Drivers
-#
-# CONFIG_ARM_CPUIDLE is not set
-# CONFIG_HALTPOLL_CPUIDLE is not set
-
-#
-# CPU Frequency scaling
-#
-CONFIG_CPU_FREQ=y
-CONFIG_CPU_FREQ_GOV_ATTR_SET=y
-CONFIG_CPU_FREQ_GOV_COMMON=y
-CONFIG_CPU_FREQ_STAT=y
-CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE=y
-# CONFIG_CPU_FREQ_DEFAULT_GOV_POWERSAVE is not set
-# CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE is not set
-# CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND is not set
-# CONFIG_CPU_FREQ_DEFAULT_GOV_CONSERVATIVE is not set
-# CONFIG_CPU_FREQ_DEFAULT_GOV_SCHEDUTIL is not set
-CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
-CONFIG_CPU_FREQ_GOV_POWERSAVE=y
-CONFIG_CPU_FREQ_GOV_USERSPACE=y
-CONFIG_CPU_FREQ_GOV_ONDEMAND=y
-CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
-# CONFIG_CPU_FREQ_GOV_SCHEDUTIL is not set
-
-#
-# CPU frequency scaling drivers
-#
-# CONFIG_CPUFREQ_DT is not set
-CONFIG_ACPI_CPPC_CPUFREQ=y
-CONFIG_HISILICON_CPPC_CPUFREQ_WORKAROUND=y
-# CONFIG_ARM_BIG_LITTLE_CPUFREQ is not set
-CONFIG_ARM_SCPI_CPUFREQ=m
-# CONFIG_QORIQ_CPUFREQ is not set
-
-#
-# Firmware Drivers
-#
-CONFIG_ARM_PSCI_FW=y
-# CONFIG_ARM_PSCI_CHECKER is not set
-# CONFIG_ARM_SCMI_PROTOCOL is not set
-CONFIG_ARM_SCPI_PROTOCOL=m
-CONFIG_ARM_SCPI_POWER_DOMAIN=m
-CONFIG_ARM_SDE_INTERFACE=y
-# CONFIG_FIRMWARE_MEMMAP is not set
-CONFIG_DMIID=y
-CONFIG_DMI_SYSFS=y
-CONFIG_FW_CFG_SYSFS=y
-# CONFIG_FW_CFG_SYSFS_CMDLINE is not set
-CONFIG_HAVE_ARM_SMCCC=y
-# CONFIG_GOOGLE_FIRMWARE is not set
-
-#
-# EFI (Extensible Firmware Interface) Support
-#
-CONFIG_EFI_VARS=y
-CONFIG_EFI_ESRT=y
-CONFIG_EFI_VARS_PSTORE=y
-CONFIG_EFI_VARS_PSTORE_DEFAULT_DISABLE=y
-CONFIG_EFI_PARAMS_FROM_FDT=y
-CONFIG_EFI_RUNTIME_WRAPPERS=y
-CONFIG_EFI_ARMSTUB=y
-CONFIG_EFI_ARMSTUB_DTB_LOADER=y
-# CONFIG_EFI_BOOTLOADER_CONTROL is not set
-# CONFIG_EFI_CAPSULE_LOADER is not set
-# CONFIG_EFI_TEST is not set
-# CONFIG_RESET_ATTACK_MITIGATION is not set
-CONFIG_EFI_CUSTOM_SSDT_OVERLAYS=y
-CONFIG_UEFI_CPER=y
-CONFIG_UEFI_CPER_ARM=y
-
-#
-# Tegra firmware driver
-#
-CONFIG_ARCH_SUPPORTS_ACPI=y
-CONFIG_ACPI=y
-CONFIG_ACPI_GENERIC_GSI=y
-CONFIG_ACPI_CCA_REQUIRED=y
-# CONFIG_ACPI_DEBUGGER is not set
-CONFIG_ACPI_SPCR_TABLE=y
-# CONFIG_ACPI_EC_DEBUGFS is not set
-CONFIG_ACPI_BUTTON=y
-CONFIG_ACPI_FAN=y
-# CONFIG_ACPI_TAD is not set
-# CONFIG_ACPI_DOCK is not set
-CONFIG_ACPI_PROCESSOR_IDLE=y
-CONFIG_ACPI_MCFG=y
-CONFIG_ACPI_CPPC_LIB=y
-CONFIG_ACPI_PROCESSOR=y
-CONFIG_ACPI_IPMI=m
-CONFIG_ACPI_HOTPLUG_CPU=y
-CONFIG_ACPI_THERMAL=y
-CONFIG_ACPI_NUMA=y
-CONFIG_ARCH_HAS_ACPI_TABLE_UPGRADE=y
-CONFIG_ACPI_TABLE_UPGRADE=y
-# CONFIG_ACPI_DEBUG is not set
-CONFIG_ACPI_PCI_SLOT=y
-CONFIG_ACPI_CONTAINER=y
-# CONFIG_ACPI_HOTPLUG_MEMORY is not set
-CONFIG_ACPI_HED=y
-# CONFIG_ACPI_CUSTOM_METHOD is not set
-# CONFIG_ACPI_BGRT is not set
-CONFIG_ACPI_REDUCED_HARDWARE_ONLY=y
-# CONFIG_ACPI_NFIT is not set
-CONFIG_HAVE_ACPI_APEI=y
-CONFIG_ACPI_APEI=y
-CONFIG_ACPI_APEI_GHES=y
-CONFIG_ACPI_APEI_PCIEAER=y
-CONFIG_ACPI_APEI_SEA=y
-CONFIG_ACPI_APEI_MEMORY_FAILURE=y
-CONFIG_ACPI_APEI_EINJ=m
-# CONFIG_ACPI_APEI_ERST_DEBUG is not set
-# CONFIG_PMIC_OPREGION is not set
-# CONFIG_ACPI_CONFIGFS is not set
-CONFIG_ACPI_IORT=y
-CONFIG_ACPI_GTDT=y
-CONFIG_ACPI_PPTT=y
-CONFIG_HAVE_KVM_IRQCHIP=y
-CONFIG_HAVE_KVM_IRQFD=y
-CONFIG_HAVE_KVM_IRQ_ROUTING=y
-CONFIG_HAVE_KVM_EVENTFD=y
-CONFIG_KVM_MMIO=y
-CONFIG_HAVE_KVM_MSI=y
-CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT=y
-CONFIG_KVM_VFIO=y
-CONFIG_HAVE_KVM_ARCH_TLB_FLUSH_ALL=y
-CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT=y
-CONFIG_HAVE_KVM_IRQ_BYPASS=y
-CONFIG_HAVE_KVM_VCPU_RUN_PID_CHANGE=y
-CONFIG_IRQ_BYPASS_MANAGER=y
-CONFIG_VIRTUALIZATION=y
-CONFIG_KVM=y
-CONFIG_KVM_ARM_HOST=y
-CONFIG_KVM_ARM_PMU=y
-CONFIG_KVM_INDIRECT_VECTORS=y
-CONFIG_VHOST_NET=m
-# CONFIG_VHOST_SCSI is not set
-CONFIG_VHOST_VSOCK=m
-CONFIG_VHOST=m
-# CONFIG_VHOST_CROSS_ENDIAN_LEGACY is not set
-CONFIG_ARM64_CRYPTO=y
-CONFIG_CRYPTO_SHA256_ARM64=m
-# CONFIG_CRYPTO_SHA512_ARM64 is not set
-CONFIG_CRYPTO_SHA1_ARM64_CE=m
-CONFIG_CRYPTO_SHA2_ARM64_CE=m
-# CONFIG_CRYPTO_SHA512_ARM64_CE is not set
-# CONFIG_CRYPTO_SHA3_ARM64 is not set
-# CONFIG_CRYPTO_SM3_ARM64_CE is not set
-CONFIG_CRYPTO_SM4_ARM64_CE=m
-CONFIG_CRYPTO_GHASH_ARM64_CE=m
-# CONFIG_CRYPTO_CRCT10DIF_ARM64_CE is not set
-CONFIG_CRYPTO_CRC32_ARM64_CE=m
-CONFIG_CRYPTO_AES_ARM64=y
-CONFIG_CRYPTO_AES_ARM64_CE=m
-CONFIG_CRYPTO_AES_ARM64_CE_CCM=m
-CONFIG_CRYPTO_AES_ARM64_CE_BLK=m
-CONFIG_CRYPTO_AES_ARM64_NEON_BLK=m
-# CONFIG_CRYPTO_CHACHA20_NEON is not set
-# CONFIG_CRYPTO_AES_ARM64_BS is not set
-
-#
-# General architecture-dependent options
-#
-CONFIG_CRASH_CORE=y
-CONFIG_KEXEC_CORE=y
-CONFIG_OPROFILE_NMI_TIMER=y
-CONFIG_KPROBES=y
-CONFIG_JUMP_LABEL=y
-CONFIG_STATIC_KEYS_SELFTEST=y
-CONFIG_UPROBES=y
-CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
-CONFIG_KRETPROBES=y
-CONFIG_HAVE_KPROBES=y
-CONFIG_HAVE_KRETPROBES=y
-CONFIG_HAVE_NMI=y
-CONFIG_HAVE_ARCH_TRACEHOOK=y
-CONFIG_HAVE_DMA_CONTIGUOUS=y
-CONFIG_GENERIC_SMP_IDLE_THREAD=y
-CONFIG_GENERIC_IDLE_POLL_SETUP=y
-CONFIG_ARCH_HAS_FORTIFY_SOURCE=y
-CONFIG_ARCH_HAS_SET_MEMORY=y
-CONFIG_HAVE_ARCH_THREAD_STRUCT_WHITELIST=y
-CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
-CONFIG_HAVE_RSEQ=y
-CONFIG_HAVE_CLK=y
-CONFIG_HAVE_HW_BREAKPOINT=y
-CONFIG_HAVE_PERF_EVENTS_NMI=y
-CONFIG_HAVE_HARDLOCKUP_DETECTOR_PERF=y
-CONFIG_HAVE_PERF_REGS=y
-CONFIG_HAVE_PERF_USER_STACK_DUMP=y
-CONFIG_HAVE_ARCH_JUMP_LABEL=y
-CONFIG_HAVE_RCU_TABLE_FREE=y
-CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y
-CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y
-CONFIG_HAVE_CMPXCHG_LOCAL=y
-CONFIG_HAVE_CMPXCHG_DOUBLE=y
-CONFIG_ARCH_WANT_COMPAT_IPC_PARSE_VERSION=y
-CONFIG_HAVE_ARCH_SECCOMP_FILTER=y
-CONFIG_SECCOMP_FILTER=y
-CONFIG_HAVE_STACKPROTECTOR=y
-CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
-CONFIG_STACKPROTECTOR=y
-CONFIG_STACKPROTECTOR_STRONG=y
-CONFIG_HAVE_CONTEXT_TRACKING=y
-CONFIG_HAVE_VIRT_CPU_ACCOUNTING_GEN=y
-CONFIG_HAVE_IRQ_TIME_ACCOUNTING=y
-CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE=y
-CONFIG_HAVE_ARCH_HUGE_VMAP=y
-CONFIG_HAVE_MOD_ARCH_SPECIFIC=y
-CONFIG_MODULES_USE_ELF_RELA=y
-CONFIG_ARCH_HAS_ELF_RANDOMIZE=y
-CONFIG_HAVE_ARCH_MMAP_RND_BITS=y
-CONFIG_ARCH_MMAP_RND_BITS=18
-CONFIG_HAVE_ARCH_MMAP_RND_COMPAT_BITS=y
-CONFIG_ARCH_MMAP_RND_COMPAT_BITS=11
-CONFIG_CLONE_BACKWARDS=y
-CONFIG_OLD_SIGSUSPEND3=y
-CONFIG_COMPAT_OLD_SIGACTION=y
-CONFIG_COMPAT_32BIT_TIME=y
-CONFIG_HAVE_ARCH_VMAP_STACK=y
-# CONFIG_VMAP_STACK is not set
-CONFIG_ARCH_HAS_STRICT_KERNEL_RWX=y
-CONFIG_STRICT_KERNEL_RWX=y
-CONFIG_ARCH_HAS_STRICT_MODULE_RWX=y
-CONFIG_STRICT_MODULE_RWX=y
-CONFIG_HAVE_ARCH_PREL32_RELOCATIONS=y
-
-#
-# GCOV-based kernel profiling
-#
-# CONFIG_GCOV_KERNEL is not set
-CONFIG_ARCH_HAS_GCOV_PROFILE_ALL=y
-CONFIG_PLUGIN_HOSTCC=""
-CONFIG_HAVE_GCC_PLUGINS=y
-CONFIG_RT_MUTEXES=y
-CONFIG_BASE_SMALL=0
-CONFIG_MODULES=y
-CONFIG_MODULE_FORCE_LOAD=y
-CONFIG_MODULE_UNLOAD=y
-# CONFIG_MODULE_FORCE_UNLOAD is not set
-CONFIG_MODVERSIONS=y
-CONFIG_MODULE_SRCVERSION_ALL=y
-CONFIG_MODULE_SIG=y
-# CONFIG_MODULE_SIG_FORCE is not set
-CONFIG_MODULE_SIG_ALL=y
-# CONFIG_MODULE_SIG_SHA1 is not set
-# CONFIG_MODULE_SIG_SHA224 is not set
-CONFIG_MODULE_SIG_SHA256=y
-# CONFIG_MODULE_SIG_SHA384 is not set
-# CONFIG_MODULE_SIG_SHA512 is not set
-CONFIG_MODULE_SIG_HASH="sha256"
-# CONFIG_MODULE_COMPRESS is not set
-# CONFIG_TRIM_UNUSED_KSYMS is not set
-CONFIG_MODULES_TREE_LOOKUP=y
-CONFIG_BLOCK=y
-CONFIG_BLK_SCSI_REQUEST=y
-CONFIG_BLK_DEV_BSG=y
-CONFIG_BLK_DEV_BSGLIB=y
-CONFIG_BLK_DEV_INTEGRITY=y
-CONFIG_BLK_DEV_ZONED=y
-CONFIG_BLK_DEV_THROTTLING=y
-# CONFIG_BLK_DEV_THROTTLING_LOW is not set
-# CONFIG_BLK_CMDLINE_PARSER is not set
-CONFIG_BLK_WBT=y
-# CONFIG_BLK_CGROUP_IOLATENCY is not set
-# CONFIG_BLK_WBT_SQ is not set
-# CONFIG_BLK_CGROUP_IOCOST is not set
-CONFIG_BLK_WBT_MQ=y
-CONFIG_BLK_DEBUG_FS=y
-CONFIG_BLK_DEBUG_FS_ZONED=y
-# CONFIG_BLK_SED_OPAL is not set
-
-#
-# Partition Types
-#
-CONFIG_PARTITION_ADVANCED=y
-# CONFIG_ACORN_PARTITION is not set
-# CONFIG_AIX_PARTITION is not set
-CONFIG_OSF_PARTITION=y
-CONFIG_AMIGA_PARTITION=y
-# CONFIG_ATARI_PARTITION is not set
-CONFIG_MAC_PARTITION=y
-CONFIG_MSDOS_PARTITION=y
-CONFIG_BSD_DISKLABEL=y
-CONFIG_MINIX_SUBPARTITION=y
-CONFIG_SOLARIS_X86_PARTITION=y
-CONFIG_UNIXWARE_DISKLABEL=y
-# CONFIG_LDM_PARTITION is not set
-CONFIG_SGI_PARTITION=y
-# CONFIG_ULTRIX_PARTITION is not set
-CONFIG_SUN_PARTITION=y
-CONFIG_KARMA_PARTITION=y
-CONFIG_EFI_PARTITION=y
-# CONFIG_SYSV68_PARTITION is not set
-# CONFIG_CMDLINE_PARTITION is not set
-CONFIG_BLOCK_COMPAT=y
-CONFIG_BLK_MQ_PCI=y
-CONFIG_BLK_MQ_VIRTIO=y
-CONFIG_BLK_MQ_RDMA=y
-
-#
-# IO Schedulers
-#
-CONFIG_IOSCHED_NOOP=y
-CONFIG_IOSCHED_DEADLINE=y
-CONFIG_IOSCHED_CFQ=y
-CONFIG_CFQ_GROUP_IOSCHED=y
-CONFIG_DEFAULT_DEADLINE=y
-# CONFIG_DEFAULT_CFQ is not set
-# CONFIG_DEFAULT_NOOP is not set
-CONFIG_DEFAULT_IOSCHED="deadline"
-CONFIG_MQ_IOSCHED_DEADLINE=y
-CONFIG_MQ_IOSCHED_KYBER=y
-CONFIG_IOSCHED_BFQ=y
-CONFIG_BFQ_GROUP_IOSCHED=y
-CONFIG_PREEMPT_NOTIFIERS=y
-CONFIG_PADATA=y
-CONFIG_ASN1=y
-CONFIG_ARCH_INLINE_SPIN_TRYLOCK=y
-CONFIG_ARCH_INLINE_SPIN_TRYLOCK_BH=y
-CONFIG_ARCH_INLINE_SPIN_LOCK=y
-CONFIG_ARCH_INLINE_SPIN_LOCK_BH=y
-CONFIG_ARCH_INLINE_SPIN_LOCK_IRQ=y
-CONFIG_ARCH_INLINE_SPIN_LOCK_IRQSAVE=y
-CONFIG_ARCH_INLINE_SPIN_UNLOCK=y
-CONFIG_ARCH_INLINE_SPIN_UNLOCK_BH=y
-CONFIG_ARCH_INLINE_SPIN_UNLOCK_IRQ=y
-CONFIG_ARCH_INLINE_SPIN_UNLOCK_IRQRESTORE=y
-CONFIG_ARCH_INLINE_READ_LOCK=y
-CONFIG_ARCH_INLINE_READ_LOCK_BH=y
-CONFIG_ARCH_INLINE_READ_LOCK_IRQ=y
-CONFIG_ARCH_INLINE_READ_LOCK_IRQSAVE=y
-CONFIG_ARCH_INLINE_READ_UNLOCK=y
-CONFIG_ARCH_INLINE_READ_UNLOCK_BH=y
-CONFIG_ARCH_INLINE_READ_UNLOCK_IRQ=y
-CONFIG_ARCH_INLINE_READ_UNLOCK_IRQRESTORE=y
-CONFIG_ARCH_INLINE_WRITE_LOCK=y
-CONFIG_ARCH_INLINE_WRITE_LOCK_BH=y
-CONFIG_ARCH_INLINE_WRITE_LOCK_IRQ=y
-CONFIG_ARCH_INLINE_WRITE_LOCK_IRQSAVE=y
-CONFIG_ARCH_INLINE_WRITE_UNLOCK=y
-CONFIG_ARCH_INLINE_WRITE_UNLOCK_BH=y
-CONFIG_ARCH_INLINE_WRITE_UNLOCK_IRQ=y
-CONFIG_ARCH_INLINE_WRITE_UNLOCK_IRQRESTORE=y
-CONFIG_INLINE_SPIN_TRYLOCK=y
-CONFIG_INLINE_SPIN_TRYLOCK_BH=y
-CONFIG_INLINE_SPIN_LOCK=y
-CONFIG_INLINE_SPIN_LOCK_BH=y
-CONFIG_INLINE_SPIN_LOCK_IRQ=y
-CONFIG_INLINE_SPIN_LOCK_IRQSAVE=y
-CONFIG_INLINE_SPIN_UNLOCK_BH=y
-CONFIG_INLINE_SPIN_UNLOCK_IRQ=y
-CONFIG_INLINE_SPIN_UNLOCK_IRQRESTORE=y
-CONFIG_INLINE_READ_LOCK=y
-CONFIG_INLINE_READ_LOCK_BH=y
-CONFIG_INLINE_READ_LOCK_IRQ=y
-CONFIG_INLINE_READ_LOCK_IRQSAVE=y
-CONFIG_INLINE_READ_UNLOCK=y
-CONFIG_INLINE_READ_UNLOCK_BH=y
-CONFIG_INLINE_READ_UNLOCK_IRQ=y
-CONFIG_INLINE_READ_UNLOCK_IRQRESTORE=y
-CONFIG_INLINE_WRITE_LOCK=y
-CONFIG_INLINE_WRITE_LOCK_BH=y
-CONFIG_INLINE_WRITE_LOCK_IRQ=y
-CONFIG_INLINE_WRITE_LOCK_IRQSAVE=y
-CONFIG_INLINE_WRITE_UNLOCK=y
-CONFIG_INLINE_WRITE_UNLOCK_BH=y
-CONFIG_INLINE_WRITE_UNLOCK_IRQ=y
-CONFIG_INLINE_WRITE_UNLOCK_IRQRESTORE=y
-CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=y
-CONFIG_MUTEX_SPIN_ON_OWNER=y
-CONFIG_RWSEM_SPIN_ON_OWNER=y
-CONFIG_LOCK_SPIN_ON_OWNER=y
-CONFIG_ARCH_USE_QUEUED_SPINLOCKS=y
-CONFIG_QUEUED_SPINLOCKS=y
-CONFIG_ARCH_USE_QUEUED_RWLOCKS=y
-CONFIG_QUEUED_RWLOCKS=y
-CONFIG_ARCH_HAS_SYSCALL_WRAPPER=y
-CONFIG_FREEZER=y
-
-#
-# Executable file formats
-#
-CONFIG_BINFMT_ELF=y
-CONFIG_COMPAT_BINFMT_ELF=y
-CONFIG_ELFCORE=y
-CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y
-CONFIG_BINFMT_SCRIPT=y
-CONFIG_BINFMT_MISC=m
-CONFIG_COREDUMP=y
-
-#
-# Memory Management options
-#
-CONFIG_SELECT_MEMORY_MODEL=y
-CONFIG_SPARSEMEM_MANUAL=y
-CONFIG_SPARSEMEM=y
-CONFIG_NEED_MULTIPLE_NODES=y
-CONFIG_HAVE_MEMORY_PRESENT=y
-CONFIG_SPARSEMEM_EXTREME=y
-CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
-CONFIG_SPARSEMEM_VMEMMAP=y
-CONFIG_HAVE_MEMBLOCK=y
-CONFIG_HAVE_MEMBLOCK_NODE_MAP=y
-CONFIG_NO_BOOTMEM=y
-CONFIG_MEMORY_ISOLATION=y
-CONFIG_MEMORY_HOTPLUG=y
-CONFIG_MEMORY_HOTPLUG_SPARSE=y
-# CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE is not set
-CONFIG_SPLIT_PTLOCK_CPUS=4
-CONFIG_MEMORY_BALLOON=y
-CONFIG_BALLOON_COMPACTION=y
-CONFIG_COMPACTION=y
-CONFIG_MIGRATION=y
-CONFIG_PHYS_ADDR_T_64BIT=y
-CONFIG_MMU_NOTIFIER=y
-CONFIG_KSM=y
-CONFIG_DEFAULT_MMAP_MIN_ADDR=4096
-CONFIG_ARCH_SUPPORTS_MEMORY_FAILURE=y
-CONFIG_MEMORY_FAILURE=y
-CONFIG_HWPOISON_INJECT=m
-CONFIG_TRANSPARENT_HUGEPAGE=y
-CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS=y
-# CONFIG_TRANSPARENT_HUGEPAGE_MADVISE is not set
-CONFIG_TRANSPARENT_HUGE_PAGECACHE=y
-CONFIG_CLEANCACHE=y
-CONFIG_FRONTSWAP=y
-CONFIG_SHRINK_PAGECACHE=y
-CONFIG_CMA=y
-# CONFIG_CMA_DEBUG is not set
-# CONFIG_CMA_DEBUGFS is not set
-CONFIG_CMA_AREAS=7
-CONFIG_ZSWAP=y
-CONFIG_ZPOOL=y
-CONFIG_ZBUD=y
-# CONFIG_Z3FOLD is not set
-CONFIG_ZSMALLOC=y
-# CONFIG_PGTABLE_MAPPING is not set
-CONFIG_ZSMALLOC_STAT=y
-CONFIG_GENERIC_EARLY_IOREMAP=y
-# CONFIG_DEFERRED_STRUCT_PAGE_INIT is not set
-CONFIG_IDLE_PAGE_TRACKING=y
-# CONFIG_PERCPU_STATS is not set
-# CONFIG_GUP_BENCHMARK is not set
-CONFIG_ARCH_HAS_PTE_SPECIAL=y
-CONFIG_NET=y
-CONFIG_COMPAT_NETLINK_MESSAGES=y
-CONFIG_NET_INGRESS=y
-CONFIG_NET_EGRESS=y
-
-#
-# Networking options
-#
-CONFIG_PACKET=y
-CONFIG_PACKET_DIAG=m
-CONFIG_UNIX=y
-CONFIG_UNIX_DIAG=m
-CONFIG_TLS=m
-CONFIG_TLS_DEVICE=y
-CONFIG_XFRM=y
-CONFIG_XFRM_OFFLOAD=y
-CONFIG_XFRM_ALGO=y
-CONFIG_XFRM_USER=y
-# CONFIG_XFRM_INTERFACE is not set
-CONFIG_XFRM_SUB_POLICY=y
-CONFIG_XFRM_MIGRATE=y
-CONFIG_XFRM_STATISTICS=y
-CONFIG_XFRM_IPCOMP=m
-CONFIG_NET_KEY=m
-CONFIG_NET_KEY_MIGRATE=y
-# CONFIG_SMC is not set
-# CONFIG_XDP_SOCKETS is not set
-CONFIG_INET=y
-CONFIG_IP_MULTICAST=y
-CONFIG_IP_ADVANCED_ROUTER=y
-CONFIG_IP_FIB_TRIE_STATS=y
-CONFIG_IP_MULTIPLE_TABLES=y
-CONFIG_IP_ROUTE_MULTIPATH=y
-CONFIG_IP_ROUTE_VERBOSE=y
-CONFIG_IP_ROUTE_CLASSID=y
-# CONFIG_IP_PNP is not set
-CONFIG_NET_IPIP=m
-CONFIG_NET_IPGRE_DEMUX=m
-CONFIG_NET_IP_TUNNEL=m
-CONFIG_NET_IPGRE=m
-CONFIG_NET_IPGRE_BROADCAST=y
-CONFIG_IP_MROUTE_COMMON=y
-CONFIG_IP_MROUTE=y
-CONFIG_IP_MROUTE_MULTIPLE_TABLES=y
-CONFIG_IP_PIMSM_V1=y
-CONFIG_IP_PIMSM_V2=y
-CONFIG_SYN_COOKIES=y
-CONFIG_NET_IPVTI=m
-CONFIG_NET_UDP_TUNNEL=m
-# CONFIG_NET_FOU is not set
-# CONFIG_NET_FOU_IP_TUNNELS is not set
-CONFIG_INET_AH=m
-CONFIG_INET_ESP=m
-CONFIG_INET_ESP_OFFLOAD=m
-CONFIG_INET_IPCOMP=m
-CONFIG_INET_XFRM_TUNNEL=m
-CONFIG_INET_TUNNEL=m
-CONFIG_INET_XFRM_MODE_TRANSPORT=m
-CONFIG_INET_XFRM_MODE_TUNNEL=m
-CONFIG_INET_XFRM_MODE_BEET=m
-CONFIG_INET_DIAG=m
-CONFIG_INET_TCP_DIAG=m
-CONFIG_INET_UDP_DIAG=m
-CONFIG_INET_RAW_DIAG=m
-# CONFIG_INET_DIAG_DESTROY is not set
-CONFIG_TCP_CONG_ADVANCED=y
-CONFIG_TCP_CONG_BIC=m
-CONFIG_TCP_CONG_CUBIC=y
-CONFIG_TCP_CONG_WESTWOOD=m
-CONFIG_TCP_CONG_HTCP=m
-CONFIG_TCP_CONG_HSTCP=m
-CONFIG_TCP_CONG_HYBLA=m
-CONFIG_TCP_CONG_VEGAS=m
-CONFIG_TCP_CONG_NV=m
-CONFIG_TCP_CONG_SCALABLE=m
-CONFIG_TCP_CONG_LP=m
-CONFIG_TCP_CONG_VENO=m
-CONFIG_TCP_CONG_YEAH=m
-CONFIG_TCP_CONG_ILLINOIS=m
-CONFIG_TCP_CONG_DCTCP=m
-# CONFIG_TCP_CONG_CDG is not set
-CONFIG_TCP_CONG_BBR=m
-CONFIG_DEFAULT_CUBIC=y
-# CONFIG_DEFAULT_RENO is not set
-CONFIG_DEFAULT_TCP_CONG="cubic"
-CONFIG_TCP_MD5SIG=y
-CONFIG_IPV6=y
-CONFIG_IPV6_ROUTER_PREF=y
-CONFIG_IPV6_ROUTE_INFO=y
-CONFIG_IPV6_OPTIMISTIC_DAD=y
-CONFIG_INET6_AH=m
-CONFIG_INET6_ESP=m
-CONFIG_INET6_ESP_OFFLOAD=m
-CONFIG_INET6_IPCOMP=m
-CONFIG_IPV6_MIP6=m
-# CONFIG_IPV6_ILA is not set
-CONFIG_INET6_XFRM_TUNNEL=m
-CONFIG_INET6_TUNNEL=m
-CONFIG_INET6_XFRM_MODE_TRANSPORT=m
-CONFIG_INET6_XFRM_MODE_TUNNEL=m
-CONFIG_INET6_XFRM_MODE_BEET=m
-CONFIG_INET6_XFRM_MODE_ROUTEOPTIMIZATION=m
-CONFIG_IPV6_VTI=m
-CONFIG_IPV6_SIT=m
-CONFIG_IPV6_SIT_6RD=y
-CONFIG_IPV6_NDISC_NODETYPE=y
-CONFIG_IPV6_TUNNEL=m
-CONFIG_IPV6_GRE=m
-CONFIG_IPV6_MULTIPLE_TABLES=y
-# CONFIG_IPV6_SUBTREES is not set
-CONFIG_IPV6_MROUTE=y
-CONFIG_IPV6_MROUTE_MULTIPLE_TABLES=y
-CONFIG_IPV6_PIMSM_V2=y
-# CONFIG_IPV6_SEG6_LWTUNNEL is not set
-# CONFIG_IPV6_SEG6_HMAC is not set
-CONFIG_NETLABEL=y
-CONFIG_NETWORK_SECMARK=y
-CONFIG_NET_PTP_CLASSIFY=y
-CONFIG_NETWORK_PHY_TIMESTAMPING=y
-CONFIG_NETFILTER=y
-CONFIG_NETFILTER_ADVANCED=y
-CONFIG_BRIDGE_NETFILTER=m
-
-#
-# Core Netfilter Configuration
-#
-CONFIG_NETFILTER_INGRESS=y
-CONFIG_NETFILTER_NETLINK=m
-CONFIG_NETFILTER_FAMILY_BRIDGE=y
-CONFIG_NETFILTER_FAMILY_ARP=y
-CONFIG_NETFILTER_NETLINK_ACCT=m
-CONFIG_NETFILTER_NETLINK_QUEUE=m
-CONFIG_NETFILTER_NETLINK_LOG=m
-CONFIG_NETFILTER_NETLINK_OSF=m
-CONFIG_NF_CONNTRACK=m
-CONFIG_NF_LOG_COMMON=m
-CONFIG_NF_LOG_NETDEV=m
-CONFIG_NETFILTER_CONNCOUNT=m
-CONFIG_NF_CONNTRACK_MARK=y
-CONFIG_NF_CONNTRACK_SECMARK=y
-CONFIG_NF_CONNTRACK_ZONES=y
-CONFIG_NF_CONNTRACK_PROCFS=y
-CONFIG_NF_CONNTRACK_EVENTS=y
-CONFIG_NF_CONNTRACK_TIMEOUT=y
-CONFIG_NF_CONNTRACK_TIMESTAMP=y
-CONFIG_NF_CONNTRACK_LABELS=y
-CONFIG_NF_CT_PROTO_DCCP=y
-CONFIG_NF_CT_PROTO_GRE=m
-CONFIG_NF_CT_PROTO_SCTP=y
-CONFIG_NF_CT_PROTO_UDPLITE=y
-CONFIG_NF_CONNTRACK_AMANDA=m
-CONFIG_NF_CONNTRACK_FTP=m
-CONFIG_NF_CONNTRACK_H323=m
-CONFIG_NF_CONNTRACK_IRC=m
-CONFIG_NF_CONNTRACK_BROADCAST=m
-CONFIG_NF_CONNTRACK_NETBIOS_NS=m
-CONFIG_NF_CONNTRACK_SNMP=m
-CONFIG_NF_CONNTRACK_PPTP=m
-CONFIG_NF_CONNTRACK_SANE=m
-CONFIG_NF_CONNTRACK_SIP=m
-CONFIG_NF_CONNTRACK_TFTP=m
-CONFIG_NF_CT_NETLINK=m
-CONFIG_NF_CT_NETLINK_TIMEOUT=m
-CONFIG_NF_CT_NETLINK_HELPER=m
-CONFIG_NETFILTER_NETLINK_GLUE_CT=y
-CONFIG_NF_NAT=m
-CONFIG_NF_NAT_NEEDED=y
-CONFIG_NF_NAT_PROTO_DCCP=y
-CONFIG_NF_NAT_PROTO_UDPLITE=y
-CONFIG_NF_NAT_PROTO_SCTP=y
-CONFIG_NF_NAT_AMANDA=m
-CONFIG_NF_NAT_FTP=m
-CONFIG_NF_NAT_IRC=m
-CONFIG_NF_NAT_SIP=m
-CONFIG_NF_NAT_TFTP=m
-CONFIG_NF_NAT_REDIRECT=y
-CONFIG_NETFILTER_SYNPROXY=m
-CONFIG_NF_TABLES=m
-CONFIG_NF_TABLES_SET=m
-CONFIG_NF_TABLES_INET=y
-CONFIG_NF_TABLES_NETDEV=y
-CONFIG_NFT_NUMGEN=m
-CONFIG_NFT_CT=m
-CONFIG_NFT_COUNTER=m
-# CONFIG_NFT_CONNLIMIT is not set
-CONFIG_NFT_LOG=m
-CONFIG_NFT_LIMIT=m
-CONFIG_NFT_MASQ=m
-CONFIG_NFT_REDIR=m
-CONFIG_NFT_NAT=m
-# CONFIG_NFT_TUNNEL is not set
-CONFIG_NFT_OBJREF=m
-CONFIG_NFT_QUEUE=m
-CONFIG_NFT_QUOTA=m
-CONFIG_NFT_REJECT=m
-CONFIG_NFT_REJECT_INET=m
-CONFIG_NFT_COMPAT=m
-CONFIG_NFT_HASH=m
-CONFIG_NFT_FIB=m
-CONFIG_NFT_FIB_INET=m
-# CONFIG_NFT_SOCKET is not set
-# CONFIG_NFT_OSF is not set
-# CONFIG_NFT_TPROXY is not set
-CONFIG_NF_DUP_NETDEV=m
-CONFIG_NFT_DUP_NETDEV=m
-CONFIG_NFT_FWD_NETDEV=m
-CONFIG_NFT_FIB_NETDEV=m
-# CONFIG_NF_FLOW_TABLE is not set
-CONFIG_NETFILTER_XTABLES=y
-
-#
-# Xtables combined modules
-#
-CONFIG_NETFILTER_XT_MARK=m
-CONFIG_NETFILTER_XT_CONNMARK=m
-CONFIG_NETFILTER_XT_SET=m
-
-#
-# Xtables targets
-#
-CONFIG_NETFILTER_XT_TARGET_AUDIT=m
-CONFIG_NETFILTER_XT_TARGET_CHECKSUM=m
-CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m
-CONFIG_NETFILTER_XT_TARGET_CONNMARK=m
-CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=m
-CONFIG_NETFILTER_XT_TARGET_CT=m
-CONFIG_NETFILTER_XT_TARGET_DSCP=m
-CONFIG_NETFILTER_XT_TARGET_HL=m
-CONFIG_NETFILTER_XT_TARGET_HMARK=m
-CONFIG_NETFILTER_XT_TARGET_IDLETIMER=m
-CONFIG_NETFILTER_XT_TARGET_LED=m
-CONFIG_NETFILTER_XT_TARGET_LOG=m
-CONFIG_NETFILTER_XT_TARGET_MARK=m
-CONFIG_NETFILTER_XT_NAT=m
-CONFIG_NETFILTER_XT_TARGET_NETMAP=m
-CONFIG_NETFILTER_XT_TARGET_NFLOG=m
-CONFIG_NETFILTER_XT_TARGET_NFQUEUE=m
-CONFIG_NETFILTER_XT_TARGET_NOTRACK=m
-CONFIG_NETFILTER_XT_TARGET_RATEEST=m
-CONFIG_NETFILTER_XT_TARGET_REDIRECT=m
-CONFIG_NETFILTER_XT_TARGET_TEE=m
-CONFIG_NETFILTER_XT_TARGET_TPROXY=m
-CONFIG_NETFILTER_XT_TARGET_TRACE=m
-CONFIG_NETFILTER_XT_TARGET_SECMARK=m
-CONFIG_NETFILTER_XT_TARGET_TCPMSS=m
-CONFIG_NETFILTER_XT_TARGET_TCPOPTSTRIP=m
-
-#
-# Xtables matches
-#
-CONFIG_NETFILTER_XT_MATCH_ADDRTYPE=m
-CONFIG_NETFILTER_XT_MATCH_BPF=m
-CONFIG_NETFILTER_XT_MATCH_CGROUP=m
-CONFIG_NETFILTER_XT_MATCH_CLUSTER=m
-CONFIG_NETFILTER_XT_MATCH_COMMENT=m
-CONFIG_NETFILTER_XT_MATCH_CONNBYTES=m
-CONFIG_NETFILTER_XT_MATCH_CONNLABEL=m
-CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=m
-CONFIG_NETFILTER_XT_MATCH_CONNMARK=m
-CONFIG_NETFILTER_XT_MATCH_CONNTRACK=m
-CONFIG_NETFILTER_XT_MATCH_CPU=m
-CONFIG_NETFILTER_XT_MATCH_DCCP=m
-CONFIG_NETFILTER_XT_MATCH_DEVGROUP=m
-CONFIG_NETFILTER_XT_MATCH_DSCP=m
-CONFIG_NETFILTER_XT_MATCH_ECN=m
-CONFIG_NETFILTER_XT_MATCH_ESP=m
-CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=m
-CONFIG_NETFILTER_XT_MATCH_HELPER=m
-CONFIG_NETFILTER_XT_MATCH_HL=m
-CONFIG_NETFILTER_XT_MATCH_IPCOMP=m
-CONFIG_NETFILTER_XT_MATCH_IPRANGE=m
-CONFIG_NETFILTER_XT_MATCH_IPVS=m
-CONFIG_NETFILTER_XT_MATCH_L2TP=m
-CONFIG_NETFILTER_XT_MATCH_LENGTH=m
-CONFIG_NETFILTER_XT_MATCH_LIMIT=m
-CONFIG_NETFILTER_XT_MATCH_MAC=m
-CONFIG_NETFILTER_XT_MATCH_MARK=m
-CONFIG_NETFILTER_XT_MATCH_MULTIPORT=m
-CONFIG_NETFILTER_XT_MATCH_NFACCT=m
-CONFIG_NETFILTER_XT_MATCH_OSF=m
-CONFIG_NETFILTER_XT_MATCH_OWNER=m
-CONFIG_NETFILTER_XT_MATCH_POLICY=m
-CONFIG_NETFILTER_XT_MATCH_PHYSDEV=m
-CONFIG_NETFILTER_XT_MATCH_PKTTYPE=m
-CONFIG_NETFILTER_XT_MATCH_QUOTA=m
-CONFIG_NETFILTER_XT_MATCH_RATEEST=m
-CONFIG_NETFILTER_XT_MATCH_REALM=m
-CONFIG_NETFILTER_XT_MATCH_RECENT=m
-CONFIG_NETFILTER_XT_MATCH_SCTP=m
-CONFIG_NETFILTER_XT_MATCH_SOCKET=m
-CONFIG_NETFILTER_XT_MATCH_STATE=m
-CONFIG_NETFILTER_XT_MATCH_STATISTIC=m
-CONFIG_NETFILTER_XT_MATCH_STRING=m
-CONFIG_NETFILTER_XT_MATCH_TCPMSS=m
-CONFIG_NETFILTER_XT_MATCH_TIME=m
-CONFIG_NETFILTER_XT_MATCH_U32=m
-CONFIG_IP_SET=m
-CONFIG_IP_SET_MAX=256
-CONFIG_IP_SET_BITMAP_IP=m
-CONFIG_IP_SET_BITMAP_IPMAC=m
-CONFIG_IP_SET_BITMAP_PORT=m
-CONFIG_IP_SET_HASH_IP=m
-CONFIG_IP_SET_HASH_IPMARK=m
-CONFIG_IP_SET_HASH_IPPORT=m
-CONFIG_IP_SET_HASH_IPPORTIP=m
-CONFIG_IP_SET_HASH_IPPORTNET=m
-# CONFIG_IP_SET_HASH_IPMAC is not set
-CONFIG_IP_SET_HASH_MAC=m
-CONFIG_IP_SET_HASH_NETPORTNET=m
-CONFIG_IP_SET_HASH_NET=m
-CONFIG_IP_SET_HASH_NETNET=m
-CONFIG_IP_SET_HASH_NETPORT=m
-CONFIG_IP_SET_HASH_NETIFACE=m
-CONFIG_IP_SET_LIST_SET=m
-CONFIG_IP_VS=m
-CONFIG_IP_VS_IPV6=y
-# CONFIG_IP_VS_DEBUG is not set
-CONFIG_IP_VS_TAB_BITS=12
-
-#
-# IPVS transport protocol load balancing support
-#
-CONFIG_IP_VS_PROTO_TCP=y
-CONFIG_IP_VS_PROTO_UDP=y
-CONFIG_IP_VS_PROTO_AH_ESP=y
-CONFIG_IP_VS_PROTO_ESP=y
-CONFIG_IP_VS_PROTO_AH=y
-CONFIG_IP_VS_PROTO_SCTP=y
-
-#
-# IPVS scheduler
-#
-CONFIG_IP_VS_RR=m
-CONFIG_IP_VS_WRR=m
-CONFIG_IP_VS_LC=m
-CONFIG_IP_VS_WLC=m
-CONFIG_IP_VS_FO=m
-CONFIG_IP_VS_OVF=m
-CONFIG_IP_VS_LBLC=m
-CONFIG_IP_VS_LBLCR=m
-CONFIG_IP_VS_DH=m
-CONFIG_IP_VS_SH=m
-# CONFIG_IP_VS_MH is not set
-CONFIG_IP_VS_SED=m
-CONFIG_IP_VS_NQ=m
-
-#
-# IPVS SH scheduler
-#
-CONFIG_IP_VS_SH_TAB_BITS=8
-
-#
-# IPVS MH scheduler
-#
-CONFIG_IP_VS_MH_TAB_INDEX=12
-
-#
-# IPVS application helper
-#
-CONFIG_IP_VS_FTP=m
-CONFIG_IP_VS_NFCT=y
-CONFIG_IP_VS_PE_SIP=m
-
-#
-# IP: Netfilter Configuration
-#
-CONFIG_NF_DEFRAG_IPV4=m
-CONFIG_NF_SOCKET_IPV4=m
-CONFIG_NF_TPROXY_IPV4=m
-CONFIG_NF_TABLES_IPV4=y
-CONFIG_NFT_CHAIN_ROUTE_IPV4=m
-CONFIG_NFT_REJECT_IPV4=m
-CONFIG_NFT_DUP_IPV4=m
-CONFIG_NFT_FIB_IPV4=m
-CONFIG_NF_TABLES_ARP=y
-CONFIG_NF_DUP_IPV4=m
-CONFIG_NF_LOG_ARP=m
-CONFIG_NF_LOG_IPV4=m
-CONFIG_NF_REJECT_IPV4=m
-CONFIG_NF_NAT_IPV4=m
-CONFIG_NF_NAT_MASQUERADE_IPV4=y
-CONFIG_NFT_CHAIN_NAT_IPV4=m
-CONFIG_NFT_MASQ_IPV4=m
-CONFIG_NFT_REDIR_IPV4=m
-CONFIG_NF_NAT_SNMP_BASIC=m
-CONFIG_NF_NAT_PROTO_GRE=m
-CONFIG_NF_NAT_PPTP=m
-CONFIG_NF_NAT_H323=m
-CONFIG_IP_NF_IPTABLES=m
-CONFIG_IP_NF_MATCH_AH=m
-CONFIG_IP_NF_MATCH_ECN=m
-CONFIG_IP_NF_MATCH_RPFILTER=m
-CONFIG_IP_NF_MATCH_TTL=m
-CONFIG_IP_NF_FILTER=m
-CONFIG_IP_NF_TARGET_REJECT=m
-CONFIG_IP_NF_TARGET_SYNPROXY=m
-CONFIG_IP_NF_NAT=m
-CONFIG_IP_NF_TARGET_MASQUERADE=m
-CONFIG_IP_NF_TARGET_NETMAP=m
-CONFIG_IP_NF_TARGET_REDIRECT=m
-CONFIG_IP_NF_MANGLE=m
-# CONFIG_IP_NF_TARGET_CLUSTERIP is not set
-CONFIG_IP_NF_TARGET_ECN=m
-CONFIG_IP_NF_TARGET_TTL=m
-CONFIG_IP_NF_RAW=m
-CONFIG_IP_NF_SECURITY=m
-CONFIG_IP_NF_ARPTABLES=m
-CONFIG_IP_NF_ARPFILTER=m
-CONFIG_IP_NF_ARP_MANGLE=m
-
-#
-# IPv6: Netfilter Configuration
-#
-CONFIG_NF_SOCKET_IPV6=m
-CONFIG_NF_TPROXY_IPV6=m
-CONFIG_NF_TABLES_IPV6=y
-CONFIG_NFT_CHAIN_ROUTE_IPV6=m
-CONFIG_NFT_CHAIN_NAT_IPV6=m
-CONFIG_NFT_MASQ_IPV6=m
-CONFIG_NFT_REDIR_IPV6=m
-CONFIG_NFT_REJECT_IPV6=m
-CONFIG_NFT_DUP_IPV6=m
-CONFIG_NFT_FIB_IPV6=m
-CONFIG_NF_DUP_IPV6=m
-CONFIG_NF_REJECT_IPV6=m
-CONFIG_NF_LOG_IPV6=m
-CONFIG_NF_NAT_IPV6=m
-CONFIG_NF_NAT_MASQUERADE_IPV6=y
-CONFIG_IP6_NF_IPTABLES=m
-CONFIG_IP6_NF_MATCH_AH=m
-CONFIG_IP6_NF_MATCH_EUI64=m
-CONFIG_IP6_NF_MATCH_FRAG=m
-CONFIG_IP6_NF_MATCH_OPTS=m
-CONFIG_IP6_NF_MATCH_HL=m
-CONFIG_IP6_NF_MATCH_IPV6HEADER=m
-CONFIG_IP6_NF_MATCH_MH=m
-CONFIG_IP6_NF_MATCH_RPFILTER=m
-CONFIG_IP6_NF_MATCH_RT=m
-# CONFIG_IP6_NF_MATCH_SRH is not set
-# CONFIG_IP6_NF_TARGET_HL is not set
-CONFIG_IP6_NF_FILTER=m
-CONFIG_IP6_NF_TARGET_REJECT=m
-CONFIG_IP6_NF_TARGET_SYNPROXY=m
-CONFIG_IP6_NF_MANGLE=m
-CONFIG_IP6_NF_RAW=m
-CONFIG_IP6_NF_SECURITY=m
-CONFIG_IP6_NF_NAT=m
-CONFIG_IP6_NF_TARGET_MASQUERADE=m
-CONFIG_IP6_NF_TARGET_NPT=m
-CONFIG_NF_DEFRAG_IPV6=m
-CONFIG_NF_TABLES_BRIDGE=y
-CONFIG_NFT_BRIDGE_REJECT=m
-CONFIG_NF_LOG_BRIDGE=m
-CONFIG_BRIDGE_NF_EBTABLES=m
-CONFIG_BRIDGE_EBT_BROUTE=m
-CONFIG_BRIDGE_EBT_T_FILTER=m
-CONFIG_BRIDGE_EBT_T_NAT=m
-CONFIG_BRIDGE_EBT_802_3=m
-CONFIG_BRIDGE_EBT_AMONG=m
-CONFIG_BRIDGE_EBT_ARP=m
-CONFIG_BRIDGE_EBT_IP=m
-CONFIG_BRIDGE_EBT_IP6=m
-CONFIG_BRIDGE_EBT_LIMIT=m
-CONFIG_BRIDGE_EBT_MARK=m
-CONFIG_BRIDGE_EBT_PKTTYPE=m
-CONFIG_BRIDGE_EBT_STP=m
-CONFIG_BRIDGE_EBT_VLAN=m
-CONFIG_BRIDGE_EBT_ARPREPLY=m
-CONFIG_BRIDGE_EBT_DNAT=m
-CONFIG_BRIDGE_EBT_MARK_T=m
-CONFIG_BRIDGE_EBT_REDIRECT=m
-CONFIG_BRIDGE_EBT_SNAT=m
-CONFIG_BRIDGE_EBT_LOG=m
-CONFIG_BRIDGE_EBT_NFLOG=m
-# CONFIG_BPFILTER is not set
-# CONFIG_IP_DCCP is not set
-CONFIG_IP_SCTP=m
-# CONFIG_SCTP_DBG_OBJCNT is not set
-# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_MD5 is not set
-CONFIG_SCTP_DEFAULT_COOKIE_HMAC_SHA1=y
-# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_NONE is not set
-CONFIG_SCTP_COOKIE_HMAC_MD5=y
-CONFIG_SCTP_COOKIE_HMAC_SHA1=y
-CONFIG_INET_SCTP_DIAG=m
-# CONFIG_RDS is not set
-CONFIG_TIPC=m
-# CONFIG_TIPC_MEDIA_IB is not set
-CONFIG_TIPC_MEDIA_UDP=y
-CONFIG_TIPC_DIAG=m
-CONFIG_ATM=m
-CONFIG_ATM_CLIP=m
-# CONFIG_ATM_CLIP_NO_ICMP is not set
-CONFIG_ATM_LANE=m
-# CONFIG_ATM_MPOA is not set
-CONFIG_ATM_BR2684=m
-# CONFIG_ATM_BR2684_IPFILTER is not set
-CONFIG_L2TP=m
-CONFIG_L2TP_DEBUGFS=m
-CONFIG_L2TP_V3=y
-CONFIG_L2TP_IP=m
-CONFIG_L2TP_ETH=m
-CONFIG_STP=m
-CONFIG_GARP=m
-CONFIG_MRP=m
-CONFIG_BRIDGE=m
-CONFIG_BRIDGE_IGMP_SNOOPING=y
-CONFIG_BRIDGE_VLAN_FILTERING=y
-CONFIG_HAVE_NET_DSA=y
-# CONFIG_NET_DSA is not set
-CONFIG_VLAN_8021Q=m
-CONFIG_VLAN_8021Q_GVRP=y
-CONFIG_VLAN_8021Q_MVRP=y
-# CONFIG_DECNET is not set
-CONFIG_LLC=m
-# CONFIG_LLC2 is not set
-# CONFIG_ATALK is not set
-# CONFIG_X25 is not set
-# CONFIG_LAPB is not set
-# CONFIG_PHONET is not set
-CONFIG_6LOWPAN=m
-# CONFIG_6LOWPAN_DEBUGFS is not set
-# CONFIG_6LOWPAN_NHC is not set
-CONFIG_IEEE802154=m
-# CONFIG_IEEE802154_NL802154_EXPERIMENTAL is not set
-CONFIG_IEEE802154_SOCKET=m
-# CONFIG_IEEE802154_6LOWPAN is not set
-CONFIG_MAC802154=m
-CONFIG_NET_SCHED=y
-
-#
-# Queueing/Scheduling
-#
-CONFIG_NET_SCH_CBQ=m
-CONFIG_NET_SCH_HTB=m
-CONFIG_NET_SCH_HFSC=m
-CONFIG_NET_SCH_ATM=m
-CONFIG_NET_SCH_PRIO=m
-CONFIG_NET_SCH_MULTIQ=m
-CONFIG_NET_SCH_RED=m
-CONFIG_NET_SCH_SFB=m
-CONFIG_NET_SCH_SFQ=m
-CONFIG_NET_SCH_TEQL=m
-CONFIG_NET_SCH_TBF=m
-# CONFIG_NET_SCH_CBS is not set
-# CONFIG_NET_SCH_ETF is not set
-CONFIG_NET_SCH_GRED=m
-CONFIG_NET_SCH_DSMARK=m
-CONFIG_NET_SCH_NETEM=m
-CONFIG_NET_SCH_DRR=m
-CONFIG_NET_SCH_MQPRIO=m
-# CONFIG_NET_SCH_SKBPRIO is not set
-CONFIG_NET_SCH_CHOKE=m
-CONFIG_NET_SCH_QFQ=m
-CONFIG_NET_SCH_CODEL=m
-CONFIG_NET_SCH_FQ_CODEL=m
-# CONFIG_NET_SCH_CAKE is not set
-CONFIG_NET_SCH_FQ=m
-CONFIG_NET_SCH_HHF=m
-CONFIG_NET_SCH_PIE=m
-CONFIG_NET_SCH_INGRESS=m
-CONFIG_NET_SCH_PLUG=m
-CONFIG_NET_SCH_DEFAULT=y
-# CONFIG_DEFAULT_FQ is not set
-# CONFIG_DEFAULT_CODEL is not set
-CONFIG_DEFAULT_FQ_CODEL=y
-# CONFIG_DEFAULT_SFQ is not set
-# CONFIG_DEFAULT_PFIFO_FAST is not set
-CONFIG_DEFAULT_NET_SCH="fq_codel"
-
-#
-# Classification
-#
-CONFIG_NET_CLS=y
-CONFIG_NET_CLS_BASIC=m
-CONFIG_NET_CLS_TCINDEX=m
-CONFIG_NET_CLS_ROUTE4=m
-CONFIG_NET_CLS_FW=m
-CONFIG_NET_CLS_U32=m
-CONFIG_CLS_U32_PERF=y
-CONFIG_CLS_U32_MARK=y
-CONFIG_NET_CLS_RSVP=m
-CONFIG_NET_CLS_RSVP6=m
-CONFIG_NET_CLS_FLOW=m
-CONFIG_NET_CLS_CGROUP=y
-CONFIG_NET_CLS_BPF=m
-CONFIG_NET_CLS_FLOWER=m
-CONFIG_NET_CLS_MATCHALL=m
-CONFIG_NET_EMATCH=y
-CONFIG_NET_EMATCH_STACK=32
-CONFIG_NET_EMATCH_CMP=m
-CONFIG_NET_EMATCH_NBYTE=m
-CONFIG_NET_EMATCH_U32=m
-CONFIG_NET_EMATCH_META=m
-CONFIG_NET_EMATCH_TEXT=m
-# CONFIG_NET_EMATCH_CANID is not set
-CONFIG_NET_EMATCH_IPSET=m
-# CONFIG_NET_EMATCH_IPT is not set
-CONFIG_NET_CLS_ACT=y
-CONFIG_NET_ACT_POLICE=m
-CONFIG_NET_ACT_GACT=m
-CONFIG_GACT_PROB=y
-CONFIG_NET_ACT_MIRRED=m
-CONFIG_NET_ACT_SAMPLE=m
-# CONFIG_NET_ACT_IPT is not set
-CONFIG_NET_ACT_NAT=m
-CONFIG_NET_ACT_PEDIT=m
-CONFIG_NET_ACT_SIMP=m
-CONFIG_NET_ACT_SKBEDIT=m
-CONFIG_NET_ACT_CSUM=m
-CONFIG_NET_ACT_VLAN=m
-CONFIG_NET_ACT_BPF=m
-# CONFIG_NET_ACT_CONNMARK is not set
-CONFIG_NET_ACT_SKBMOD=m
-# CONFIG_NET_ACT_IFE is not set
-CONFIG_NET_ACT_TUNNEL_KEY=m
-CONFIG_NET_CLS_IND=y
-CONFIG_NET_SCH_FIFO=y
-CONFIG_DCB=y
-CONFIG_DNS_RESOLVER=m
-# CONFIG_BATMAN_ADV is not set
-CONFIG_OPENVSWITCH=m
-CONFIG_OPENVSWITCH_GRE=m
-CONFIG_OPENVSWITCH_VXLAN=m
-CONFIG_OPENVSWITCH_GENEVE=m
-CONFIG_VSOCKETS=m
-CONFIG_VSOCKETS_DIAG=m
-CONFIG_VIRTIO_VSOCKETS=m
-CONFIG_VIRTIO_VSOCKETS_COMMON=m
-CONFIG_NETLINK_DIAG=m
-CONFIG_MPLS=y
-CONFIG_NET_MPLS_GSO=m
-# CONFIG_MPLS_ROUTING is not set
-CONFIG_NET_NSH=m
-# CONFIG_HSR is not set
-CONFIG_NET_SWITCHDEV=y
-CONFIG_NET_L3_MASTER_DEV=y
-# CONFIG_QRTR is not set
-# CONFIG_NET_NCSI is not set
-CONFIG_RPS=y
-CONFIG_RFS_ACCEL=y
-CONFIG_XPS=y
-CONFIG_CGROUP_NET_PRIO=y
-CONFIG_CGROUP_NET_CLASSID=y
-CONFIG_NET_RX_BUSY_POLL=y
-CONFIG_BQL=y
-CONFIG_BPF_JIT=y
-# CONFIG_BPF_STREAM_PARSER is not set
-CONFIG_NET_FLOW_LIMIT=y
-
-#
-# Network testing
-#
-CONFIG_NET_PKTGEN=m
-CONFIG_NET_DROP_MONITOR=m
-# CONFIG_HAMRADIO is not set
-CONFIG_CAN=m
-CONFIG_CAN_RAW=m
-CONFIG_CAN_BCM=m
-CONFIG_CAN_GW=m
-# CONFIG_CAN_J1939 is not set
-
-#
-# CAN Device Drivers
-#
-CONFIG_CAN_VCAN=m
-# CONFIG_CAN_VXCAN is not set
-CONFIG_CAN_SLCAN=m
-CONFIG_CAN_DEV=m
-CONFIG_CAN_CALC_BITTIMING=y
-# CONFIG_CAN_GRCAN is not set
-# CONFIG_CAN_XILINXCAN is not set
-CONFIG_CAN_C_CAN=m
-CONFIG_CAN_C_CAN_PLATFORM=m
-CONFIG_CAN_C_CAN_PCI=m
-CONFIG_CAN_CC770=m
-# CONFIG_CAN_CC770_ISA is not set
-CONFIG_CAN_CC770_PLATFORM=m
-# CONFIG_CAN_IFI_CANFD is not set
-# CONFIG_CAN_M_CAN is not set
-# CONFIG_CAN_PEAK_PCIEFD is not set
-CONFIG_CAN_SJA1000=m
-# CONFIG_CAN_SJA1000_ISA is not set
-CONFIG_CAN_SJA1000_PLATFORM=m
-CONFIG_CAN_EMS_PCI=m
-CONFIG_CAN_PEAK_PCI=m
-CONFIG_CAN_PEAK_PCIEC=y
-CONFIG_CAN_KVASER_PCI=m
-CONFIG_CAN_PLX_PCI=m
-CONFIG_CAN_SOFTING=m
-
-#
-# CAN SPI interfaces
-#
-# CONFIG_CAN_HI311X is not set
-# CONFIG_CAN_MCP251X is not set
-
-#
-# CAN USB interfaces
-#
-CONFIG_CAN_8DEV_USB=m
-CONFIG_CAN_EMS_USB=m
-CONFIG_CAN_ESD_USB2=m
-# CONFIG_CAN_GS_USB is not set
-CONFIG_CAN_KVASER_USB=m
-# CONFIG_CAN_MCBA_USB is not set
-CONFIG_CAN_PEAK_USB=m
-# CONFIG_CAN_UCAN is not set
-# CONFIG_CAN_DEBUG_DEVICES is not set
-# CONFIG_BT is not set
-# CONFIG_AF_RXRPC is not set
-# CONFIG_AF_KCM is not set
-CONFIG_STREAM_PARSER=m
-CONFIG_FIB_RULES=y
-CONFIG_WIRELESS=y
-CONFIG_WEXT_CORE=y
-CONFIG_WEXT_PROC=y
-CONFIG_CFG80211=m
-# CONFIG_NL80211_TESTMODE is not set
-# CONFIG_CFG80211_DEVELOPER_WARNINGS is not set
-# CONFIG_CFG80211_CERTIFICATION_ONUS is not set
-CONFIG_CFG80211_REQUIRE_SIGNED_REGDB=y
-CONFIG_CFG80211_USE_KERNEL_REGDB_KEYS=y
-CONFIG_CFG80211_DEFAULT_PS=y
-# CONFIG_CFG80211_DEBUGFS is not set
-CONFIG_CFG80211_CRDA_SUPPORT=y
-CONFIG_CFG80211_WEXT=y
-CONFIG_MAC80211=m
-CONFIG_MAC80211_HAS_RC=y
-CONFIG_MAC80211_RC_MINSTREL=y
-CONFIG_MAC80211_RC_MINSTREL_HT=y
-# CONFIG_MAC80211_RC_MINSTREL_VHT is not set
-CONFIG_MAC80211_RC_DEFAULT_MINSTREL=y
-CONFIG_MAC80211_RC_DEFAULT="minstrel_ht"
-# CONFIG_MAC80211_MESH is not set
-CONFIG_MAC80211_LEDS=y
-CONFIG_MAC80211_DEBUGFS=y
-# CONFIG_MAC80211_MESSAGE_TRACING is not set
-# CONFIG_MAC80211_DEBUG_MENU is not set
-CONFIG_MAC80211_STA_HASH_MAX_SIZE=0
-# CONFIG_WIMAX is not set
-CONFIG_RFKILL=m
-CONFIG_RFKILL_LEDS=y
-CONFIG_RFKILL_INPUT=y
-CONFIG_RFKILL_GPIO=m
-# CONFIG_NET_9P is not set
-# CONFIG_CAIF is not set
-CONFIG_CEPH_LIB=m
-# CONFIG_CEPH_LIB_PRETTYDEBUG is not set
-CONFIG_CEPH_LIB_USE_DNS_RESOLVER=y
-# CONFIG_NFC is not set
-CONFIG_PSAMPLE=m
-# CONFIG_NET_IFE is not set
-CONFIG_LWTUNNEL=y
-CONFIG_LWTUNNEL_BPF=y
-CONFIG_DST_CACHE=y
-CONFIG_GRO_CELLS=y
-CONFIG_SOCK_VALIDATE_XMIT=y
-CONFIG_NET_DEVLINK=m
-CONFIG_MAY_USE_DEVLINK=m
-CONFIG_PAGE_POOL=y
-CONFIG_FAILOVER=m
-CONFIG_HAVE_EBPF_JIT=y
-
-#
-# Device Drivers
-#
-CONFIG_ARM_AMBA=y
-
-#
-# Generic Driver Options
-#
-# CONFIG_UEVENT_HELPER is not set
-CONFIG_DEVTMPFS=y
-CONFIG_DEVTMPFS_MOUNT=y
-CONFIG_STANDALONE=y
-CONFIG_PREVENT_FIRMWARE_BUILD=y
-
-#
-# Firmware loader
-#
-CONFIG_FW_LOADER=y
-CONFIG_EXTRA_FIRMWARE=""
-# CONFIG_FW_LOADER_USER_HELPER is not set
-CONFIG_WANT_DEV_COREDUMP=y
-# CONFIG_ALLOW_DEV_COREDUMP is not set
-# CONFIG_DEBUG_DRIVER is not set
-# CONFIG_DEBUG_DEVRES is not set
-# CONFIG_DEBUG_TEST_DRIVER_REMOVE is not set
-# CONFIG_TEST_ASYNC_DRIVER_PROBE is not set
-CONFIG_GENERIC_CPU_AUTOPROBE=y
-CONFIG_REGMAP=y
-CONFIG_REGMAP_I2C=m
-CONFIG_REGMAP_SPI=m
-CONFIG_REGMAP_MMIO=y
-CONFIG_DMA_SHARED_BUFFER=y
-# CONFIG_DMA_FENCE_TRACE is not set
-CONFIG_DMA_CMA=y
-
-#
-# Default contiguous memory area size:
-#
-CONFIG_CMA_SIZE_MBYTES=64
-CONFIG_CMA_SIZE_SEL_MBYTES=y
-# CONFIG_CMA_SIZE_SEL_PERCENTAGE is not set
-# CONFIG_CMA_SIZE_SEL_MIN is not set
-# CONFIG_CMA_SIZE_SEL_MAX is not set
-CONFIG_CMA_ALIGNMENT=8
-CONFIG_GENERIC_ARCH_TOPOLOGY=y
-
-#
-# Bus devices
-#
-# CONFIG_BRCMSTB_GISB_ARB is not set
-CONFIG_HISILICON_LPC=y
-CONFIG_QCOM_EBI2=y
-# CONFIG_SIMPLE_PM_BUS is not set
-CONFIG_VEXPRESS_CONFIG=y
-CONFIG_CONNECTOR=y
-CONFIG_PROC_EVENTS=y
-# CONFIG_GNSS is not set
-CONFIG_MTD=m
-# CONFIG_MTD_TESTS is not set
-# CONFIG_MTD_REDBOOT_PARTS is not set
-CONFIG_MTD_CMDLINE_PARTS=m
-# CONFIG_MTD_AFS_PARTS is not set
-CONFIG_MTD_OF_PARTS=m
-# CONFIG_MTD_AR7_PARTS is not set
-
-#
-# Partition parsers
-#
-
-#
-# User Modules And Translation Layers
-#
-CONFIG_MTD_BLKDEVS=m
-CONFIG_MTD_BLOCK=m
-# CONFIG_MTD_BLOCK_RO is not set
-# CONFIG_FTL is not set
-# CONFIG_NFTL is not set
-# CONFIG_INFTL is not set
-# CONFIG_RFD_FTL is not set
-# CONFIG_SSFDC is not set
-# CONFIG_SM_FTL is not set
-# CONFIG_MTD_OOPS is not set
-# CONFIG_MTD_SWAP is not set
-# CONFIG_MTD_PARTITIONED_MASTER is not set
-
-#
-# RAM/ROM/Flash chip drivers
-#
-CONFIG_MTD_CFI=m
-# CONFIG_MTD_JEDECPROBE is not set
-CONFIG_MTD_GEN_PROBE=m
-CONFIG_MTD_CFI_ADV_OPTIONS=y
-CONFIG_MTD_CFI_NOSWAP=y
-# CONFIG_MTD_CFI_BE_BYTE_SWAP is not set
-# CONFIG_MTD_CFI_LE_BYTE_SWAP is not set
-CONFIG_MTD_CFI_GEOMETRY=y
-CONFIG_MTD_MAP_BANK_WIDTH_1=y
-CONFIG_MTD_MAP_BANK_WIDTH_2=y
-CONFIG_MTD_MAP_BANK_WIDTH_4=y
-CONFIG_MTD_MAP_BANK_WIDTH_8=y
-# CONFIG_MTD_MAP_BANK_WIDTH_16 is not set
-# CONFIG_MTD_MAP_BANK_WIDTH_32 is not set
-CONFIG_MTD_CFI_I1=y
-CONFIG_MTD_CFI_I2=y
-# CONFIG_MTD_CFI_I4 is not set
-# CONFIG_MTD_CFI_I8 is not set
-# CONFIG_MTD_OTP is not set
-CONFIG_MTD_CFI_INTELEXT=m
-CONFIG_MTD_CFI_AMDSTD=m
-CONFIG_MTD_CFI_STAA=m
-CONFIG_MTD_CFI_UTIL=m
-# CONFIG_MTD_RAM is not set
-# CONFIG_MTD_ROM is not set
-# CONFIG_MTD_ABSENT is not set
-
-#
-# Mapping drivers for chip access
-#
-# CONFIG_MTD_COMPLEX_MAPPINGS is not set
-CONFIG_MTD_PHYSMAP=m
-# CONFIG_MTD_PHYSMAP_COMPAT is not set
-CONFIG_MTD_PHYSMAP_OF=m
-# CONFIG_MTD_PHYSMAP_OF_VERSATILE is not set
-# CONFIG_MTD_PHYSMAP_OF_GEMINI is not set
-# CONFIG_MTD_INTEL_VR_NOR is not set
-# CONFIG_MTD_PLATRAM is not set
-
-#
-# Self-contained MTD device drivers
-#
-# CONFIG_MTD_PMC551 is not set
-# CONFIG_MTD_DATAFLASH is not set
-# CONFIG_MTD_M25P80 is not set
-# CONFIG_MTD_MCHP23K256 is not set
-# CONFIG_MTD_SST25L is not set
-# CONFIG_MTD_SLRAM is not set
-# CONFIG_MTD_PHRAM is not set
-# CONFIG_MTD_MTDRAM is not set
-CONFIG_MTD_BLOCK2MTD=m
-
-#
-# Disk-On-Chip Device Drivers
-#
-# CONFIG_MTD_DOCG3 is not set
-# CONFIG_MTD_ONENAND is not set
-# CONFIG_MTD_NAND is not set
-# CONFIG_MTD_SPI_NAND is not set
-
-#
-# LPDDR & LPDDR2 PCM memory drivers
-#
-# CONFIG_MTD_LPDDR is not set
-CONFIG_MTD_SPI_NOR=m
-CONFIG_MTD_MT81xx_NOR=m
-CONFIG_MTD_SPI_NOR_USE_4K_SECTORS=y
-# CONFIG_SPI_CADENCE_QUADSPI is not set
-CONFIG_SPI_HISI_SFC=m
-CONFIG_MTD_UBI=m
-CONFIG_MTD_UBI_WL_THRESHOLD=4096
-CONFIG_MTD_UBI_BEB_LIMIT=20
-# CONFIG_MTD_UBI_FASTMAP is not set
-CONFIG_MTD_UBI_GLUEBI=m
-# CONFIG_MTD_UBI_BLOCK is not set
-CONFIG_DTC=y
-CONFIG_OF=y
-# CONFIG_OF_UNITTEST is not set
-CONFIG_OF_FLATTREE=y
-CONFIG_OF_EARLY_FLATTREE=y
-CONFIG_OF_KOBJ=y
-CONFIG_OF_DYNAMIC=y
-CONFIG_OF_ADDRESS=y
-CONFIG_OF_IRQ=y
-CONFIG_OF_NET=y
-CONFIG_OF_MDIO=y
-CONFIG_OF_RESERVED_MEM=y
-CONFIG_OF_RESOLVE=y
-CONFIG_OF_OVERLAY=y
-CONFIG_OF_NUMA=y
-# CONFIG_PARPORT is not set
-CONFIG_PNP=y
-CONFIG_PNP_DEBUG_MESSAGES=y
-
-#
-# Protocols
-#
-CONFIG_PNPACPI=y
-CONFIG_BLK_DEV=y
-CONFIG_BLK_DEV_NULL_BLK=m
-CONFIG_CDROM=m
-# CONFIG_BLK_DEV_PCIESSD_MTIP32XX is not set
-CONFIG_ZRAM=m
-# CONFIG_ZRAM_WRITEBACK is not set
-# CONFIG_ZRAM_MEMORY_TRACKING is not set
-# CONFIG_BLK_DEV_DAC960 is not set
-# CONFIG_BLK_DEV_UMEM is not set
-CONFIG_BLK_DEV_LOOP=m
-CONFIG_BLK_DEV_LOOP_MIN_COUNT=0
-# CONFIG_BLK_DEV_CRYPTOLOOP is not set
-# CONFIG_BLK_DEV_DRBD is not set
-CONFIG_BLK_DEV_NBD=m
-# CONFIG_BLK_DEV_SKD is not set
-# CONFIG_BLK_DEV_SX8 is not set
-CONFIG_BLK_DEV_RAM=m
-CONFIG_BLK_DEV_RAM_COUNT=16
-CONFIG_BLK_DEV_RAM_SIZE=16384
-CONFIG_CDROM_PKTCDVD=m
-CONFIG_CDROM_PKTCDVD_BUFFERS=8
-# CONFIG_CDROM_PKTCDVD_WCACHE is not set
-# CONFIG_ATA_OVER_ETH is not set
-CONFIG_VIRTIO_BLK=m
-CONFIG_VIRTIO_BLK_SCSI=y
-CONFIG_BLK_DEV_RBD=m
-# CONFIG_BLK_DEV_RSXX is not set
-
-#
-# NVME Support
-#
-CONFIG_NVME_CORE=m
-CONFIG_BLK_DEV_NVME=m
-# CONFIG_NVME_MULTIPATH is not set
-CONFIG_NVME_FABRICS=m
-CONFIG_NVME_RDMA=m
-CONFIG_NVME_FC=m
-CONFIG_NVME_TARGET=m
-CONFIG_NVME_TARGET_LOOP=m
-CONFIG_NVME_TARGET_RDMA=m
-CONFIG_NVME_TARGET_FC=m
-CONFIG_NVME_TARGET_FCLOOP=m
-
-#
-# Misc devices
-#
-CONFIG_SENSORS_LIS3LV02D=m
-# CONFIG_AD525X_DPOT is not set
-# CONFIG_DUMMY_IRQ is not set
-# CONFIG_PHANTOM is not set
-# CONFIG_SGI_IOC4 is not set
-CONFIG_TIFM_CORE=m
-CONFIG_TIFM_7XX1=m
-# CONFIG_ICS932S401 is not set
-CONFIG_ENCLOSURE_SERVICES=m
-# CONFIG_HP_ILO is not set
-CONFIG_APDS9802ALS=m
-CONFIG_ISL29003=m
-CONFIG_ISL29020=m
-CONFIG_SENSORS_TSL2550=m
-CONFIG_SENSORS_BH1770=m
-CONFIG_SENSORS_APDS990X=m
-# CONFIG_HMC6352 is not set
-# CONFIG_DS1682 is not set
-# CONFIG_USB_SWITCH_FSA9480 is not set
-# CONFIG_LATTICE_ECP3_CONFIG is not set
-# CONFIG_SRAM is not set
-CONFIG_VEXPRESS_SYSCFG=y
-# CONFIG_PCI_ENDPOINT_TEST is not set
-# CONFIG_C2PORT is not set
-
-#
-# EEPROM support
-#
-# CONFIG_EEPROM_AT24 is not set
-# CONFIG_EEPROM_AT25 is not set
-CONFIG_EEPROM_LEGACY=m
-CONFIG_EEPROM_MAX6875=m
-CONFIG_EEPROM_93CX6=m
-# CONFIG_EEPROM_93XX46 is not set
-# CONFIG_EEPROM_IDT_89HPESX is not set
-CONFIG_CB710_CORE=m
-# CONFIG_CB710_DEBUG is not set
-CONFIG_CB710_DEBUG_ASSUMPTIONS=y
-
-#
-# Texas Instruments shared transport line discipline
-#
-# CONFIG_TI_ST is not set
-CONFIG_SENSORS_LIS3_I2C=m
-
-#
-# Altera FPGA firmware download module (requires I2C)
-#
-CONFIG_ALTERA_STAPL=m
-
-#
-# Intel MIC & related support
-#
-
-#
-# Intel MIC Bus Driver
-#
-
-#
-# SCIF Bus Driver
-#
-
-#
-# VOP Bus Driver
-#
-
-#
-# Intel MIC Host Driver
-#
-
-#
-# Intel MIC Card Driver
-#
-
-#
-# SCIF Driver
-#
-
-#
-# Intel MIC Coprocessor State Management (COSM) Drivers
-#
-
-#
-# VOP Driver
-#
-# CONFIG_GENWQE is not set
-# CONFIG_ECHO is not set
-# CONFIG_MISC_RTSX_PCI is not set
-# CONFIG_MISC_RTSX_USB is not set
-
-#
-# SCSI device support
-#
-CONFIG_SCSI_MOD=y
-CONFIG_RAID_ATTRS=m
-CONFIG_SCSI=y
-CONFIG_SCSI_DMA=y
-CONFIG_SCSI_NETLINK=y
-CONFIG_SCSI_MQ_DEFAULT=y
-CONFIG_SCSI_PROC_FS=y
-
-#
-# SCSI support type (disk, tape, CD-ROM)
-#
-CONFIG_BLK_DEV_SD=m
-CONFIG_CHR_DEV_ST=m
-# CONFIG_CHR_DEV_OSST is not set
-CONFIG_BLK_DEV_SR=m
-CONFIG_BLK_DEV_SR_VENDOR=y
-CONFIG_CHR_DEV_SG=m
-CONFIG_CHR_DEV_SCH=m
-CONFIG_SCSI_ENCLOSURE=m
-CONFIG_SCSI_CONSTANTS=y
-CONFIG_SCSI_LOGGING=y
-CONFIG_SCSI_SCAN_ASYNC=y
-
-#
-# SCSI Transports
-#
-CONFIG_SCSI_SPI_ATTRS=m
-CONFIG_SCSI_FC_ATTRS=m
-CONFIG_SCSI_ISCSI_ATTRS=m
-CONFIG_SCSI_SAS_ATTRS=m
-CONFIG_SCSI_SAS_LIBSAS=m
-CONFIG_SCSI_SAS_ATA=y
-CONFIG_SCSI_SAS_HOST_SMP=y
-CONFIG_SCSI_SRP_ATTRS=m
-CONFIG_SCSI_LOWLEVEL=y
-CONFIG_ISCSI_TCP=m
-CONFIG_ISCSI_BOOT_SYSFS=m
-# CONFIG_SCSI_CXGB3_ISCSI is not set
-CONFIG_SCSI_CXGB4_ISCSI=m
-CONFIG_SCSI_BNX2_ISCSI=m
-CONFIG_SCSI_BNX2X_FCOE=m
-CONFIG_BE2ISCSI=m
-# CONFIG_BLK_DEV_3W_XXXX_RAID is not set
-CONFIG_SCSI_HPSA=m
-# CONFIG_SCSI_3W_9XXX is not set
-# CONFIG_SCSI_3W_SAS is not set
-# CONFIG_SCSI_ACARD is not set
-CONFIG_SCSI_AACRAID=m
-# CONFIG_SCSI_AIC7XXX is not set
-# CONFIG_SCSI_AIC79XX is not set
-# CONFIG_SCSI_AIC94XX is not set
-CONFIG_SCSI_HISI_SAS=m
-CONFIG_SCSI_HISI_SAS_PCI=m
-# CONFIG_SCSI_MVSAS is not set
-# CONFIG_SCSI_MVUMI is not set
-# CONFIG_SCSI_ADVANSYS is not set
-# CONFIG_SCSI_ARCMSR is not set
-# CONFIG_SCSI_ESAS2R is not set
-# CONFIG_MEGARAID_NEWGEN is not set
-# CONFIG_MEGARAID_LEGACY is not set
-CONFIG_MEGARAID_SAS=m
-CONFIG_SCSI_MPT3SAS=m
-CONFIG_SCSI_MPT2SAS_MAX_SGE=128
-CONFIG_SCSI_MPT3SAS_MAX_SGE=128
-CONFIG_SCSI_MPT2SAS=m
-CONFIG_SCSI_SMARTPQI=m
-# CONFIG_SCSI_UFSHCD is not set
-# CONFIG_SCSI_HPTIOP is not set
-CONFIG_LIBFC=m
-CONFIG_LIBFCOE=m
-# CONFIG_FCOE is not set
-# CONFIG_SCSI_SNIC is not set
-# CONFIG_SCSI_DMX3191D is not set
-# CONFIG_SCSI_IPS is not set
-# CONFIG_SCSI_INITIO is not set
-# CONFIG_SCSI_INIA100 is not set
-# CONFIG_SCSI_STEX is not set
-# CONFIG_SCSI_SYM53C8XX_2 is not set
-CONFIG_SCSI_IPR=m
-CONFIG_SCSI_IPR_TRACE=y
-CONFIG_SCSI_IPR_DUMP=y
-# CONFIG_SCSI_QLOGIC_1280 is not set
-CONFIG_SCSI_QLA_FC=m
-# CONFIG_TCM_QLA2XXX is not set
-CONFIG_SCSI_QLA_ISCSI=m
-CONFIG_QEDI=m
-CONFIG_QEDF=m
-# CONFIG_SCSI_HUAWEI_FC is not set
-CONFIG_SCSI_LPFC=m
-# CONFIG_SCSI_LPFC_DEBUG_FS is not set
-# CONFIG_SCSI_DC395x is not set
-# CONFIG_SCSI_AM53C974 is not set
-# CONFIG_SCSI_WD719X is not set
-CONFIG_SCSI_DEBUG=m
-# CONFIG_SCSI_PMCRAID is not set
-# CONFIG_SCSI_PM8001 is not set
-# CONFIG_SCSI_BFA_FC is not set
-CONFIG_SCSI_VIRTIO=m
-CONFIG_SCSI_CHELSIO_FCOE=m
-# CONFIG_SCSI_LOWLEVEL_PCMCIA is not set
-CONFIG_SCSI_DH=y
-CONFIG_SCSI_DH_RDAC=y
-CONFIG_SCSI_DH_HP_SW=y
-CONFIG_SCSI_DH_EMC=y
-CONFIG_SCSI_DH_ALUA=y
-# CONFIG_SCSI_OSD_INITIATOR is not set
-CONFIG_HAVE_PATA_PLATFORM=y
-CONFIG_ATA=m
-CONFIG_ATA_VERBOSE_ERROR=y
-CONFIG_ATA_ACPI=y
-# CONFIG_SATA_ZPODD is not set
-CONFIG_SATA_PMP=y
-
-#
-# Controllers with non-SFF native interface
-#
-CONFIG_SATA_AHCI=m
-CONFIG_SATA_MOBILE_LPM_POLICY=0
-CONFIG_SATA_AHCI_PLATFORM=m
-# CONFIG_AHCI_CEVA is not set
-CONFIG_AHCI_XGENE=m
-# CONFIG_AHCI_QORIQ is not set
-CONFIG_SATA_AHCI_SEATTLE=m
-# CONFIG_SATA_INIC162X is not set
-# CONFIG_SATA_ACARD_AHCI is not set
-# CONFIG_SATA_SIL24 is not set
-CONFIG_ATA_SFF=y
-
-#
-# SFF controllers with custom DMA interface
-#
-# CONFIG_PDC_ADMA is not set
-# CONFIG_SATA_QSTOR is not set
-# CONFIG_SATA_SX4 is not set
-CONFIG_ATA_BMDMA=y
-
-#
-# SATA SFF controllers with BMDMA
-#
-CONFIG_ATA_PIIX=m
-# CONFIG_SATA_DWC is not set
-# CONFIG_SATA_MV is not set
-# CONFIG_SATA_NV is not set
-# CONFIG_SATA_PROMISE is not set
-# CONFIG_SATA_SIL is not set
-# CONFIG_SATA_SIS is not set
-# CONFIG_SATA_SVW is not set
-# CONFIG_SATA_ULI is not set
-# CONFIG_SATA_VIA is not set
-# CONFIG_SATA_VITESSE is not set
-
-#
-# PATA SFF controllers with BMDMA
-#
-# CONFIG_PATA_ALI is not set
-# CONFIG_PATA_AMD is not set
-# CONFIG_PATA_ARTOP is not set
-# CONFIG_PATA_ATIIXP is not set
-# CONFIG_PATA_ATP867X is not set
-# CONFIG_PATA_CMD64X is not set
-# CONFIG_PATA_CYPRESS is not set
-# CONFIG_PATA_EFAR is not set
-# CONFIG_PATA_HPT366 is not set
-# CONFIG_PATA_HPT37X is not set
-# CONFIG_PATA_HPT3X2N is not set
-# CONFIG_PATA_HPT3X3 is not set
-# CONFIG_PATA_IT8213 is not set
-# CONFIG_PATA_IT821X is not set
-# CONFIG_PATA_JMICRON is not set
-# CONFIG_PATA_MARVELL is not set
-# CONFIG_PATA_NETCELL is not set
-# CONFIG_PATA_NINJA32 is not set
-# CONFIG_PATA_NS87415 is not set
-# CONFIG_PATA_OLDPIIX is not set
-# CONFIG_PATA_OPTIDMA is not set
-# CONFIG_PATA_PDC2027X is not set
-# CONFIG_PATA_PDC_OLD is not set
-# CONFIG_PATA_RADISYS is not set
-# CONFIG_PATA_RDC is not set
-# CONFIG_PATA_SCH is not set
-# CONFIG_PATA_SERVERWORKS is not set
-# CONFIG_PATA_SIL680 is not set
-# CONFIG_PATA_SIS is not set
-# CONFIG_PATA_TOSHIBA is not set
-# CONFIG_PATA_TRIFLEX is not set
-# CONFIG_PATA_VIA is not set
-# CONFIG_PATA_WINBOND is not set
-
-#
-# PIO-only SFF controllers
-#
-# CONFIG_PATA_CMD640_PCI is not set
-# CONFIG_PATA_MPIIX is not set
-# CONFIG_PATA_NS87410 is not set
-# CONFIG_PATA_OPTI is not set
-# CONFIG_PATA_PLATFORM is not set
-# CONFIG_PATA_RZ1000 is not set
-
-#
-# Generic fallback / legacy drivers
-#
-# CONFIG_PATA_ACPI is not set
-CONFIG_ATA_GENERIC=m
-# CONFIG_PATA_LEGACY is not set
-CONFIG_MD=y
-CONFIG_BLK_DEV_MD=y
-CONFIG_MD_AUTODETECT=y
-CONFIG_MD_LINEAR=m
-CONFIG_MD_RAID0=m
-CONFIG_MD_RAID1=m
-CONFIG_MD_RAID10=m
-CONFIG_MD_RAID456=m
-# CONFIG_MD_MULTIPATH is not set
-CONFIG_MD_FAULTY=m
-# CONFIG_BCACHE is not set
-CONFIG_BLK_DEV_DM_BUILTIN=y
-CONFIG_BLK_DEV_DM=m
-# CONFIG_DM_MQ_DEFAULT is not set
-CONFIG_DM_DEBUG=y
-CONFIG_DM_BUFIO=m
-# CONFIG_DM_DEBUG_BLOCK_MANAGER_LOCKING is not set
-CONFIG_DM_BIO_PRISON=m
-CONFIG_DM_PERSISTENT_DATA=m
-# CONFIG_DM_UNSTRIPED is not set
-CONFIG_DM_CRYPT=m
-CONFIG_DM_SNAPSHOT=m
-CONFIG_DM_THIN_PROVISIONING=m
-CONFIG_DM_CACHE=m
-CONFIG_DM_CACHE_SMQ=m
-# CONFIG_DM_WRITECACHE is not set
-CONFIG_DM_ERA=m
-CONFIG_DM_MIRROR=m
-CONFIG_DM_LOG_USERSPACE=m
-CONFIG_DM_RAID=m
-CONFIG_DM_ZERO=m
-CONFIG_DM_MULTIPATH=m
-CONFIG_DM_MULTIPATH_QL=m
-CONFIG_DM_MULTIPATH_ST=m
-CONFIG_DM_DELAY=m
-CONFIG_DM_UEVENT=y
-CONFIG_DM_FLAKEY=m
-CONFIG_DM_VERITY=m
-# CONFIG_DM_VERITY_FEC is not set
-CONFIG_DM_SWITCH=m
-CONFIG_DM_LOG_WRITES=m
-CONFIG_DM_INTEGRITY=m
-# CONFIG_DM_ZONED is not set
-CONFIG_TARGET_CORE=m
-CONFIG_TCM_IBLOCK=m
-CONFIG_TCM_FILEIO=m
-CONFIG_TCM_PSCSI=m
-CONFIG_TCM_USER2=m
-CONFIG_LOOPBACK_TARGET=m
-CONFIG_TCM_FC=m
-CONFIG_ISCSI_TARGET=m
-CONFIG_ISCSI_TARGET_CXGB4=m
-# CONFIG_FUSION is not set
-
-#
-# IEEE 1394 (FireWire) support
-#
-# CONFIG_FIREWIRE is not set
-# CONFIG_FIREWIRE_NOSY is not set
-CONFIG_NETDEVICES=y
-CONFIG_MII=m
-CONFIG_NET_CORE=y
-CONFIG_BONDING=m
-CONFIG_DUMMY=m
-# CONFIG_EQUALIZER is not set
-CONFIG_NET_FC=y
-CONFIG_IFB=m
-CONFIG_NET_TEAM=m
-CONFIG_NET_TEAM_MODE_BROADCAST=m
-CONFIG_NET_TEAM_MODE_ROUNDROBIN=m
-CONFIG_NET_TEAM_MODE_RANDOM=m
-CONFIG_NET_TEAM_MODE_ACTIVEBACKUP=m
-CONFIG_NET_TEAM_MODE_LOADBALANCE=m
-CONFIG_MACVLAN=m
-CONFIG_MACVTAP=m
-CONFIG_IPVLAN=m
-CONFIG_IPVTAP=m
-CONFIG_VXLAN=m
-CONFIG_GENEVE=m
-# CONFIG_GTP is not set
-CONFIG_MACSEC=m
-CONFIG_NETCONSOLE=m
-CONFIG_NETCONSOLE_DYNAMIC=y
-CONFIG_NETPOLL=y
-CONFIG_NET_POLL_CONTROLLER=y
-CONFIG_TUN=m
-CONFIG_TAP=m
-# CONFIG_TUN_VNET_CROSS_LE is not set
-CONFIG_VETH=m
-CONFIG_VIRTIO_NET=m
-CONFIG_NLMON=m
-CONFIG_NET_VRF=m
-CONFIG_VSOCKMON=m
-# CONFIG_ARCNET is not set
-# CONFIG_ATM_DRIVERS is not set
-
-#
-# CAIF transport drivers
-#
-
-#
-# Distributed Switch Architecture drivers
-#
-CONFIG_ETHERNET=y
-CONFIG_MDIO=m
-# CONFIG_NET_VENDOR_3COM is not set
-# CONFIG_NET_VENDOR_ADAPTEC is not set
-# CONFIG_NET_VENDOR_AGERE is not set
-CONFIG_NET_VENDOR_ALACRITECH=y
-# CONFIG_SLICOSS is not set
-# CONFIG_NET_VENDOR_ALTEON is not set
-# CONFIG_ALTERA_TSE is not set
-CONFIG_NET_VENDOR_AMAZON=y
-CONFIG_NET_VENDOR_AMD=y
-# CONFIG_AMD8111_ETH is not set
-# CONFIG_PCNET32 is not set
-CONFIG_AMD_XGBE=m
-# CONFIG_AMD_XGBE_DCB is not set
-CONFIG_NET_XGENE=y
-CONFIG_NET_XGENE_V2=m
-CONFIG_NET_VENDOR_AQUANTIA=y
-CONFIG_NET_VENDOR_ARC=y
-CONFIG_NET_VENDOR_ATHEROS=y
-# CONFIG_ATL2 is not set
-CONFIG_ATL1=m
-CONFIG_ATL1E=m
-CONFIG_ATL1C=m
-CONFIG_ALX=m
-# CONFIG_NET_VENDOR_AURORA is not set
-CONFIG_NET_VENDOR_BROADCOM=y
-# CONFIG_B44 is not set
-# CONFIG_BCMGENET is not set
-CONFIG_BNX2=m
-CONFIG_CNIC=m
-CONFIG_TIGON3=m
-CONFIG_TIGON3_HWMON=y
-CONFIG_BNX2X=m
-CONFIG_BNX2X_SRIOV=y
-# CONFIG_SYSTEMPORT is not set
-CONFIG_BNXT=m
-CONFIG_BNXT_SRIOV=y
-CONFIG_BNXT_FLOWER_OFFLOAD=y
-CONFIG_BNXT_DCB=y
-# CONFIG_BNXT_HWMON is not set
-# CONFIG_NET_VENDOR_BROCADE is not set
-# CONFIG_NET_VENDOR_CADENCE is not set
-CONFIG_NET_VENDOR_CAVIUM=y
-CONFIG_THUNDER_NIC_PF=m
-CONFIG_THUNDER_NIC_VF=m
-CONFIG_THUNDER_NIC_BGX=m
-CONFIG_THUNDER_NIC_RGX=m
-CONFIG_CAVIUM_PTP=y
-CONFIG_LIQUIDIO=m
-CONFIG_LIQUIDIO_VF=m
-CONFIG_NET_VENDOR_CHELSIO=y
-# CONFIG_CHELSIO_T1 is not set
-# CONFIG_CHELSIO_T3 is not set
-CONFIG_CHELSIO_T4=m
-# CONFIG_CHELSIO_T4_DCB is not set
-CONFIG_CHELSIO_T4VF=m
-CONFIG_CHELSIO_LIB=m
-# CONFIG_NET_VENDOR_CISCO is not set
-# CONFIG_NET_VENDOR_CORTINA is not set
-CONFIG_DNET=m
-# CONFIG_NET_VENDOR_DEC is not set
-# CONFIG_NET_VENDOR_DLINK is not set
-# CONFIG_NET_VENDOR_EMULEX is not set
-# CONFIG_NET_VENDOR_EZCHIP is not set
-CONFIG_NET_VENDOR_HISILICON=y
-# CONFIG_HIX5HD2_GMAC is not set
-# CONFIG_HISI_FEMAC is not set
-# CONFIG_HIP04_ETH is not set
-CONFIG_HNS_MDIO=m
-CONFIG_HNS=m
-CONFIG_HNS_DSAF=m
-CONFIG_HNS_ENET=m
-CONFIG_HNS3=m
-CONFIG_HNS3_HCLGE=m
-CONFIG_HNS3_DCB=y
-CONFIG_HNS3_HCLGEVF=m
-CONFIG_HNS3_ENET=m
-# CONFIG_NET_VENDOR_HP is not set
-CONFIG_NET_VENDOR_HUAWEI=y
-CONFIG_HINIC=m
-# CONFIG_BMA is not set
-# CONFIG_NET_VENDOR_I825XX is not set
-CONFIG_NET_VENDOR_INTEL=y
-# CONFIG_E100 is not set
-CONFIG_E1000=m
-CONFIG_E1000E=m
-CONFIG_IGB=m
-CONFIG_IGB_HWMON=y
-CONFIG_IGBVF=m
-# CONFIG_IXGB is not set
-CONFIG_IXGBE=m
-CONFIG_IXGBE_HWMON=y
-CONFIG_IXGBE_DCB=y
-CONFIG_IXGBEVF=m
-CONFIG_I40E=m
-# CONFIG_I40E_DCB is not set
-CONFIG_I40EVF=m
-CONFIG_ICE=m
-CONFIG_FM10K=m
-# CONFIG_JME is not set
-# CONFIG_NET_VENDOR_MARVELL is not set
-CONFIG_NET_VENDOR_MELLANOX=y
-CONFIG_MLX4_EN=m
-CONFIG_MLX4_EN_DCB=y
-CONFIG_MLX4_CORE=m
-CONFIG_MLX4_DEBUG=y
-# CONFIG_MLX4_CORE_GEN2 is not set
-CONFIG_MLX5_CORE=m
-# CONFIG_MLX5_FPGA is not set
-CONFIG_MLX5_CORE_EN=y
-CONFIG_MLX5_EN_ARFS=y
-CONFIG_MLX5_EN_RXNFC=y
-CONFIG_MLX5_MPFS=y
-# CONFIG_MLX5_ESWITCH is not set
-CONFIG_MLX5_CORE_EN_DCB=y
-CONFIG_MLX5_CORE_IPOIB=y
-CONFIG_MLXSW_CORE=m
-CONFIG_MLXSW_CORE_HWMON=y
-CONFIG_MLXSW_CORE_THERMAL=y
-CONFIG_MLXSW_PCI=m
-CONFIG_MLXSW_I2C=m
-# CONFIG_MLXSW_SWITCHIB is not set
-# CONFIG_MLXSW_SWITCHX2 is not set
-# CONFIG_MLXSW_SPECTRUM is not set
-CONFIG_MLXSW_MINIMAL=m
-CONFIG_MLXFW=m
-# CONFIG_NET_VENDOR_MICREL is not set
-# CONFIG_NET_VENDOR_MICROCHIP is not set
-CONFIG_NET_VENDOR_MICROSEMI=y
-# CONFIG_MSCC_OCELOT_SWITCH is not set
-CONFIG_NET_VENDOR_MYRI=y
-# CONFIG_MYRI10GE is not set
-# CONFIG_FEALNX is not set
-# CONFIG_NET_VENDOR_NATSEMI is not set
-# CONFIG_NET_VENDOR_NETERION is not set
-CONFIG_NET_VENDOR_NETRONOME=y
-CONFIG_NFP=m
-CONFIG_NFP_APP_FLOWER=y
-CONFIG_NFP_APP_ABM_NIC=y
-# CONFIG_NFP_DEBUG is not set
-# CONFIG_NET_VENDOR_NI is not set
-# CONFIG_NET_VENDOR_NVIDIA is not set
-CONFIG_NET_VENDOR_OKI=y
-CONFIG_ETHOC=m
-# CONFIG_NET_VENDOR_PACKET_ENGINES is not set
-CONFIG_NET_VENDOR_QLOGIC=y
-CONFIG_QLA3XXX=m
-# CONFIG_QLCNIC is not set
-# CONFIG_QLGE is not set
-CONFIG_NETXEN_NIC=m
-CONFIG_QED=m
-CONFIG_QED_LL2=y
-CONFIG_QED_SRIOV=y
-CONFIG_QEDE=m
-CONFIG_QED_RDMA=y
-CONFIG_QED_ISCSI=y
-CONFIG_QED_FCOE=y
-CONFIG_QED_OOO=y
-CONFIG_NET_VENDOR_QUALCOMM=y
-# CONFIG_QCA7000_SPI is not set
-CONFIG_QCOM_EMAC=m
-# CONFIG_RMNET is not set
-# CONFIG_NET_VENDOR_RDC is not set
-CONFIG_NET_VENDOR_REALTEK=y
-CONFIG_8139CP=m
-CONFIG_8139TOO=m
-# CONFIG_8139TOO_PIO is not set
-# CONFIG_8139TOO_TUNE_TWISTER is not set
-CONFIG_8139TOO_8129=y
-# CONFIG_8139_OLD_RX_RESET is not set
-CONFIG_R8169=m
-# CONFIG_NET_VENDOR_RENESAS is not set
-CONFIG_NET_VENDOR_ROCKER=y
-CONFIG_ROCKER=m
-# CONFIG_NET_VENDOR_SAMSUNG is not set
-# CONFIG_NET_VENDOR_SEEQ is not set
-CONFIG_NET_VENDOR_SOLARFLARE=y
-CONFIG_SFC=m
-CONFIG_SFC_MTD=y
-CONFIG_SFC_MCDI_MON=y
-CONFIG_SFC_SRIOV=y
-CONFIG_SFC_MCDI_LOGGING=y
-# CONFIG_SFC_FALCON is not set
-# CONFIG_NET_VENDOR_SILAN is not set
-# CONFIG_NET_VENDOR_SIS is not set
-CONFIG_NET_VENDOR_SMSC=y
-CONFIG_SMC91X=m
-CONFIG_EPIC100=m
-CONFIG_SMSC911X=m
-CONFIG_SMSC9420=m
-# CONFIG_NET_VENDOR_SOCIONEXT is not set
-# CONFIG_NET_VENDOR_STMICRO is not set
-# CONFIG_NET_VENDOR_SUN is not set
-# CONFIG_NET_VENDOR_SYNOPSYS is not set
-# CONFIG_NET_VENDOR_TEHUTI is not set
-# CONFIG_NET_VENDOR_TI is not set
-# CONFIG_NET_VENDOR_VIA is not set
-# CONFIG_NET_VENDOR_WIZNET is not set
-# CONFIG_FDDI is not set
-# CONFIG_HIPPI is not set
-# CONFIG_NET_SB1000 is not set
-CONFIG_MDIO_DEVICE=y
-CONFIG_MDIO_BUS=y
-CONFIG_MDIO_BCM_UNIMAC=m
-CONFIG_MDIO_BITBANG=m
-# CONFIG_MDIO_BUS_MUX_GPIO is not set
-# CONFIG_MDIO_BUS_MUX_MMIOREG is not set
-CONFIG_MDIO_CAVIUM=m
-CONFIG_MDIO_GPIO=m
-# CONFIG_MDIO_HISI_FEMAC is not set
-# CONFIG_MDIO_MSCC_MIIM is not set
-CONFIG_MDIO_OCTEON=m
-CONFIG_MDIO_THUNDER=m
-CONFIG_MDIO_XGENE=y
-CONFIG_PHYLIB=y
-CONFIG_SWPHY=y
-# CONFIG_LED_TRIGGER_PHY is not set
-
-#
-# MII PHY device drivers
-#
-CONFIG_AMD_PHY=m
-CONFIG_AQUANTIA_PHY=m
-# CONFIG_ASIX_PHY is not set
-CONFIG_AT803X_PHY=m
-# CONFIG_BCM7XXX_PHY is not set
-CONFIG_BCM87XX_PHY=m
-CONFIG_BCM_NET_PHYLIB=m
-CONFIG_BROADCOM_PHY=m
-CONFIG_CICADA_PHY=m
-# CONFIG_CORTINA_PHY is not set
-CONFIG_DAVICOM_PHY=m
-# CONFIG_DP83822_PHY is not set
-# CONFIG_DP83TC811_PHY is not set
-CONFIG_DP83848_PHY=m
-CONFIG_DP83867_PHY=m
-CONFIG_FIXED_PHY=y
-CONFIG_ICPLUS_PHY=m
-# CONFIG_INTEL_XWAY_PHY is not set
-CONFIG_LSI_ET1011C_PHY=m
-CONFIG_LXT_PHY=m
-CONFIG_MARVELL_PHY=m
-# CONFIG_MARVELL_10G_PHY is not set
-CONFIG_MICREL_PHY=m
-CONFIG_MICROCHIP_PHY=m
-# CONFIG_MICROCHIP_T1_PHY is not set
-# CONFIG_MICROSEMI_PHY is not set
-CONFIG_NATIONAL_PHY=m
-CONFIG_QSEMI_PHY=m
-CONFIG_REALTEK_PHY=m
-# CONFIG_RENESAS_PHY is not set
-# CONFIG_ROCKCHIP_PHY is not set
-CONFIG_SMSC_PHY=m
-CONFIG_STE10XP=m
-CONFIG_TERANETICS_PHY=m
-CONFIG_VITESSE_PHY=m
-# CONFIG_XILINX_GMII2RGMII is not set
-# CONFIG_MICREL_KS8995MA is not set
-CONFIG_PPP=m
-CONFIG_PPP_BSDCOMP=m
-CONFIG_PPP_DEFLATE=m
-CONFIG_PPP_FILTER=y
-CONFIG_PPP_MPPE=m
-CONFIG_PPP_MULTILINK=y
-CONFIG_PPPOATM=m
-CONFIG_PPPOE=m
-CONFIG_PPTP=m
-CONFIG_PPPOL2TP=m
-CONFIG_PPP_ASYNC=m
-CONFIG_PPP_SYNC_TTY=m
-CONFIG_SLIP=m
-CONFIG_SLHC=m
-CONFIG_SLIP_COMPRESSED=y
-CONFIG_SLIP_SMART=y
-# CONFIG_SLIP_MODE_SLIP6 is not set
-CONFIG_USB_NET_DRIVERS=y
-CONFIG_USB_CATC=m
-CONFIG_USB_KAWETH=m
-CONFIG_USB_PEGASUS=m
-CONFIG_USB_RTL8150=m
-CONFIG_USB_RTL8152=m
-CONFIG_USB_LAN78XX=m
-CONFIG_USB_USBNET=m
-CONFIG_USB_NET_AX8817X=m
-CONFIG_USB_NET_AX88179_178A=m
-CONFIG_USB_NET_CDCETHER=m
-CONFIG_USB_NET_CDC_EEM=m
-CONFIG_USB_NET_CDC_NCM=m
-CONFIG_USB_NET_HUAWEI_CDC_NCM=m
-CONFIG_USB_NET_CDC_MBIM=m
-CONFIG_USB_NET_DM9601=m
-CONFIG_USB_NET_SR9700=m
-# CONFIG_USB_NET_SR9800 is not set
-CONFIG_USB_NET_SMSC75XX=m
-CONFIG_USB_NET_SMSC95XX=m
-CONFIG_USB_NET_GL620A=m
-CONFIG_USB_NET_NET1080=m
-CONFIG_USB_NET_PLUSB=m
-CONFIG_USB_NET_MCS7830=m
-CONFIG_USB_NET_RNDIS_HOST=m
-CONFIG_USB_NET_CDC_SUBSET_ENABLE=m
-CONFIG_USB_NET_CDC_SUBSET=m
-CONFIG_USB_ALI_M5632=y
-CONFIG_USB_AN2720=y
-CONFIG_USB_BELKIN=y
-CONFIG_USB_ARMLINUX=y
-CONFIG_USB_EPSON2888=y
-CONFIG_USB_KC2190=y
-CONFIG_USB_NET_ZAURUS=m
-CONFIG_USB_NET_CX82310_ETH=m
-CONFIG_USB_NET_KALMIA=m
-CONFIG_USB_NET_QMI_WWAN=m
-CONFIG_USB_HSO=m
-CONFIG_USB_NET_INT51X1=m
-CONFIG_USB_IPHETH=m
-CONFIG_USB_SIERRA_NET=m
-CONFIG_USB_VL600=m
-CONFIG_USB_NET_CH9200=m
-CONFIG_WLAN=y
-# CONFIG_WIRELESS_WDS is not set
-# CONFIG_WLAN_VENDOR_ADMTEK is not set
-CONFIG_ATH_COMMON=m
-CONFIG_WLAN_VENDOR_ATH=y
-# CONFIG_ATH_DEBUG is not set
-# CONFIG_ATH5K is not set
-# CONFIG_ATH5K_PCI is not set
-# CONFIG_ATH9K is not set
-# CONFIG_ATH9K_HTC is not set
-# CONFIG_CARL9170 is not set
-# CONFIG_ATH6KL is not set
-# CONFIG_AR5523 is not set
-# CONFIG_WIL6210 is not set
-CONFIG_ATH10K=m
-CONFIG_ATH10K_CE=y
-CONFIG_ATH10K_PCI=m
-# CONFIG_ATH10K_AHB is not set
-# CONFIG_ATH10K_SDIO is not set
-# CONFIG_ATH10K_USB is not set
-# CONFIG_ATH10K_SNOC is not set
-# CONFIG_ATH10K_DEBUG is not set
-# CONFIG_ATH10K_DEBUGFS is not set
-# CONFIG_ATH10K_TRACING is not set
-# CONFIG_WCN36XX is not set
-# CONFIG_WLAN_VENDOR_ATMEL is not set
-# CONFIG_WLAN_VENDOR_BROADCOM is not set
-# CONFIG_WLAN_VENDOR_CISCO is not set
-# CONFIG_WLAN_VENDOR_INTEL is not set
-# CONFIG_WLAN_VENDOR_INTERSIL is not set
-# CONFIG_WLAN_VENDOR_MARVELL is not set
-# CONFIG_WLAN_VENDOR_MEDIATEK is not set
-CONFIG_WLAN_VENDOR_RALINK=y
-CONFIG_RT2X00=m
-# CONFIG_RT2400PCI is not set
-# CONFIG_RT2500PCI is not set
-# CONFIG_RT61PCI is not set
-# CONFIG_RT2800PCI is not set
-# CONFIG_RT2500USB is not set
-# CONFIG_RT73USB is not set
-CONFIG_RT2800USB=m
-CONFIG_RT2800USB_RT33XX=y
-CONFIG_RT2800USB_RT35XX=y
-# CONFIG_RT2800USB_RT3573 is not set
-CONFIG_RT2800USB_RT53XX=y
-# CONFIG_RT2800USB_RT55XX is not set
-# CONFIG_RT2800USB_UNKNOWN is not set
-CONFIG_RT2800_LIB=m
-CONFIG_RT2X00_LIB_USB=m
-CONFIG_RT2X00_LIB=m
-CONFIG_RT2X00_LIB_FIRMWARE=y
-CONFIG_RT2X00_LIB_CRYPTO=y
-CONFIG_RT2X00_LIB_LEDS=y
-# CONFIG_RT2X00_LIB_DEBUGFS is not set
-# CONFIG_RT2X00_DEBUG is not set
-# CONFIG_WLAN_VENDOR_REALTEK is not set
-# CONFIG_WLAN_VENDOR_RSI is not set
-# CONFIG_WLAN_VENDOR_ST is not set
-# CONFIG_WLAN_VENDOR_TI is not set
-# CONFIG_WLAN_VENDOR_ZYDAS is not set
-# CONFIG_WLAN_VENDOR_QUANTENNA is not set
-# CONFIG_MAC80211_HWSIM is not set
-# CONFIG_USB_NET_RNDIS_WLAN is not set
-
-#
-# Enable WiMAX (Networking options) to see the WiMAX drivers
-#
-CONFIG_WAN=y
-CONFIG_HDLC=m
-CONFIG_HDLC_RAW=m
-# CONFIG_HDLC_RAW_ETH is not set
-CONFIG_HDLC_CISCO=m
-CONFIG_HDLC_FR=m
-CONFIG_HDLC_PPP=m
-
-#
-# X.25/LAPB support is disabled
-#
-# CONFIG_PCI200SYN is not set
-# CONFIG_WANXL is not set
-# CONFIG_PC300TOO is not set
-# CONFIG_FARSYNC is not set
-# CONFIG_DSCC4 is not set
-CONFIG_DLCI=m
-CONFIG_DLCI_MAX=8
-# CONFIG_IEEE802154_DRIVERS is not set
-# CONFIG_VMXNET3 is not set
-# CONFIG_FUJITSU_ES is not set
-# CONFIG_NETDEVSIM is not set
-CONFIG_NET_FAILOVER=m
-# CONFIG_ISDN is not set
-# CONFIG_NVM is not set
-
-#
-# Input device support
-#
-CONFIG_INPUT=y
-CONFIG_INPUT_LEDS=y
-CONFIG_INPUT_FF_MEMLESS=m
-CONFIG_INPUT_POLLDEV=m
-CONFIG_INPUT_SPARSEKMAP=m
-# CONFIG_INPUT_MATRIXKMAP is not set
-
-#
-# Userland interfaces
-#
-CONFIG_INPUT_MOUSEDEV=y
-# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
-CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
-CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
-# CONFIG_INPUT_JOYDEV is not set
-CONFIG_INPUT_EVDEV=y
-# CONFIG_INPUT_EVBUG is not set
-
-#
-# Input Device Drivers
-#
-CONFIG_INPUT_KEYBOARD=y
-# CONFIG_KEYBOARD_ADP5588 is not set
-# CONFIG_KEYBOARD_ADP5589 is not set
-CONFIG_KEYBOARD_ATKBD=y
-# CONFIG_KEYBOARD_QT1070 is not set
-# CONFIG_KEYBOARD_QT2160 is not set
-# CONFIG_KEYBOARD_DLINK_DIR685 is not set
-# CONFIG_KEYBOARD_LKKBD is not set
-# CONFIG_KEYBOARD_GPIO is not set
-# CONFIG_KEYBOARD_GPIO_POLLED is not set
-# CONFIG_KEYBOARD_TCA6416 is not set
-# CONFIG_KEYBOARD_TCA8418 is not set
-# CONFIG_KEYBOARD_MATRIX is not set
-# CONFIG_KEYBOARD_LM8323 is not set
-# CONFIG_KEYBOARD_LM8333 is not set
-# CONFIG_KEYBOARD_MAX7359 is not set
-# CONFIG_KEYBOARD_MCS is not set
-# CONFIG_KEYBOARD_MPR121 is not set
-# CONFIG_KEYBOARD_NEWTON is not set
-# CONFIG_KEYBOARD_OPENCORES is not set
-# CONFIG_KEYBOARD_SAMSUNG is not set
-# CONFIG_KEYBOARD_STOWAWAY is not set
-# CONFIG_KEYBOARD_SUNKBD is not set
-# CONFIG_KEYBOARD_OMAP4 is not set
-# CONFIG_KEYBOARD_TM2_TOUCHKEY is not set
-# CONFIG_KEYBOARD_XTKBD is not set
-# CONFIG_KEYBOARD_CAP11XX is not set
-# CONFIG_KEYBOARD_BCM is not set
-CONFIG_INPUT_MOUSE=y
-CONFIG_MOUSE_PS2=y
-CONFIG_MOUSE_PS2_ALPS=y
-CONFIG_MOUSE_PS2_BYD=y
-CONFIG_MOUSE_PS2_LOGIPS2PP=y
-CONFIG_MOUSE_PS2_SYNAPTICS=y
-CONFIG_MOUSE_PS2_CYPRESS=y
-CONFIG_MOUSE_PS2_TRACKPOINT=y
-CONFIG_MOUSE_PS2_ELANTECH=y
-CONFIG_MOUSE_PS2_SENTELIC=y
-# CONFIG_MOUSE_PS2_TOUCHKIT is not set
-CONFIG_MOUSE_PS2_FOCALTECH=y
-CONFIG_MOUSE_SERIAL=m
-CONFIG_MOUSE_APPLETOUCH=m
-CONFIG_MOUSE_BCM5974=m
-CONFIG_MOUSE_CYAPA=m
-# CONFIG_MOUSE_ELAN_I2C is not set
-CONFIG_MOUSE_VSXXXAA=m
-# CONFIG_MOUSE_GPIO is not set
-CONFIG_MOUSE_SYNAPTICS_I2C=m
-CONFIG_MOUSE_SYNAPTICS_USB=m
-# CONFIG_INPUT_JOYSTICK is not set
-# CONFIG_INPUT_TABLET is not set
-# CONFIG_INPUT_TOUCHSCREEN is not set
-# CONFIG_INPUT_MISC is not set
-CONFIG_RMI4_CORE=m
-CONFIG_RMI4_I2C=m
-CONFIG_RMI4_SPI=m
-CONFIG_RMI4_SMB=m
-CONFIG_RMI4_F03=y
-CONFIG_RMI4_F03_SERIO=m
-CONFIG_RMI4_2D_SENSOR=y
-CONFIG_RMI4_F11=y
-CONFIG_RMI4_F12=y
-CONFIG_RMI4_F30=y
-# CONFIG_RMI4_F34 is not set
-# CONFIG_RMI4_F55 is not set
-
-#
-# Hardware I/O ports
-#
-CONFIG_SERIO=y
-CONFIG_SERIO_SERPORT=y
-CONFIG_SERIO_AMBAKMI=y
-# CONFIG_SERIO_PCIPS2 is not set
-CONFIG_SERIO_LIBPS2=y
-CONFIG_SERIO_RAW=m
-CONFIG_SERIO_ALTERA_PS2=m
-# CONFIG_SERIO_PS2MULT is not set
-CONFIG_SERIO_ARC_PS2=m
-# CONFIG_SERIO_APBPS2 is not set
-# CONFIG_SERIO_GPIO_PS2 is not set
-# CONFIG_USERIO is not set
-# CONFIG_GAMEPORT is not set
-
-#
-# Character devices
-#
-CONFIG_TTY=y
-CONFIG_VT=y
-CONFIG_CONSOLE_TRANSLATIONS=y
-CONFIG_VT_CONSOLE=y
-CONFIG_VT_CONSOLE_SLEEP=y
-CONFIG_HW_CONSOLE=y
-CONFIG_VT_HW_CONSOLE_BINDING=y
-CONFIG_UNIX98_PTYS=y
-# CONFIG_LEGACY_PTYS is not set
-CONFIG_SERIAL_NONSTANDARD=y
-# CONFIG_ROCKETPORT is not set
-CONFIG_CYCLADES=m
-# CONFIG_CYZ_INTR is not set
-# CONFIG_MOXA_INTELLIO is not set
-# CONFIG_MOXA_SMARTIO is not set
-CONFIG_SYNCLINKMP=m
-CONFIG_SYNCLINK_GT=m
-# CONFIG_NOZOMI is not set
-# CONFIG_ISI is not set
-CONFIG_N_HDLC=m
-CONFIG_N_GSM=m
-# CONFIG_TRACE_SINK is not set
-CONFIG_DEVMEM=y
-
-#
-# Serial drivers
-#
-CONFIG_SERIAL_EARLYCON=y
-CONFIG_SERIAL_8250=y
-# CONFIG_SERIAL_8250_DEPRECATED_OPTIONS is not set
-CONFIG_SERIAL_8250_PNP=y
-# CONFIG_SERIAL_8250_FINTEK is not set
-CONFIG_SERIAL_8250_CONSOLE=y
-CONFIG_SERIAL_8250_DMA=y
-CONFIG_SERIAL_8250_PCI=y
-CONFIG_SERIAL_8250_EXAR=y
-CONFIG_SERIAL_8250_NR_UARTS=32
-CONFIG_SERIAL_8250_RUNTIME_UARTS=4
-CONFIG_SERIAL_8250_EXTENDED=y
-CONFIG_SERIAL_8250_MANY_PORTS=y
-# CONFIG_SERIAL_8250_ASPEED_VUART is not set
-CONFIG_SERIAL_8250_SHARE_IRQ=y
-# CONFIG_SERIAL_8250_DETECT_IRQ is not set
-CONFIG_SERIAL_8250_RSA=y
-CONFIG_SERIAL_8250_FSL=y
-CONFIG_SERIAL_8250_DW=y
-CONFIG_SERIAL_8250_RT288X=y
-# CONFIG_SERIAL_8250_MOXA is not set
-CONFIG_SERIAL_OF_PLATFORM=y
-
-#
-# Non-8250 serial port support
-#
-# CONFIG_SERIAL_AMBA_PL010 is not set
-CONFIG_SERIAL_AMBA_PL011=y
-CONFIG_SERIAL_AMBA_PL011_CONSOLE=y
-CONFIG_SERIAL_EARLYCON_ARM_SEMIHOST=y
-# CONFIG_SERIAL_KGDB_NMI is not set
-# CONFIG_SERIAL_MAX3100 is not set
-# CONFIG_SERIAL_MAX310X is not set
-# CONFIG_SERIAL_UARTLITE is not set
-CONFIG_SERIAL_CORE=y
-CONFIG_SERIAL_CORE_CONSOLE=y
-CONFIG_CONSOLE_POLL=y
-# CONFIG_SERIAL_JSM is not set
-# CONFIG_SERIAL_MSM is not set
-# CONFIG_SERIAL_SCCNXP is not set
-# CONFIG_SERIAL_SC16IS7XX is not set
-# CONFIG_SERIAL_ALTERA_JTAGUART is not set
-# CONFIG_SERIAL_ALTERA_UART is not set
-# CONFIG_SERIAL_IFX6X60 is not set
-# CONFIG_SERIAL_XILINX_PS_UART is not set
-# CONFIG_SERIAL_ARC is not set
-# CONFIG_SERIAL_RP2 is not set
-# CONFIG_SERIAL_FSL_LPUART is not set
-# CONFIG_SERIAL_CONEXANT_DIGICOLOR is not set
-# CONFIG_SERIAL_DEV_BUS is not set
-# CONFIG_TTY_PRINTK is not set
-CONFIG_HVC_DRIVER=y
-# CONFIG_HVC_DCC is not set
-CONFIG_VIRTIO_CONSOLE=m
-CONFIG_IPMI_HANDLER=m
-CONFIG_IPMI_DMI_DECODE=y
-# CONFIG_IPMI_PANIC_EVENT is not set
-CONFIG_IPMI_DEVICE_INTERFACE=m
-CONFIG_IPMI_SI=m
-CONFIG_IPMI_SSIF=m
-CONFIG_IPMI_WATCHDOG=m
-CONFIG_IPMI_POWEROFF=m
-CONFIG_HW_RANDOM=y
-CONFIG_HW_RANDOM_TIMERIOMEM=m
-CONFIG_HW_RANDOM_VIRTIO=m
-CONFIG_HW_RANDOM_HISI=y
-CONFIG_HW_RANDOM_XGENE=y
-CONFIG_HW_RANDOM_CAVIUM=y
-# CONFIG_R3964 is not set
-# CONFIG_APPLICOM is not set
-
-#
-# PCMCIA character devices
-#
-CONFIG_RAW_DRIVER=y
-CONFIG_MAX_RAW_DEVS=8192
-CONFIG_TCG_TPM=m
-CONFIG_HW_RANDOM_TPM=y
-CONFIG_TCG_TIS_CORE=m
-# CONFIG_TCG_TIS is not set
-CONFIG_TCG_TIS_SPI=m
-# CONFIG_TCG_TIS_I2C_ATMEL is not set
-# CONFIG_TCG_TIS_I2C_INFINEON is not set
-# CONFIG_TCG_TIS_I2C_NUVOTON is not set
-CONFIG_TCG_ATMEL=m
-# CONFIG_TCG_INFINEON is not set
-CONFIG_TCG_CRB=m
-# CONFIG_TCG_VTPM_PROXY is not set
-# CONFIG_TCG_TIS_ST33ZP24_I2C is not set
-# CONFIG_TCG_TIS_ST33ZP24_SPI is not set
-# CONFIG_DEVPORT is not set
-# CONFIG_XILLYBUS is not set
-
-#
-# I2C support
-#
-CONFIG_I2C=m
-CONFIG_I2C_BOARDINFO=y
-CONFIG_I2C_COMPAT=y
-CONFIG_I2C_CHARDEV=m
-CONFIG_I2C_MUX=m
-
-#
-# Multiplexer I2C Chip support
-#
-CONFIG_I2C_ARB_GPIO_CHALLENGE=m
-CONFIG_I2C_MUX_GPIO=m
-# CONFIG_I2C_MUX_GPMUX is not set
-# CONFIG_I2C_MUX_LTC4306 is not set
-CONFIG_I2C_MUX_PCA9541=m
-CONFIG_I2C_MUX_PCA954x=m
-CONFIG_I2C_MUX_PINCTRL=m
-# CONFIG_I2C_MUX_REG is not set
-# CONFIG_I2C_DEMUX_PINCTRL is not set
-CONFIG_I2C_MUX_MLXCPLD=m
-# CONFIG_I2C_HELPER_AUTO is not set
-CONFIG_I2C_SMBUS=m
-
-#
-# I2C Algorithms
-#
-CONFIG_I2C_ALGOBIT=m
-# CONFIG_I2C_ALGOPCF is not set
-CONFIG_I2C_ALGOPCA=m
-
-#
-# I2C Hardware Bus support
-#
-
-#
-# PC SMBus host controller drivers
-#
-# CONFIG_I2C_ALI1535 is not set
-# CONFIG_I2C_ALI1563 is not set
-# CONFIG_I2C_ALI15X3 is not set
-# CONFIG_I2C_AMD756 is not set
-# CONFIG_I2C_AMD8111 is not set
-# CONFIG_I2C_HIX5HD2 is not set
-# CONFIG_I2C_I801 is not set
-# CONFIG_I2C_ISCH is not set
-# CONFIG_I2C_PIIX4 is not set
-CONFIG_I2C_NFORCE2=m
-# CONFIG_I2C_SIS5595 is not set
-# CONFIG_I2C_SIS630 is not set
-# CONFIG_I2C_SIS96X is not set
-# CONFIG_I2C_VIA is not set
-# CONFIG_I2C_VIAPRO is not set
-
-#
-# ACPI drivers
-#
-# CONFIG_I2C_SCMI is not set
-
-#
-# I2C system bus drivers (mostly embedded / system-on-chip)
-#
-# CONFIG_I2C_CADENCE is not set
-# CONFIG_I2C_CBUS_GPIO is not set
-CONFIG_I2C_DESIGNWARE_CORE=m
-CONFIG_I2C_DESIGNWARE_PLATFORM=m
-# CONFIG_I2C_DESIGNWARE_SLAVE is not set
-# CONFIG_I2C_DESIGNWARE_PCI is not set
-# CONFIG_I2C_EMEV2 is not set
-CONFIG_I2C_GPIO=m
-# CONFIG_I2C_GPIO_FAULT_INJECTOR is not set
-# CONFIG_I2C_NOMADIK is not set
-# CONFIG_I2C_OCORES is not set
-CONFIG_I2C_PCA_PLATFORM=m
-CONFIG_I2C_QUP=m
-# CONFIG_I2C_RK3X is not set
-CONFIG_I2C_SIMTEC=m
-CONFIG_I2C_VERSATILE=m
-CONFIG_I2C_THUNDERX=m
-# CONFIG_I2C_XILINX is not set
-CONFIG_I2C_XLP9XX=m
-
-#
-# External I2C/SMBus adapter drivers
-#
-CONFIG_I2C_DIOLAN_U2C=m
-CONFIG_I2C_PARPORT_LIGHT=m
-# CONFIG_I2C_ROBOTFUZZ_OSIF is not set
-# CONFIG_I2C_TAOS_EVM is not set
-CONFIG_I2C_TINY_USB=m
-
-#
-# Other I2C/SMBus bus drivers
-#
-CONFIG_I2C_XGENE_SLIMPRO=m
-CONFIG_I2C_STUB=m
-CONFIG_I2C_SLAVE=y
-CONFIG_I2C_SLAVE_EEPROM=m
-# CONFIG_I2C_DEBUG_CORE is not set
-# CONFIG_I2C_DEBUG_ALGO is not set
-# CONFIG_I2C_DEBUG_BUS is not set
-CONFIG_SPI=y
-# CONFIG_SPI_DEBUG is not set
-CONFIG_SPI_MASTER=y
-# CONFIG_SPI_MEM is not set
-
-#
-# SPI Master Controller Drivers
-#
-# CONFIG_SPI_ALTERA is not set
-# CONFIG_SPI_AXI_SPI_ENGINE is not set
-# CONFIG_SPI_BITBANG is not set
-CONFIG_SPI_CADENCE=m
-CONFIG_SPI_DESIGNWARE=y
-CONFIG_SPI_DW_PCI=m
-# CONFIG_SPI_DW_MID_DMA is not set
-CONFIG_SPI_DW_MMIO=m
-# CONFIG_SPI_GPIO is not set
-# CONFIG_SPI_FSL_SPI is not set
-# CONFIG_SPI_OC_TINY is not set
-CONFIG_SPI_PL022=y
-# CONFIG_SPI_PXA2XX is not set
-# CONFIG_SPI_ROCKCHIP is not set
-CONFIG_SPI_QUP=y
-# CONFIG_SPI_SC18IS602 is not set
-# CONFIG_SPI_THUNDERX is not set
-# CONFIG_SPI_XCOMM is not set
-# CONFIG_SPI_XILINX is not set
-CONFIG_SPI_XLP=m
-# CONFIG_SPI_ZYNQMP_GQSPI is not set
-
-#
-# SPI Protocol Masters
-#
-# CONFIG_SPI_SPIDEV is not set
-# CONFIG_SPI_LOOPBACK_TEST is not set
-# CONFIG_SPI_TLE62X0 is not set
-# CONFIG_SPI_SLAVE is not set
-# CONFIG_SPMI is not set
-# CONFIG_HSI is not set
-CONFIG_PPS=y
-# CONFIG_PPS_DEBUG is not set
-
-#
-# PPS clients support
-#
-# CONFIG_PPS_CLIENT_KTIMER is not set
-CONFIG_PPS_CLIENT_LDISC=m
-CONFIG_PPS_CLIENT_GPIO=m
-
-#
-# PPS generators support
-#
-
-#
-# PTP clock support
-#
-CONFIG_PTP_1588_CLOCK=y
-CONFIG_DP83640_PHY=m
-CONFIG_PINCTRL=y
-CONFIG_PINMUX=y
-CONFIG_PINCONF=y
-CONFIG_GENERIC_PINCONF=y
-# CONFIG_DEBUG_PINCTRL is not set
-# CONFIG_PINCTRL_AMD is not set
-# CONFIG_PINCTRL_MCP23S08 is not set
-# CONFIG_PINCTRL_SINGLE is not set
-CONFIG_PINCTRL_MSM=y
-# CONFIG_PINCTRL_APQ8064 is not set
-# CONFIG_PINCTRL_APQ8084 is not set
-# CONFIG_PINCTRL_IPQ4019 is not set
-# CONFIG_PINCTRL_IPQ8064 is not set
-# CONFIG_PINCTRL_IPQ8074 is not set
-# CONFIG_PINCTRL_MSM8660 is not set
-# CONFIG_PINCTRL_MSM8960 is not set
-# CONFIG_PINCTRL_MDM9615 is not set
-# CONFIG_PINCTRL_MSM8X74 is not set
-# CONFIG_PINCTRL_MSM8916 is not set
-# CONFIG_PINCTRL_MSM8994 is not set
-# CONFIG_PINCTRL_MSM8996 is not set
-# CONFIG_PINCTRL_MSM8998 is not set
-CONFIG_PINCTRL_QDF2XXX=y
-# CONFIG_PINCTRL_QCOM_SSBI_PMIC is not set
-# CONFIG_PINCTRL_SDM845 is not set
-CONFIG_GPIOLIB=y
-CONFIG_GPIOLIB_FASTPATH_LIMIT=512
-CONFIG_OF_GPIO=y
-CONFIG_GPIO_ACPI=y
-CONFIG_GPIOLIB_IRQCHIP=y
-# CONFIG_DEBUG_GPIO is not set
-CONFIG_GPIO_SYSFS=y
-CONFIG_GPIO_GENERIC=m
-
-#
-# Memory mapped GPIO drivers
-#
-# CONFIG_GPIO_74XX_MMIO is not set
-# CONFIG_GPIO_ALTERA is not set
-CONFIG_GPIO_AMDPT=m
-# CONFIG_GPIO_DWAPB is not set
-# CONFIG_GPIO_EXAR is not set
-# CONFIG_GPIO_FTGPIO010 is not set
-CONFIG_GPIO_GENERIC_PLATFORM=m
-# CONFIG_GPIO_GRGPIO is not set
-# CONFIG_GPIO_HLWD is not set
-# CONFIG_GPIO_MB86S7X is not set
-# CONFIG_GPIO_MOCKUP is not set
-CONFIG_GPIO_PL061=y
-# CONFIG_GPIO_SYSCON is not set
-# CONFIG_GPIO_THUNDERX is not set
-CONFIG_GPIO_XGENE=y
-CONFIG_GPIO_XGENE_SB=m
-# CONFIG_GPIO_XILINX is not set
-CONFIG_GPIO_XLP=m
-
-#
-# I2C GPIO expanders
-#
-# CONFIG_GPIO_ADP5588 is not set
-# CONFIG_GPIO_ADNP is not set
-# CONFIG_GPIO_MAX7300 is not set
-# CONFIG_GPIO_MAX732X is not set
-# CONFIG_GPIO_PCA953X is not set
-# CONFIG_GPIO_PCF857X is not set
-# CONFIG_GPIO_TPIC2810 is not set
-
-#
-# MFD GPIO expanders
-#
-
-#
-# PCI GPIO expanders
-#
-# CONFIG_GPIO_BT8XX is not set
-# CONFIG_GPIO_PCI_IDIO_16 is not set
-# CONFIG_GPIO_PCIE_IDIO_24 is not set
-# CONFIG_GPIO_RDC321X is not set
-
-#
-# SPI GPIO expanders
-#
-# CONFIG_GPIO_74X164 is not set
-# CONFIG_GPIO_MAX3191X is not set
-# CONFIG_GPIO_MAX7301 is not set
-# CONFIG_GPIO_MC33880 is not set
-# CONFIG_GPIO_PISOSR is not set
-# CONFIG_GPIO_XRA1403 is not set
-
-#
-# USB GPIO expanders
-#
-# CONFIG_W1 is not set
-# CONFIG_POWER_AVS is not set
-CONFIG_POWER_RESET=y
-# CONFIG_POWER_RESET_BRCMSTB is not set
-CONFIG_POWER_RESET_GPIO=y
-CONFIG_POWER_RESET_GPIO_RESTART=y
-CONFIG_POWER_RESET_HISI=y
-# CONFIG_POWER_RESET_MSM is not set
-# CONFIG_POWER_RESET_LTC2952 is not set
-CONFIG_POWER_RESET_RESTART=y
-CONFIG_POWER_RESET_VEXPRESS=y
-# CONFIG_POWER_RESET_XGENE is not set
-CONFIG_POWER_RESET_SYSCON=y
-# CONFIG_POWER_RESET_SYSCON_POWEROFF is not set
-# CONFIG_SYSCON_REBOOT_MODE is not set
-CONFIG_POWER_SUPPLY=y
-# CONFIG_POWER_SUPPLY_DEBUG is not set
-# CONFIG_PDA_POWER is not set
-# CONFIG_TEST_POWER is not set
-# CONFIG_CHARGER_ADP5061 is not set
-# CONFIG_BATTERY_DS2780 is not set
-# CONFIG_BATTERY_DS2781 is not set
-# CONFIG_BATTERY_DS2782 is not set
-# CONFIG_BATTERY_SBS is not set
-# CONFIG_CHARGER_SBS is not set
-# CONFIG_MANAGER_SBS is not set
-# CONFIG_BATTERY_BQ27XXX is not set
-# CONFIG_BATTERY_MAX17040 is not set
-# CONFIG_BATTERY_MAX17042 is not set
-# CONFIG_CHARGER_MAX8903 is not set
-# CONFIG_CHARGER_LP8727 is not set
-# CONFIG_CHARGER_GPIO is not set
-# CONFIG_CHARGER_LTC3651 is not set
-# CONFIG_CHARGER_DETECTOR_MAX14656 is not set
-# CONFIG_CHARGER_BQ2415X is not set
-# CONFIG_CHARGER_BQ24190 is not set
-# CONFIG_CHARGER_BQ24257 is not set
-# CONFIG_CHARGER_BQ24735 is not set
-# CONFIG_CHARGER_BQ25890 is not set
-CONFIG_CHARGER_SMB347=m
-# CONFIG_BATTERY_GAUGE_LTC2941 is not set
-# CONFIG_CHARGER_RT9455 is not set
-CONFIG_HWMON=y
-CONFIG_HWMON_VID=m
-# CONFIG_HWMON_DEBUG_CHIP is not set
-
-#
-# Native drivers
-#
-CONFIG_SENSORS_AD7314=m
-CONFIG_SENSORS_AD7414=m
-CONFIG_SENSORS_AD7418=m
-CONFIG_SENSORS_ADM1021=m
-CONFIG_SENSORS_ADM1025=m
-CONFIG_SENSORS_ADM1026=m
-CONFIG_SENSORS_ADM1029=m
-CONFIG_SENSORS_ADM1031=m
-CONFIG_SENSORS_ADM9240=m
-CONFIG_SENSORS_ADT7X10=m
-CONFIG_SENSORS_ADT7310=m
-CONFIG_SENSORS_ADT7410=m
-CONFIG_SENSORS_ADT7411=m
-CONFIG_SENSORS_ADT7462=m
-CONFIG_SENSORS_ADT7470=m
-CONFIG_SENSORS_ADT7475=m
-CONFIG_SENSORS_ASC7621=m
-CONFIG_SENSORS_ARM_SCPI=m
-# CONFIG_SENSORS_ASPEED is not set
-CONFIG_SENSORS_ATXP1=m
-CONFIG_SENSORS_DS620=m
-CONFIG_SENSORS_DS1621=m
-# CONFIG_SENSORS_I5K_AMB is not set
-CONFIG_SENSORS_F71805F=m
-CONFIG_SENSORS_F71882FG=m
-CONFIG_SENSORS_F75375S=m
-# CONFIG_SENSORS_FTSTEUTATES is not set
-CONFIG_SENSORS_GL518SM=m
-CONFIG_SENSORS_GL520SM=m
-CONFIG_SENSORS_G760A=m
-CONFIG_SENSORS_G762=m
-# CONFIG_SENSORS_GPIO_FAN is not set
-# CONFIG_SENSORS_HIH6130 is not set
-CONFIG_SENSORS_IBMAEM=m
-CONFIG_SENSORS_IBMPEX=m
-CONFIG_SENSORS_IT87=m
-CONFIG_SENSORS_JC42=m
-CONFIG_SENSORS_POWR1220=m
-CONFIG_SENSORS_LINEAGE=m
-CONFIG_SENSORS_LTC2945=m
-# CONFIG_SENSORS_LTC2990 is not set
-CONFIG_SENSORS_LTC4151=m
-CONFIG_SENSORS_LTC4215=m
-CONFIG_SENSORS_LTC4222=m
-CONFIG_SENSORS_LTC4245=m
-CONFIG_SENSORS_LTC4260=m
-CONFIG_SENSORS_LTC4261=m
-CONFIG_SENSORS_MAX1111=m
-CONFIG_SENSORS_MAX16065=m
-CONFIG_SENSORS_MAX1619=m
-CONFIG_SENSORS_MAX1668=m
-CONFIG_SENSORS_MAX197=m
-# CONFIG_SENSORS_MAX31722 is not set
-# CONFIG_SENSORS_MAX6621 is not set
-CONFIG_SENSORS_MAX6639=m
-CONFIG_SENSORS_MAX6642=m
-CONFIG_SENSORS_MAX6650=m
-CONFIG_SENSORS_MAX6697=m
-CONFIG_SENSORS_MAX31790=m
-CONFIG_SENSORS_MCP3021=m
-# CONFIG_SENSORS_TC654 is not set
-CONFIG_SENSORS_ADCXX=m
-CONFIG_SENSORS_LM63=m
-CONFIG_SENSORS_LM70=m
-CONFIG_SENSORS_LM73=m
-CONFIG_SENSORS_LM75=m
-CONFIG_SENSORS_LM77=m
-CONFIG_SENSORS_LM78=m
-CONFIG_SENSORS_LM80=m
-CONFIG_SENSORS_LM83=m
-CONFIG_SENSORS_LM85=m
-CONFIG_SENSORS_LM87=m
-CONFIG_SENSORS_LM90=m
-CONFIG_SENSORS_LM92=m
-CONFIG_SENSORS_LM93=m
-CONFIG_SENSORS_LM95234=m
-CONFIG_SENSORS_LM95241=m
-CONFIG_SENSORS_LM95245=m
-CONFIG_SENSORS_PC87360=m
-CONFIG_SENSORS_PC87427=m
-CONFIG_SENSORS_NTC_THERMISTOR=m
-CONFIG_SENSORS_NCT6683=m
-CONFIG_SENSORS_NCT6775=m
-CONFIG_SENSORS_NCT7802=m
-CONFIG_SENSORS_NCT7904=m
-# CONFIG_SENSORS_NPCM7XX is not set
-CONFIG_SENSORS_PCF8591=m
-CONFIG_PMBUS=m
-CONFIG_SENSORS_PMBUS=m
-CONFIG_SENSORS_ADM1275=m
-# CONFIG_SENSORS_IBM_CFFPS is not set
-# CONFIG_SENSORS_IR35221 is not set
-CONFIG_SENSORS_LM25066=m
-CONFIG_SENSORS_LTC2978=m
-CONFIG_SENSORS_LTC3815=m
-CONFIG_SENSORS_MAX16064=m
-CONFIG_SENSORS_MAX20751=m
-# CONFIG_SENSORS_MAX31785 is not set
-CONFIG_SENSORS_MAX34440=m
-CONFIG_SENSORS_MAX8688=m
-CONFIG_SENSORS_TPS40422=m
-# CONFIG_SENSORS_TPS53679 is not set
-CONFIG_SENSORS_UCD9000=m
-CONFIG_SENSORS_UCD9200=m
-CONFIG_SENSORS_ZL6100=m
-CONFIG_SENSORS_PWM_FAN=m
-CONFIG_SENSORS_SHT15=m
-CONFIG_SENSORS_SHT21=m
-# CONFIG_SENSORS_SHT3x is not set
-CONFIG_SENSORS_SHTC1=m
-CONFIG_SENSORS_SIS5595=m
-CONFIG_SENSORS_DME1737=m
-CONFIG_SENSORS_EMC1403=m
-# CONFIG_SENSORS_EMC2103 is not set
-CONFIG_SENSORS_EMC6W201=m
-CONFIG_SENSORS_SMSC47M1=m
-CONFIG_SENSORS_SMSC47M192=m
-CONFIG_SENSORS_SMSC47B397=m
-CONFIG_SENSORS_SCH56XX_COMMON=m
-CONFIG_SENSORS_SCH5627=m
-CONFIG_SENSORS_SCH5636=m
-# CONFIG_SENSORS_STTS751 is not set
-# CONFIG_SENSORS_SMM665 is not set
-CONFIG_SENSORS_ADC128D818=m
-CONFIG_SENSORS_ADS1015=m
-CONFIG_SENSORS_ADS7828=m
-CONFIG_SENSORS_ADS7871=m
-CONFIG_SENSORS_AMC6821=m
-CONFIG_SENSORS_INA209=m
-CONFIG_SENSORS_INA2XX=m
-# CONFIG_SENSORS_INA3221 is not set
-CONFIG_SENSORS_TC74=m
-CONFIG_SENSORS_THMC50=m
-CONFIG_SENSORS_TMP102=m
-CONFIG_SENSORS_TMP103=m
-# CONFIG_SENSORS_TMP108 is not set
-CONFIG_SENSORS_TMP401=m
-CONFIG_SENSORS_TMP421=m
-CONFIG_SENSORS_VEXPRESS=m
-CONFIG_SENSORS_VIA686A=m
-CONFIG_SENSORS_VT1211=m
-CONFIG_SENSORS_VT8231=m
-# CONFIG_SENSORS_W83773G is not set
-CONFIG_SENSORS_W83781D=m
-CONFIG_SENSORS_W83791D=m
-CONFIG_SENSORS_W83792D=m
-CONFIG_SENSORS_W83793=m
-CONFIG_SENSORS_W83795=m
-# CONFIG_SENSORS_W83795_FANCTRL is not set
-CONFIG_SENSORS_W83L785TS=m
-CONFIG_SENSORS_W83L786NG=m
-CONFIG_SENSORS_W83627HF=m
-CONFIG_SENSORS_W83627EHF=m
-CONFIG_SENSORS_XGENE=m
-
-#
-# ACPI drivers
-#
-CONFIG_SENSORS_ACPI_POWER=m
-CONFIG_THERMAL=y
-# CONFIG_THERMAL_STATISTICS is not set
-CONFIG_THERMAL_EMERGENCY_POWEROFF_DELAY_MS=0
-CONFIG_THERMAL_HWMON=y
-CONFIG_THERMAL_OF=y
-# CONFIG_THERMAL_WRITABLE_TRIPS is not set
-CONFIG_THERMAL_DEFAULT_GOV_STEP_WISE=y
-# CONFIG_THERMAL_DEFAULT_GOV_FAIR_SHARE is not set
-# CONFIG_THERMAL_DEFAULT_GOV_USER_SPACE is not set
-# CONFIG_THERMAL_DEFAULT_GOV_POWER_ALLOCATOR is not set
-CONFIG_THERMAL_GOV_FAIR_SHARE=y
-CONFIG_THERMAL_GOV_STEP_WISE=y
-# CONFIG_THERMAL_GOV_BANG_BANG is not set
-CONFIG_THERMAL_GOV_USER_SPACE=y
-# CONFIG_THERMAL_GOV_POWER_ALLOCATOR is not set
-CONFIG_CPU_THERMAL=y
-# CONFIG_THERMAL_EMULATION is not set
-CONFIG_HISI_THERMAL=y
-# CONFIG_QORIQ_THERMAL is not set
-
-#
-# ACPI INT340X thermal drivers
-#
-
-#
-# Qualcomm thermal drivers
-#
-CONFIG_WATCHDOG=y
-CONFIG_WATCHDOG_CORE=y
-# CONFIG_WATCHDOG_NOWAYOUT is not set
-CONFIG_WATCHDOG_HANDLE_BOOT_ENABLED=y
-CONFIG_WATCHDOG_SYSFS=y
-
-#
-# Watchdog Device Drivers
-#
-CONFIG_SOFT_WATCHDOG=m
-CONFIG_GPIO_WATCHDOG=m
-# CONFIG_WDAT_WDT is not set
-# CONFIG_XILINX_WATCHDOG is not set
-# CONFIG_ZIIRAVE_WATCHDOG is not set
-CONFIG_ARM_SP805_WATCHDOG=m
-CONFIG_ARM_SBSA_WATCHDOG=m
-# CONFIG_CADENCE_WATCHDOG is not set
-# CONFIG_DW_WATCHDOG is not set
-# CONFIG_MAX63XX_WATCHDOG is not set
-# CONFIG_QCOM_WDT is not set
-CONFIG_ALIM7101_WDT=m
-CONFIG_I6300ESB_WDT=m
-# CONFIG_MEN_A21_WDT is not set
-
-#
-# PCI-based Watchdog Cards
-#
-CONFIG_PCIPCWATCHDOG=m
-CONFIG_WDTPCI=m
-
-#
-# USB-based Watchdog Cards
-#
-CONFIG_USBPCWATCHDOG=m
-
-#
-# Watchdog Pretimeout Governors
-#
-# CONFIG_WATCHDOG_PRETIMEOUT_GOV is not set
-CONFIG_SSB_POSSIBLE=y
-# CONFIG_SSB is not set
-CONFIG_BCMA_POSSIBLE=y
-CONFIG_BCMA=m
-CONFIG_BCMA_HOST_PCI_POSSIBLE=y
-CONFIG_BCMA_HOST_PCI=y
-# CONFIG_BCMA_HOST_SOC is not set
-CONFIG_BCMA_DRIVER_PCI=y
-CONFIG_BCMA_DRIVER_GMAC_CMN=y
-CONFIG_BCMA_DRIVER_GPIO=y
-# CONFIG_BCMA_DEBUG is not set
-
-#
-# Multifunction device drivers
-#
-CONFIG_MFD_CORE=m
-# CONFIG_MFD_ACT8945A is not set
-# CONFIG_MFD_ATMEL_FLEXCOM is not set
-# CONFIG_MFD_ATMEL_HLCDC is not set
-# CONFIG_MFD_BCM590XX is not set
-# CONFIG_MFD_BD9571MWV is not set
-# CONFIG_MFD_AXP20X_I2C is not set
-# CONFIG_MFD_CROS_EC is not set
-# CONFIG_MFD_MADERA is not set
-# CONFIG_MFD_DA9052_SPI is not set
-# CONFIG_MFD_DA9062 is not set
-# CONFIG_MFD_DA9063 is not set
-# CONFIG_MFD_DA9150 is not set
-# CONFIG_MFD_DLN2 is not set
-# CONFIG_MFD_MC13XXX_SPI is not set
-# CONFIG_MFD_MC13XXX_I2C is not set
-# CONFIG_MFD_HI6421_PMIC is not set
-# CONFIG_MFD_HI655X_PMIC is not set
-# CONFIG_HTC_PASIC3 is not set
-# CONFIG_LPC_ICH is not set
-# CONFIG_LPC_SCH is not set
-# CONFIG_MFD_JANZ_CMODIO is not set
-# CONFIG_MFD_KEMPLD is not set
-# CONFIG_MFD_88PM800 is not set
-# CONFIG_MFD_88PM805 is not set
-# CONFIG_MFD_MAX14577 is not set
-# CONFIG_MFD_MAX77686 is not set
-# CONFIG_MFD_MAX77693 is not set
-# CONFIG_MFD_MAX8907 is not set
-# CONFIG_MFD_MT6397 is not set
-# CONFIG_MFD_MENF21BMC is not set
-# CONFIG_EZX_PCAP is not set
-# CONFIG_MFD_CPCAP is not set
-# CONFIG_MFD_VIPERBOARD is not set
-# CONFIG_MFD_RETU is not set
-# CONFIG_MFD_PCF50633 is not set
-# CONFIG_MFD_QCOM_RPM is not set
-# CONFIG_MFD_RDC321X is not set
-# CONFIG_MFD_RT5033 is not set
-# CONFIG_MFD_RK808 is not set
-# CONFIG_MFD_RN5T618 is not set
-# CONFIG_MFD_SI476X_CORE is not set
-# CONFIG_MFD_SM501 is not set
-# CONFIG_MFD_SKY81452 is not set
-# CONFIG_ABX500_CORE is not set
-# CONFIG_MFD_STMPE is not set
-CONFIG_MFD_SYSCON=y
-# CONFIG_MFD_TI_AM335X_TSCADC is not set
-# CONFIG_MFD_LP3943 is not set
-# CONFIG_MFD_TI_LMU is not set
-# CONFIG_TPS6105X is not set
-# CONFIG_TPS65010 is not set
-# CONFIG_TPS6507X is not set
-# CONFIG_MFD_TPS65086 is not set
-# CONFIG_MFD_TPS65217 is not set
-# CONFIG_MFD_TI_LP873X is not set
-# CONFIG_MFD_TI_LP87565 is not set
-# CONFIG_MFD_TPS65218 is not set
-# CONFIG_MFD_TPS65912_I2C is not set
-# CONFIG_MFD_TPS65912_SPI is not set
-# CONFIG_MFD_WL1273_CORE is not set
-# CONFIG_MFD_LM3533 is not set
-# CONFIG_MFD_VX855 is not set
-# CONFIG_MFD_ARIZONA_I2C is not set
-# CONFIG_MFD_ARIZONA_SPI is not set
-# CONFIG_MFD_WM831X_SPI is not set
-# CONFIG_MFD_WM8994 is not set
-# CONFIG_MFD_VEXPRESS_SYSREG is not set
-# CONFIG_REGULATOR is not set
-# CONFIG_RC_CORE is not set
-# CONFIG_MEDIA_SUPPORT is not set
-
-#
-# Graphics support
-#
-CONFIG_VGA_ARB=y
-CONFIG_VGA_ARB_MAX_GPUS=64
-CONFIG_DRM=m
-CONFIG_DRM_DP_AUX_CHARDEV=y
-# CONFIG_DRM_DEBUG_SELFTEST is not set
-CONFIG_DRM_KMS_HELPER=m
-CONFIG_DRM_KMS_FB_HELPER=y
-CONFIG_DRM_FBDEV_EMULATION=y
-CONFIG_DRM_FBDEV_OVERALLOC=100
-# CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM is not set
-CONFIG_DRM_LOAD_EDID_FIRMWARE=y
-# CONFIG_DRM_DP_CEC is not set
-CONFIG_DRM_TTM=m
-CONFIG_DRM_VM=y
-CONFIG_DRM_SCHED=m
-
-#
-# I2C encoder or helper chips
-#
-CONFIG_DRM_I2C_CH7006=m
-# CONFIG_DRM_I2C_SIL164 is not set
-CONFIG_DRM_I2C_NXP_TDA998X=m
-# CONFIG_DRM_I2C_NXP_TDA9950 is not set
-# CONFIG_DRM_HDLCD is not set
-# CONFIG_DRM_MALI_DISPLAY is not set
-CONFIG_DRM_RADEON=m
-CONFIG_DRM_RADEON_USERPTR=y
-CONFIG_DRM_AMDGPU=m
-# CONFIG_DRM_AMDGPU_SI is not set
-CONFIG_DRM_AMDGPU_CIK=y
-CONFIG_DRM_AMDGPU_USERPTR=y
-# CONFIG_DRM_AMDGPU_GART_DEBUGFS is not set
-
-#
-# ACP (Audio CoProcessor) Configuration
-#
-# CONFIG_DRM_AMD_ACP is not set
-
-#
-# Display Engine Configuration
-#
-CONFIG_DRM_AMD_DC=y
-# CONFIG_DEBUG_KERNEL_DC is not set
-
-#
-# AMD Library routines
-#
-CONFIG_CHASH=m
-# CONFIG_CHASH_STATS is not set
-# CONFIG_CHASH_SELFTEST is not set
-CONFIG_DRM_NOUVEAU=m
-CONFIG_NOUVEAU_DEBUG=5
-CONFIG_NOUVEAU_DEBUG_DEFAULT=3
-# CONFIG_NOUVEAU_DEBUG_MMU is not set
-CONFIG_DRM_NOUVEAU_BACKLIGHT=y
-# CONFIG_DRM_VGEM is not set
-# CONFIG_DRM_VKMS is not set
-CONFIG_DRM_UDL=m
-CONFIG_DRM_AST=m
-CONFIG_DRM_MGAG200=m
-CONFIG_DRM_CIRRUS_QEMU=m
-# CONFIG_DRM_RCAR_DW_HDMI is not set
-# CONFIG_DRM_RCAR_LVDS is not set
-CONFIG_DRM_QXL=m
-CONFIG_DRM_BOCHS=m
-CONFIG_DRM_VIRTIO_GPU=m
-# CONFIG_DRM_MSM is not set
-CONFIG_DRM_PANEL=y
-
-#
-# Display Panels
-#
-# CONFIG_DRM_PANEL_ARM_VERSATILE is not set
-# CONFIG_DRM_PANEL_LVDS is not set
-# CONFIG_DRM_PANEL_SIMPLE is not set
-# CONFIG_DRM_PANEL_ILITEK_IL9322 is not set
-# CONFIG_DRM_PANEL_SAMSUNG_LD9040 is not set
-# CONFIG_DRM_PANEL_LG_LG4573 is not set
-# CONFIG_DRM_PANEL_SAMSUNG_S6E8AA0 is not set
-# CONFIG_DRM_PANEL_SEIKO_43WVF1G is not set
-# CONFIG_DRM_PANEL_SITRONIX_ST7789V is not set
-CONFIG_DRM_BRIDGE=y
-CONFIG_DRM_PANEL_BRIDGE=y
-
-#
-# Display Interface Bridges
-#
-# CONFIG_DRM_ANALOGIX_ANX78XX is not set
-# CONFIG_DRM_CDNS_DSI is not set
-# CONFIG_DRM_DUMB_VGA_DAC is not set
-# CONFIG_DRM_LVDS_ENCODER is not set
-# CONFIG_DRM_MEGACHIPS_STDPXXXX_GE_B850V3_FW is not set
-# CONFIG_DRM_NXP_PTN3460 is not set
-# CONFIG_DRM_PARADE_PS8622 is not set
-# CONFIG_DRM_SIL_SII8620 is not set
-# CONFIG_DRM_SII902X is not set
-# CONFIG_DRM_SII9234 is not set
-# CONFIG_DRM_THINE_THC63LVD1024 is not set
-# CONFIG_DRM_TOSHIBA_TC358767 is not set
-# CONFIG_DRM_TI_TFP410 is not set
-# CONFIG_DRM_I2C_ADV7511 is not set
-# CONFIG_DRM_ARCPGU is not set
-CONFIG_DRM_HISI_HIBMC=m
-# CONFIG_DRM_HISI_KIRIN is not set
-# CONFIG_DRM_MXSFB is not set
-# CONFIG_DRM_TINYDRM is not set
-# CONFIG_DRM_PL111 is not set
-# CONFIG_DRM_LEGACY is not set
-CONFIG_DRM_PANEL_ORIENTATION_QUIRKS=y
-
-#
-# Frame buffer Devices
-#
-CONFIG_FB=y
-# CONFIG_FIRMWARE_EDID is not set
-CONFIG_FB_CMDLINE=y
-CONFIG_FB_NOTIFY=y
-CONFIG_FB_CFB_FILLRECT=y
-CONFIG_FB_CFB_COPYAREA=y
-CONFIG_FB_CFB_IMAGEBLIT=y
-CONFIG_FB_SYS_FILLRECT=m
-CONFIG_FB_SYS_COPYAREA=m
-CONFIG_FB_SYS_IMAGEBLIT=m
-# CONFIG_FB_FOREIGN_ENDIAN is not set
-CONFIG_FB_SYS_FOPS=m
-CONFIG_FB_DEFERRED_IO=y
-CONFIG_FB_BACKLIGHT=y
-CONFIG_FB_MODE_HELPERS=y
-CONFIG_FB_TILEBLITTING=y
-
-#
-# Frame buffer hardware drivers
-#
-# CONFIG_FB_CIRRUS is not set
-# CONFIG_FB_PM2 is not set
-CONFIG_FB_ARMCLCD=y
-# CONFIG_FB_CYBER2000 is not set
-# CONFIG_FB_ASILIANT is not set
-# CONFIG_FB_IMSTT is not set
-# CONFIG_FB_UVESA is not set
-CONFIG_FB_EFI=y
-# CONFIG_FB_OPENCORES is not set
-# CONFIG_FB_S1D13XXX is not set
-# CONFIG_FB_NVIDIA is not set
-# CONFIG_FB_RIVA is not set
-# CONFIG_FB_I740 is not set
-# CONFIG_FB_MATROX is not set
-# CONFIG_FB_RADEON is not set
-# CONFIG_FB_ATY128 is not set
-# CONFIG_FB_ATY is not set
-# CONFIG_FB_S3 is not set
-# CONFIG_FB_SAVAGE is not set
-# CONFIG_FB_SIS is not set
-# CONFIG_FB_NEOMAGIC is not set
-# CONFIG_FB_KYRO is not set
-# CONFIG_FB_3DFX is not set
-# CONFIG_FB_VOODOO1 is not set
-# CONFIG_FB_VT8623 is not set
-# CONFIG_FB_TRIDENT is not set
-# CONFIG_FB_ARK is not set
-# CONFIG_FB_PM3 is not set
-# CONFIG_FB_CARMINE is not set
-# CONFIG_FB_SMSCUFX is not set
-# CONFIG_FB_UDL is not set
-# CONFIG_FB_IBM_GXT4500 is not set
-# CONFIG_FB_VIRTUAL is not set
-# CONFIG_FB_METRONOME is not set
-# CONFIG_FB_MB862XX is not set
-# CONFIG_FB_BROADSHEET is not set
-CONFIG_FB_SIMPLE=y
-CONFIG_FB_SSD1307=m
-# CONFIG_FB_SM712 is not set
-CONFIG_BACKLIGHT_LCD_SUPPORT=y
-CONFIG_LCD_CLASS_DEVICE=m
-# CONFIG_LCD_L4F00242T03 is not set
-# CONFIG_LCD_LMS283GF05 is not set
-# CONFIG_LCD_LTV350QV is not set
-# CONFIG_LCD_ILI922X is not set
-# CONFIG_LCD_ILI9320 is not set
-# CONFIG_LCD_TDO24M is not set
-# CONFIG_LCD_VGG2432A4 is not set
-CONFIG_LCD_PLATFORM=m
-# CONFIG_LCD_S6E63M0 is not set
-# CONFIG_LCD_LD9040 is not set
-# CONFIG_LCD_AMS369FG06 is not set
-# CONFIG_LCD_LMS501KF03 is not set
-# CONFIG_LCD_HX8357 is not set
-# CONFIG_LCD_OTM3225A is not set
-CONFIG_BACKLIGHT_CLASS_DEVICE=y
-# CONFIG_BACKLIGHT_GENERIC is not set
-CONFIG_BACKLIGHT_PWM=m
-# CONFIG_BACKLIGHT_PM8941_WLED is not set
-# CONFIG_BACKLIGHT_ADP8860 is not set
-# CONFIG_BACKLIGHT_ADP8870 is not set
-# CONFIG_BACKLIGHT_LM3630A is not set
-# CONFIG_BACKLIGHT_LM3639 is not set
-CONFIG_BACKLIGHT_LP855X=m
-CONFIG_BACKLIGHT_GPIO=m
-# CONFIG_BACKLIGHT_LV5207LP is not set
-# CONFIG_BACKLIGHT_BD6107 is not set
-# CONFIG_BACKLIGHT_ARCXCNN is not set
-CONFIG_VIDEOMODE_HELPERS=y
-CONFIG_HDMI=y
-
-#
-# Console display driver support
-#
-CONFIG_DUMMY_CONSOLE=y
-CONFIG_DUMMY_CONSOLE_COLUMNS=80
-CONFIG_DUMMY_CONSOLE_ROWS=25
-CONFIG_FRAMEBUFFER_CONSOLE=y
-CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
-CONFIG_FRAMEBUFFER_CONSOLE_ROTATION=y
-# CONFIG_FRAMEBUFFER_CONSOLE_DEFERRED_TAKEOVER is not set
-CONFIG_LOGO=y
-# CONFIG_LOGO_LINUX_MONO is not set
-# CONFIG_LOGO_LINUX_VGA16 is not set
-CONFIG_LOGO_LINUX_CLUT224=y
-CONFIG_SOUND=m
-# CONFIG_SND is not set
-
-#
-# HID support
-#
-CONFIG_HID=y
-CONFIG_HID_BATTERY_STRENGTH=y
-CONFIG_HIDRAW=y
-CONFIG_UHID=m
-CONFIG_HID_GENERIC=y
-
-#
-# Special HID drivers
-#
-CONFIG_HID_A4TECH=y
-# CONFIG_HID_ACCUTOUCH is not set
-CONFIG_HID_ACRUX=m
-# CONFIG_HID_ACRUX_FF is not set
-CONFIG_HID_APPLE=y
-CONFIG_HID_APPLEIR=m
-# CONFIG_HID_ASUS is not set
-CONFIG_HID_AUREAL=m
-CONFIG_HID_BELKIN=y
-CONFIG_HID_BETOP_FF=m
-CONFIG_HID_CHERRY=y
-CONFIG_HID_CHICONY=y
-CONFIG_HID_CORSAIR=m
-# CONFIG_HID_COUGAR is not set
-# CONFIG_HID_CMEDIA is not set
-# CONFIG_HID_CP2112 is not set
-CONFIG_HID_CYPRESS=y
-CONFIG_HID_DRAGONRISE=m
-# CONFIG_DRAGONRISE_FF is not set
-# CONFIG_HID_EMS_FF is not set
-# CONFIG_HID_ELAN is not set
-CONFIG_HID_ELECOM=m
-CONFIG_HID_ELO=m
-CONFIG_HID_EZKEY=y
-CONFIG_HID_GEMBIRD=m
-CONFIG_HID_GFRM=m
-CONFIG_HID_HOLTEK=m
-# CONFIG_HOLTEK_FF is not set
-# CONFIG_HID_GOOGLE_HAMMER is not set
-CONFIG_HID_GT683R=m
-CONFIG_HID_KEYTOUCH=m
-CONFIG_HID_KYE=m
-CONFIG_HID_UCLOGIC=m
-CONFIG_HID_WALTOP=m
-CONFIG_HID_GYRATION=m
-CONFIG_HID_ICADE=m
-CONFIG_HID_ITE=y
-# CONFIG_HID_JABRA is not set
-CONFIG_HID_TWINHAN=m
-CONFIG_HID_KENSINGTON=y
-CONFIG_HID_LCPOWER=m
-CONFIG_HID_LED=m
-CONFIG_HID_LENOVO=m
-CONFIG_HID_LOGITECH=y
-CONFIG_HID_LOGITECH_DJ=m
-CONFIG_HID_LOGITECH_HIDPP=m
-# CONFIG_LOGITECH_FF is not set
-# CONFIG_LOGIRUMBLEPAD2_FF is not set
-# CONFIG_LOGIG940_FF is not set
-# CONFIG_LOGIWHEELS_FF is not set
-CONFIG_HID_MAGICMOUSE=y
-# CONFIG_HID_MAYFLASH is not set
-# CONFIG_HID_REDRAGON is not set
-CONFIG_HID_MICROSOFT=y
-CONFIG_HID_MONTEREY=y
-CONFIG_HID_MULTITOUCH=m
-# CONFIG_HID_NTI is not set
-CONFIG_HID_NTRIG=y
-CONFIG_HID_ORTEK=m
-CONFIG_HID_PANTHERLORD=m
-# CONFIG_PANTHERLORD_FF is not set
-CONFIG_HID_PENMOUNT=m
-CONFIG_HID_PETALYNX=m
-CONFIG_HID_PICOLCD=m
-CONFIG_HID_PICOLCD_FB=y
-CONFIG_HID_PICOLCD_BACKLIGHT=y
-CONFIG_HID_PICOLCD_LCD=y
-CONFIG_HID_PICOLCD_LEDS=y
-CONFIG_HID_PLANTRONICS=m
-CONFIG_HID_PRIMAX=m
-# CONFIG_HID_RETRODE is not set
-CONFIG_HID_ROCCAT=m
-CONFIG_HID_SAITEK=m
-CONFIG_HID_SAMSUNG=m
-CONFIG_HID_SONY=m
-CONFIG_SONY_FF=y
-CONFIG_HID_SPEEDLINK=m
-# CONFIG_HID_STEAM is not set
-CONFIG_HID_STEELSERIES=m
-CONFIG_HID_SUNPLUS=m
-CONFIG_HID_RMI=m
-CONFIG_HID_GREENASIA=m
-# CONFIG_GREENASIA_FF is not set
-CONFIG_HID_SMARTJOYPLUS=m
-# CONFIG_SMARTJOYPLUS_FF is not set
-CONFIG_HID_TIVO=m
-CONFIG_HID_TOPSEED=m
-CONFIG_HID_THINGM=m
-CONFIG_HID_THRUSTMASTER=m
-# CONFIG_THRUSTMASTER_FF is not set
-# CONFIG_HID_UDRAW_PS3 is not set
-CONFIG_HID_WACOM=m
-CONFIG_HID_WIIMOTE=m
-CONFIG_HID_XINMO=m
-CONFIG_HID_ZEROPLUS=m
-# CONFIG_ZEROPLUS_FF is not set
-CONFIG_HID_ZYDACRON=m
-CONFIG_HID_SENSOR_HUB=m
-# CONFIG_HID_SENSOR_CUSTOM_SENSOR is not set
-# CONFIG_HID_ALPS is not set
-
-#
-# USB HID support
-#
-CONFIG_USB_HID=y
-CONFIG_HID_PID=y
-CONFIG_USB_HIDDEV=y
-
-#
-# I2C HID support
-#
-CONFIG_I2C_HID=m
-CONFIG_USB_OHCI_LITTLE_ENDIAN=y
-CONFIG_USB_SUPPORT=y
-CONFIG_USB_COMMON=y
-CONFIG_USB_ARCH_HAS_HCD=y
-CONFIG_USB=y
-CONFIG_USB_PCI=y
-CONFIG_USB_ANNOUNCE_NEW_DEVICES=y
-
-#
-# Miscellaneous USB options
-#
-CONFIG_USB_DEFAULT_PERSIST=y
-# CONFIG_USB_DYNAMIC_MINORS is not set
-# CONFIG_USB_OTG is not set
-# CONFIG_USB_OTG_WHITELIST is not set
-# CONFIG_USB_OTG_BLACKLIST_HUB is not set
-CONFIG_USB_LEDS_TRIGGER_USBPORT=m
-CONFIG_USB_MON=y
-CONFIG_USB_WUSB=m
-CONFIG_USB_WUSB_CBAF=m
-# CONFIG_USB_WUSB_CBAF_DEBUG is not set
-
-#
-# USB Host Controller Drivers
-#
-# CONFIG_USB_C67X00_HCD is not set
-CONFIG_USB_XHCI_HCD=y
-# CONFIG_USB_XHCI_DBGCAP is not set
-CONFIG_USB_XHCI_PCI=y
-CONFIG_USB_XHCI_PLATFORM=m
-# CONFIG_USB_XHCI_HISTB is not set
-CONFIG_USB_EHCI_HCD=y
-CONFIG_USB_EHCI_ROOT_HUB_TT=y
-CONFIG_USB_EHCI_TT_NEWSCHED=y
-CONFIG_USB_EHCI_PCI=y
-CONFIG_USB_EHCI_HCD_PLATFORM=y
-# CONFIG_USB_OXU210HP_HCD is not set
-# CONFIG_USB_ISP116X_HCD is not set
-# CONFIG_USB_FOTG210_HCD is not set
-# CONFIG_USB_MAX3421_HCD is not set
-CONFIG_USB_OHCI_HCD=y
-CONFIG_USB_OHCI_HCD_PCI=y
-# CONFIG_USB_OHCI_HCD_PLATFORM is not set
-CONFIG_USB_UHCI_HCD=y
-# CONFIG_USB_U132_HCD is not set
-# CONFIG_USB_SL811_HCD is not set
-# CONFIG_USB_R8A66597_HCD is not set
-# CONFIG_USB_WHCI_HCD is not set
-CONFIG_USB_HWA_HCD=m
-# CONFIG_USB_HCD_BCMA is not set
-# CONFIG_USB_HCD_TEST_MODE is not set
-
-#
-# USB Device Class drivers
-#
-CONFIG_USB_ACM=m
-CONFIG_USB_PRINTER=m
-CONFIG_USB_WDM=m
-CONFIG_USB_TMC=m
-
-#
-# NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may
-#
-
-#
-# also be needed; see USB_STORAGE Help for more info
-#
-CONFIG_USB_STORAGE=m
-# CONFIG_USB_STORAGE_DEBUG is not set
-CONFIG_USB_STORAGE_REALTEK=m
-CONFIG_REALTEK_AUTOPM=y
-CONFIG_USB_STORAGE_DATAFAB=m
-CONFIG_USB_STORAGE_FREECOM=m
-CONFIG_USB_STORAGE_ISD200=m
-CONFIG_USB_STORAGE_USBAT=m
-CONFIG_USB_STORAGE_SDDR09=m
-CONFIG_USB_STORAGE_SDDR55=m
-CONFIG_USB_STORAGE_JUMPSHOT=m
-CONFIG_USB_STORAGE_ALAUDA=m
-CONFIG_USB_STORAGE_ONETOUCH=m
-CONFIG_USB_STORAGE_KARMA=m
-CONFIG_USB_STORAGE_CYPRESS_ATACB=m
-CONFIG_USB_STORAGE_ENE_UB6250=m
-CONFIG_USB_UAS=m
-
-#
-# USB Imaging devices
-#
-CONFIG_USB_MDC800=m
-CONFIG_USB_MICROTEK=m
-# CONFIG_USBIP_CORE is not set
-# CONFIG_USB_MUSB_HDRC is not set
-# CONFIG_USB_DWC3 is not set
-# CONFIG_USB_DWC2 is not set
-# CONFIG_USB_CHIPIDEA is not set
-# CONFIG_USB_ISP1760 is not set
-
-#
-# USB port drivers
-#
-CONFIG_USB_SERIAL=m
-CONFIG_USB_SERIAL_GENERIC=y
-CONFIG_USB_SERIAL_SIMPLE=m
-CONFIG_USB_SERIAL_AIRCABLE=m
-CONFIG_USB_SERIAL_ARK3116=m
-CONFIG_USB_SERIAL_BELKIN=m
-CONFIG_USB_SERIAL_CH341=m
-CONFIG_USB_SERIAL_WHITEHEAT=m
-CONFIG_USB_SERIAL_DIGI_ACCELEPORT=m
-CONFIG_USB_SERIAL_CP210X=m
-CONFIG_USB_SERIAL_CYPRESS_M8=m
-CONFIG_USB_SERIAL_EMPEG=m
-CONFIG_USB_SERIAL_FTDI_SIO=m
-CONFIG_USB_SERIAL_VISOR=m
-CONFIG_USB_SERIAL_IPAQ=m
-CONFIG_USB_SERIAL_IR=m
-CONFIG_USB_SERIAL_EDGEPORT=m
-CONFIG_USB_SERIAL_EDGEPORT_TI=m
-# CONFIG_USB_SERIAL_F81232 is not set
-# CONFIG_USB_SERIAL_F8153X is not set
-CONFIG_USB_SERIAL_GARMIN=m
-CONFIG_USB_SERIAL_IPW=m
-CONFIG_USB_SERIAL_IUU=m
-CONFIG_USB_SERIAL_KEYSPAN_PDA=m
-CONFIG_USB_SERIAL_KEYSPAN=m
-CONFIG_USB_SERIAL_KLSI=m
-CONFIG_USB_SERIAL_KOBIL_SCT=m
-CONFIG_USB_SERIAL_MCT_U232=m
-# CONFIG_USB_SERIAL_METRO is not set
-CONFIG_USB_SERIAL_MOS7720=m
-CONFIG_USB_SERIAL_MOS7840=m
-# CONFIG_USB_SERIAL_MXUPORT is not set
-CONFIG_USB_SERIAL_NAVMAN=m
-CONFIG_USB_SERIAL_PL2303=m
-CONFIG_USB_SERIAL_OTI6858=m
-CONFIG_USB_SERIAL_QCAUX=m
-CONFIG_USB_SERIAL_QUALCOMM=m
-CONFIG_USB_SERIAL_SPCP8X5=m
-CONFIG_USB_SERIAL_SAFE=m
-CONFIG_USB_SERIAL_SAFE_PADDED=y
-CONFIG_USB_SERIAL_SIERRAWIRELESS=m
-CONFIG_USB_SERIAL_SYMBOL=m
-CONFIG_USB_SERIAL_TI=m
-CONFIG_USB_SERIAL_CYBERJACK=m
-CONFIG_USB_SERIAL_XIRCOM=m
-CONFIG_USB_SERIAL_WWAN=m
-CONFIG_USB_SERIAL_OPTION=m
-CONFIG_USB_SERIAL_OMNINET=m
-CONFIG_USB_SERIAL_OPTICON=m
-CONFIG_USB_SERIAL_XSENS_MT=m
-# CONFIG_USB_SERIAL_WISHBONE is not set
-CONFIG_USB_SERIAL_SSU100=m
-CONFIG_USB_SERIAL_QT2=m
-# CONFIG_USB_SERIAL_UPD78F0730 is not set
-CONFIG_USB_SERIAL_DEBUG=m
-
-#
-# USB Miscellaneous drivers
-#
-CONFIG_USB_EMI62=m
-CONFIG_USB_EMI26=m
-CONFIG_USB_ADUTUX=m
-CONFIG_USB_SEVSEG=m
-# CONFIG_USB_RIO500 is not set
-CONFIG_USB_LEGOTOWER=m
-CONFIG_USB_LCD=m
-# CONFIG_USB_CYPRESS_CY7C63 is not set
-# CONFIG_USB_CYTHERM is not set
-CONFIG_USB_IDMOUSE=m
-CONFIG_USB_FTDI_ELAN=m
-CONFIG_USB_APPLEDISPLAY=m
-CONFIG_USB_SISUSBVGA=m
-CONFIG_USB_SISUSBVGA_CON=y
-CONFIG_USB_LD=m
-# CONFIG_USB_TRANCEVIBRATOR is not set
-CONFIG_USB_IOWARRIOR=m
-# CONFIG_USB_TEST is not set
-# CONFIG_USB_EHSET_TEST_FIXTURE is not set
-CONFIG_USB_ISIGHTFW=m
-# CONFIG_USB_YUREX is not set
-CONFIG_USB_EZUSB_FX2=m
-# CONFIG_USB_HUB_USB251XB is not set
-CONFIG_USB_HSIC_USB3503=m
-# CONFIG_USB_HSIC_USB4604 is not set
-# CONFIG_USB_LINK_LAYER_TEST is not set
-CONFIG_USB_CHAOSKEY=m
-CONFIG_USB_ATM=m
-# CONFIG_USB_SPEEDTOUCH is not set
-CONFIG_USB_CXACRU=m
-CONFIG_USB_UEAGLEATM=m
-CONFIG_USB_XUSBATM=m
-
-#
-# USB Physical Layer drivers
-#
-# CONFIG_NOP_USB_XCEIV is not set
-# CONFIG_USB_GPIO_VBUS is not set
-# CONFIG_USB_ISP1301 is not set
-# CONFIG_USB_ULPI is not set
-# CONFIG_USB_GADGET is not set
-CONFIG_TYPEC=y
-# CONFIG_TYPEC_TCPM is not set
-CONFIG_TYPEC_UCSI=y
-CONFIG_UCSI_ACPI=y
-# CONFIG_TYPEC_TPS6598X is not set
-
-#
-# USB Type-C Multiplexer/DeMultiplexer Switch support
-#
-# CONFIG_TYPEC_MUX_PI3USB30532 is not set
-
-#
-# USB Type-C Alternate Mode drivers
-#
-# CONFIG_TYPEC_DP_ALTMODE is not set
-# CONFIG_USB_ROLE_SWITCH is not set
-CONFIG_USB_LED_TRIG=y
-CONFIG_USB_ULPI_BUS=m
-CONFIG_UWB=m
-CONFIG_UWB_HWA=m
-CONFIG_UWB_WHCI=m
-CONFIG_UWB_I1480U=m
-CONFIG_MMC=m
-CONFIG_PWRSEQ_EMMC=m
-CONFIG_PWRSEQ_SIMPLE=m
-CONFIG_MMC_BLOCK=m
-CONFIG_MMC_BLOCK_MINORS=8
-CONFIG_SDIO_UART=m
-# CONFIG_MMC_TEST is not set
-
-#
-# MMC/SD/SDIO Host Controller Drivers
-#
-# CONFIG_MMC_DEBUG is not set
-CONFIG_MMC_ARMMMCI=m
-CONFIG_MMC_SDHCI=m
-CONFIG_MMC_SDHCI_PCI=m
-CONFIG_MMC_RICOH_MMC=y
-CONFIG_MMC_SDHCI_ACPI=m
-CONFIG_MMC_SDHCI_PLTFM=m
-# CONFIG_MMC_SDHCI_OF_ARASAN is not set
-# CONFIG_MMC_SDHCI_OF_AT91 is not set
-# CONFIG_MMC_SDHCI_OF_DWCMSHC is not set
-CONFIG_MMC_SDHCI_CADENCE=m
-# CONFIG_MMC_SDHCI_F_SDH30 is not set
-# CONFIG_MMC_SDHCI_MSM is not set
-CONFIG_MMC_TIFM_SD=m
-CONFIG_MMC_SPI=m
-CONFIG_MMC_CB710=m
-CONFIG_MMC_VIA_SDMMC=m
-CONFIG_MMC_DW=m
-CONFIG_MMC_DW_PLTFM=m
-CONFIG_MMC_DW_BLUEFIELD=m
-# CONFIG_MMC_DW_EXYNOS is not set
-# CONFIG_MMC_DW_HI3798CV200 is not set
-# CONFIG_MMC_DW_K3 is not set
-# CONFIG_MMC_DW_PCI is not set
-CONFIG_MMC_VUB300=m
-CONFIG_MMC_USHC=m
-# CONFIG_MMC_USDHI6ROL0 is not set
-CONFIG_MMC_CQHCI=m
-CONFIG_MMC_TOSHIBA_PCI=m
-CONFIG_MMC_MTK=m
-CONFIG_MMC_SDHCI_XENON=m
-# CONFIG_MMC_SDHCI_OMAP is not set
-CONFIG_MEMSTICK=m
-# CONFIG_MEMSTICK_DEBUG is not set
-
-#
-# MemoryStick drivers
-#
-# CONFIG_MEMSTICK_UNSAFE_RESUME is not set
-CONFIG_MSPRO_BLOCK=m
-# CONFIG_MS_BLOCK is not set
-
-#
-# MemoryStick Host Controller Drivers
-#
-CONFIG_MEMSTICK_TIFM_MS=m
-CONFIG_MEMSTICK_JMICRON_38X=m
-CONFIG_MEMSTICK_R592=m
-CONFIG_NEW_LEDS=y
-CONFIG_LEDS_CLASS=y
-CONFIG_LEDS_CLASS_FLASH=m
-# CONFIG_LEDS_BRIGHTNESS_HW_CHANGED is not set
-
-#
-# LED drivers
-#
-# CONFIG_LEDS_AAT1290 is not set
-# CONFIG_LEDS_AS3645A is not set
-# CONFIG_LEDS_BCM6328 is not set
-# CONFIG_LEDS_BCM6358 is not set
-# CONFIG_LEDS_CR0014114 is not set
-CONFIG_LEDS_LM3530=m
-# CONFIG_LEDS_LM3642 is not set
-# CONFIG_LEDS_LM3692X is not set
-# CONFIG_LEDS_LM3601X is not set
-# CONFIG_LEDS_PCA9532 is not set
-# CONFIG_LEDS_GPIO is not set
-CONFIG_LEDS_LP3944=m
-# CONFIG_LEDS_LP3952 is not set
-# CONFIG_LEDS_LP5521 is not set
-# CONFIG_LEDS_LP5523 is not set
-# CONFIG_LEDS_LP5562 is not set
-# CONFIG_LEDS_LP8501 is not set
-# CONFIG_LEDS_LP8860 is not set
-# CONFIG_LEDS_PCA955X is not set
-# CONFIG_LEDS_PCA963X is not set
-# CONFIG_LEDS_DAC124S085 is not set
-# CONFIG_LEDS_PWM is not set
-# CONFIG_LEDS_BD2802 is not set
-CONFIG_LEDS_LT3593=m
-# CONFIG_LEDS_TCA6507 is not set
-# CONFIG_LEDS_TLC591XX is not set
-# CONFIG_LEDS_LM355x is not set
-# CONFIG_LEDS_KTD2692 is not set
-# CONFIG_LEDS_IS31FL319X is not set
-# CONFIG_LEDS_IS31FL32XX is not set
-
-#
-# LED driver for blink(1) USB RGB LED is under Special HID drivers (HID_THINGM)
-#
-CONFIG_LEDS_BLINKM=m
-# CONFIG_LEDS_SYSCON is not set
-# CONFIG_LEDS_MLXREG is not set
-# CONFIG_LEDS_USER is not set
-
-#
-# LED Triggers
-#
-CONFIG_LEDS_TRIGGERS=y
-CONFIG_LEDS_TRIGGER_TIMER=m
-CONFIG_LEDS_TRIGGER_ONESHOT=m
-# CONFIG_LEDS_TRIGGER_DISK is not set
-# CONFIG_LEDS_TRIGGER_MTD is not set
-CONFIG_LEDS_TRIGGER_HEARTBEAT=m
-CONFIG_LEDS_TRIGGER_BACKLIGHT=m
-# CONFIG_LEDS_TRIGGER_CPU is not set
-# CONFIG_LEDS_TRIGGER_ACTIVITY is not set
-CONFIG_LEDS_TRIGGER_GPIO=m
-CONFIG_LEDS_TRIGGER_DEFAULT_ON=m
-
-#
-# iptables trigger is under Netfilter config (LED target)
-#
-CONFIG_LEDS_TRIGGER_TRANSIENT=m
-CONFIG_LEDS_TRIGGER_CAMERA=m
-# CONFIG_LEDS_TRIGGER_PANIC is not set
-# CONFIG_LEDS_TRIGGER_NETDEV is not set
-# CONFIG_ACCESSIBILITY is not set
-CONFIG_INFINIBAND=m
-CONFIG_INFINIBAND_USER_MAD=m
-CONFIG_INFINIBAND_USER_ACCESS=m
-# CONFIG_INFINIBAND_EXP_LEGACY_VERBS_NEW_UAPI is not set
-CONFIG_INFINIBAND_USER_MEM=y
-CONFIG_INFINIBAND_ON_DEMAND_PAGING=y
-CONFIG_INFINIBAND_ADDR_TRANS=y
-CONFIG_INFINIBAND_ADDR_TRANS_CONFIGFS=y
-# CONFIG_INFINIBAND_MTHCA is not set
-# CONFIG_INFINIBAND_QIB is not set
-CONFIG_INFINIBAND_CXGB4=m
-CONFIG_INFINIBAND_I40IW=m
-CONFIG_MLX4_INFINIBAND=m
-CONFIG_MLX5_INFINIBAND=m
-# CONFIG_INFINIBAND_NES is not set
-# CONFIG_INFINIBAND_OCRDMA is not set
-CONFIG_INFINIBAND_HNS=m
-CONFIG_INFINIBAND_HNS_HIP06=m
-CONFIG_INFINIBAND_HNS_HIP08=m
-CONFIG_INFINIBAND_IPOIB=m
-CONFIG_INFINIBAND_IPOIB_CM=y
-CONFIG_INFINIBAND_IPOIB_DEBUG=y
-# CONFIG_INFINIBAND_IPOIB_DEBUG_DATA is not set
-CONFIG_INFINIBAND_SRP=m
-CONFIG_INFINIBAND_SRPT=m
-CONFIG_INFINIBAND_ISER=m
-CONFIG_INFINIBAND_ISERT=m
-CONFIG_INFINIBAND_RDMAVT=m
-CONFIG_RDMA_RXE=m
-CONFIG_INFINIBAND_QEDR=m
-CONFIG_INFINIBAND_BNXT_RE=m
-CONFIG_EDAC_SUPPORT=y
-CONFIG_EDAC=y
-CONFIG_EDAC_LEGACY_SYSFS=y
-# CONFIG_EDAC_DEBUG is not set
-CONFIG_EDAC_GHES=y
-CONFIG_EDAC_THUNDERX=m
-CONFIG_EDAC_XGENE=m
-CONFIG_RTC_LIB=y
-CONFIG_RTC_CLASS=y
-CONFIG_RTC_HCTOSYS=y
-CONFIG_RTC_HCTOSYS_DEVICE="rtc0"
-# CONFIG_RTC_SYSTOHC is not set
-# CONFIG_RTC_DEBUG is not set
-CONFIG_RTC_NVMEM=y
-
-#
-# RTC interfaces
-#
-CONFIG_RTC_INTF_SYSFS=y
-CONFIG_RTC_INTF_PROC=y
-CONFIG_RTC_INTF_DEV=y
-# CONFIG_RTC_INTF_DEV_UIE_EMUL is not set
-# CONFIG_RTC_DRV_TEST is not set
-
-#
-# I2C RTC drivers
-#
-CONFIG_RTC_DRV_ABB5ZES3=m
-CONFIG_RTC_DRV_ABX80X=m
-CONFIG_RTC_DRV_DS1307=m
-# CONFIG_RTC_DRV_DS1307_CENTURY is not set
-CONFIG_RTC_DRV_DS1374=m
-CONFIG_RTC_DRV_DS1374_WDT=y
-CONFIG_RTC_DRV_DS1672=m
-# CONFIG_RTC_DRV_HYM8563 is not set
-CONFIG_RTC_DRV_MAX6900=m
-CONFIG_RTC_DRV_RS5C372=m
-CONFIG_RTC_DRV_ISL1208=m
-CONFIG_RTC_DRV_ISL12022=m
-# CONFIG_RTC_DRV_ISL12026 is not set
-CONFIG_RTC_DRV_X1205=m
-CONFIG_RTC_DRV_PCF8523=m
-CONFIG_RTC_DRV_PCF85063=m
-# CONFIG_RTC_DRV_PCF85363 is not set
-CONFIG_RTC_DRV_PCF8563=m
-CONFIG_RTC_DRV_PCF8583=m
-CONFIG_RTC_DRV_M41T80=m
-CONFIG_RTC_DRV_M41T80_WDT=y
-CONFIG_RTC_DRV_BQ32K=m
-# CONFIG_RTC_DRV_S35390A is not set
-CONFIG_RTC_DRV_FM3130=m
-CONFIG_RTC_DRV_RX8010=m
-CONFIG_RTC_DRV_RX8581=m
-CONFIG_RTC_DRV_RX8025=m
-CONFIG_RTC_DRV_EM3027=m
-CONFIG_RTC_DRV_RV8803=m
-
-#
-# SPI RTC drivers
-#
-CONFIG_RTC_DRV_M41T93=m
-CONFIG_RTC_DRV_M41T94=m
-# CONFIG_RTC_DRV_DS1302 is not set
-CONFIG_RTC_DRV_DS1305=m
-CONFIG_RTC_DRV_DS1343=m
-CONFIG_RTC_DRV_DS1347=m
-CONFIG_RTC_DRV_DS1390=m
-# CONFIG_RTC_DRV_MAX6916 is not set
-CONFIG_RTC_DRV_R9701=m
-CONFIG_RTC_DRV_RX4581=m
-# CONFIG_RTC_DRV_RX6110 is not set
-CONFIG_RTC_DRV_RS5C348=m
-CONFIG_RTC_DRV_MAX6902=m
-CONFIG_RTC_DRV_PCF2123=m
-CONFIG_RTC_DRV_MCP795=m
-CONFIG_RTC_I2C_AND_SPI=m
-
-#
-# SPI and I2C RTC drivers
-#
-CONFIG_RTC_DRV_DS3232=m
-CONFIG_RTC_DRV_DS3232_HWMON=y
-CONFIG_RTC_DRV_PCF2127=m
-CONFIG_RTC_DRV_RV3029C2=m
-# CONFIG_RTC_DRV_RV3029_HWMON is not set
-
-#
-# Platform RTC drivers
-#
-CONFIG_RTC_DRV_DS1286=m
-CONFIG_RTC_DRV_DS1511=m
-CONFIG_RTC_DRV_DS1553=m
-CONFIG_RTC_DRV_DS1685_FAMILY=m
-CONFIG_RTC_DRV_DS1685=y
-# CONFIG_RTC_DRV_DS1689 is not set
-# CONFIG_RTC_DRV_DS17285 is not set
-# CONFIG_RTC_DRV_DS17485 is not set
-# CONFIG_RTC_DRV_DS17885 is not set
-# CONFIG_RTC_DS1685_PROC_REGS is not set
-CONFIG_RTC_DRV_DS1742=m
-CONFIG_RTC_DRV_DS2404=m
-CONFIG_RTC_DRV_EFI=y
-CONFIG_RTC_DRV_STK17TA8=m
-# CONFIG_RTC_DRV_M48T86 is not set
-CONFIG_RTC_DRV_M48T35=m
-CONFIG_RTC_DRV_M48T59=m
-CONFIG_RTC_DRV_MSM6242=m
-CONFIG_RTC_DRV_BQ4802=m
-CONFIG_RTC_DRV_RP5C01=m
-CONFIG_RTC_DRV_V3020=m
-# CONFIG_RTC_DRV_ZYNQMP is not set
-
-#
-# on-CPU RTC drivers
-#
-# CONFIG_RTC_DRV_PL030 is not set
-CONFIG_RTC_DRV_PL031=y
-# CONFIG_RTC_DRV_FTRTC010 is not set
-# CONFIG_RTC_DRV_SNVS is not set
-# CONFIG_RTC_DRV_XGENE is not set
-# CONFIG_RTC_DRV_R7301 is not set
-
-#
-# HID Sensor RTC drivers
-#
-# CONFIG_RTC_DRV_HID_SENSOR_TIME is not set
-CONFIG_DMADEVICES=y
-# CONFIG_DMADEVICES_DEBUG is not set
-
-#
-# DMA Devices
-#
-CONFIG_DMA_ENGINE=y
-CONFIG_DMA_ACPI=y
-CONFIG_DMA_OF=y
-# CONFIG_ALTERA_MSGDMA is not set
-# CONFIG_AMBA_PL08X is not set
-# CONFIG_BCM_SBA_RAID is not set
-# CONFIG_DW_AXI_DMAC is not set
-# CONFIG_FSL_EDMA is not set
-# CONFIG_INTEL_IDMA64 is not set
-# CONFIG_K3_DMA is not set
-# CONFIG_MV_XOR_V2 is not set
-# CONFIG_PL330_DMA is not set
-# CONFIG_XGENE_DMA is not set
-# CONFIG_XILINX_DMA is not set
-# CONFIG_XILINX_ZYNQMP_DMA is not set
-# CONFIG_QCOM_BAM_DMA is not set
-CONFIG_QCOM_HIDMA_MGMT=m
-CONFIG_QCOM_HIDMA=m
-CONFIG_DW_DMAC_CORE=m
-CONFIG_DW_DMAC=m
-CONFIG_DW_DMAC_PCI=m
-
-#
-# DMA Clients
-#
-CONFIG_ASYNC_TX_DMA=y
-# CONFIG_DMATEST is not set
-
-#
-# DMABUF options
-#
-CONFIG_SYNC_FILE=y
-# CONFIG_SW_SYNC is not set
-CONFIG_AUXDISPLAY=y
-# CONFIG_HD44780 is not set
-# CONFIG_IMG_ASCII_LCD is not set
-# CONFIG_HT16K33 is not set
-CONFIG_UIO=m
-CONFIG_UIO_CIF=m
-CONFIG_UIO_PDRV_GENIRQ=m
-# CONFIG_UIO_DMEM_GENIRQ is not set
-CONFIG_UIO_AEC=m
-CONFIG_UIO_SERCOS3=m
-CONFIG_UIO_PCI_GENERIC=m
-# CONFIG_UIO_NETX is not set
-# CONFIG_UIO_PRUSS is not set
-# CONFIG_UIO_MF624 is not set
-CONFIG_VFIO_IOMMU_TYPE1=m
-CONFIG_VFIO_VIRQFD=m
-CONFIG_VFIO=m
-CONFIG_VFIO_NOIOMMU=y
-CONFIG_VFIO_PCI=m
-CONFIG_VFIO_PCI_MMAP=y
-CONFIG_VFIO_PCI_INTX=y
-CONFIG_VFIO_PLATFORM=m
-# CONFIG_VFIO_AMBA is not set
-# CONFIG_VFIO_PLATFORM_CALXEDAXGMAC_RESET is not set
-# CONFIG_VFIO_PLATFORM_AMDXGBE_RESET is not set
-CONFIG_VFIO_MDEV=m
-CONFIG_VFIO_MDEV_DEVICE=m
-CONFIG_VFIO_SPIMDEV=m
-# CONFIG_VIRT_DRIVERS is not set
-CONFIG_VIRTIO=m
-CONFIG_VIRTIO_MENU=y
-CONFIG_VIRTIO_PCI=m
-CONFIG_VIRTIO_PCI_LEGACY=y
-CONFIG_VIRTIO_BALLOON=m
-CONFIG_VIRTIO_INPUT=m
-CONFIG_VIRTIO_MMIO=m
-# CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES is not set
-
-#
-# Microsoft Hyper-V guest support
-#
-# CONFIG_STAGING is not set
-# CONFIG_GOLDFISH is not set
-CONFIG_CHROME_PLATFORMS=y
-# CONFIG_CHROMEOS_TBMC is not set
-# CONFIG_CROS_KBD_LED_BACKLIGHT is not set
-CONFIG_CLKDEV_LOOKUP=y
-CONFIG_HAVE_CLK_PREPARE=y
-CONFIG_COMMON_CLK=y
-
-#
-# Common Clock Framework
-#
-CONFIG_COMMON_CLK_VERSATILE=y
-CONFIG_CLK_SP810=y
-CONFIG_CLK_VEXPRESS_OSC=y
-# CONFIG_CLK_HSDK is not set
-# CONFIG_COMMON_CLK_MAX9485 is not set
-CONFIG_COMMON_CLK_SCPI=m
-# CONFIG_COMMON_CLK_SI5351 is not set
-# CONFIG_COMMON_CLK_SI514 is not set
-# CONFIG_COMMON_CLK_SI544 is not set
-# CONFIG_COMMON_CLK_SI570 is not set
-# CONFIG_COMMON_CLK_CDCE706 is not set
-# CONFIG_COMMON_CLK_CDCE925 is not set
-# CONFIG_COMMON_CLK_CS2000_CP is not set
-# CONFIG_CLK_QORIQ is not set
-CONFIG_COMMON_CLK_XGENE=y
-# CONFIG_COMMON_CLK_PWM is not set
-# CONFIG_COMMON_CLK_VC5 is not set
-CONFIG_COMMON_CLK_HI3516CV300=y
-CONFIG_COMMON_CLK_HI3519=y
-CONFIG_COMMON_CLK_HI3660=y
-CONFIG_COMMON_CLK_HI3798CV200=y
-# CONFIG_COMMON_CLK_HI6220 is not set
-CONFIG_RESET_HISI=y
-CONFIG_STUB_CLK_HI3660=y
-# CONFIG_COMMON_CLK_QCOM is not set
-CONFIG_HWSPINLOCK=y
-# CONFIG_HWSPINLOCK_QCOM is not set
-
-#
-# Clock Source drivers
-#
-CONFIG_TIMER_OF=y
-CONFIG_TIMER_ACPI=y
-CONFIG_TIMER_PROBE=y
-CONFIG_CLKSRC_MMIO=y
-CONFIG_ARM_ARCH_TIMER=y
-CONFIG_ARM_ARCH_TIMER_EVTSTREAM=y
-CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND=y
-CONFIG_FSL_ERRATUM_A008585=y
-CONFIG_HISILICON_ERRATUM_161010101=y
-CONFIG_ARM64_ERRATUM_858921=y
-CONFIG_ARM_TIMER_SP804=y
-CONFIG_MAILBOX=y
-CONFIG_ARM_MHU=m
-# CONFIG_PLATFORM_MHU is not set
-# CONFIG_PL320_MBOX is not set
-CONFIG_PCC=y
-# CONFIG_ALTERA_MBOX is not set
-CONFIG_HI3660_MBOX=y
-CONFIG_HI6220_MBOX=y
-# CONFIG_MAILBOX_TEST is not set
-# CONFIG_QCOM_APCS_IPC is not set
-CONFIG_XGENE_SLIMPRO_MBOX=m
-CONFIG_IOMMU_API=y
-CONFIG_IOMMU_SUPPORT=y
-
-#
-# Generic IOMMU Pagetable Support
-#
-CONFIG_IOMMU_IO_PGTABLE=y
-CONFIG_IOMMU_IO_PGTABLE_LPAE=y
-# CONFIG_IOMMU_IO_PGTABLE_LPAE_SELFTEST is not set
-# CONFIG_IOMMU_IO_PGTABLE_ARMV7S is not set
-# CONFIG_IOMMU_DEBUGFS is not set
-# CONFIG_IOMMU_DEFAULT_PASSTHROUGH is not set
-CONFIG_IOMMU_IOVA=y
-CONFIG_OF_IOMMU=y
-CONFIG_IOMMU_DMA=y
-CONFIG_ARM_SMMU=y
-CONFIG_ARM_SMMU_V3=y
-# CONFIG_QCOM_IOMMU is not set
-
-#
-# Remoteproc drivers
-#
-# CONFIG_REMOTEPROC is not set
-
-#
-# Rpmsg drivers
-#
-# CONFIG_RPMSG_QCOM_GLINK_RPM is not set
-# CONFIG_RPMSG_VIRTIO is not set
-# CONFIG_SOUNDWIRE is not set
-
-#
-# SOC (System On Chip) specific Drivers
-#
-
-#
-# Amlogic SoC drivers
-#
-
-#
-# Broadcom SoC drivers
-#
-# CONFIG_SOC_BRCMSTB is not set
-
-#
-# NXP/Freescale QorIQ SoC drivers
-#
-
-#
-# i.MX SoC drivers
-#
-
-#
-# Qualcomm SoC drivers
-#
-# CONFIG_QCOM_COMMAND_DB is not set
-# CONFIG_QCOM_GENI_SE is not set
-# CONFIG_QCOM_GSBI is not set
-# CONFIG_QCOM_LLCC is not set
-# CONFIG_QCOM_RMTFS_MEM is not set
-# CONFIG_QCOM_RPMH is not set
-# CONFIG_QCOM_SMEM is not set
-# CONFIG_SOC_TI is not set
-
-#
-# Xilinx SoC drivers
-#
-# CONFIG_XILINX_VCU is not set
-# CONFIG_PM_DEVFREQ is not set
-CONFIG_EXTCON=y
-
-#
-# Extcon Device Drivers
-#
-CONFIG_EXTCON_GPIO=m
-# CONFIG_EXTCON_MAX3355 is not set
-# CONFIG_EXTCON_QCOM_SPMI_MISC is not set
-# CONFIG_EXTCON_RT8973A is not set
-# CONFIG_EXTCON_SM5502 is not set
-# CONFIG_EXTCON_USB_GPIO is not set
-# CONFIG_MEMORY is not set
-# CONFIG_IIO is not set
-# CONFIG_NTB is not set
-# CONFIG_VME_BUS is not set
-CONFIG_PWM=y
-CONFIG_PWM_SYSFS=y
-# CONFIG_PWM_FSL_FTM is not set
-# CONFIG_PWM_HIBVT is not set
-# CONFIG_PWM_PCA9685 is not set
-
-#
-# IRQ chip support
-#
-CONFIG_IRQCHIP=y
-CONFIG_ARM_GIC=y
-CONFIG_ARM_GIC_MAX_NR=1
-CONFIG_ARM_GIC_V2M=y
-CONFIG_ARM_GIC_V3=y
-CONFIG_ARM_GIC_V3_ITS=y
-CONFIG_ARM_GIC_V3_ITS_PCI=y
-CONFIG_HISILICON_IRQ_MBIGEN=y
-CONFIG_PARTITION_PERCPU=y
-CONFIG_QCOM_IRQ_COMBINER=y
-# CONFIG_QCOM_PDC is not set
-# CONFIG_IPACK_BUS is not set
-CONFIG_RESET_CONTROLLER=y
-# CONFIG_RESET_QCOM_AOSS is not set
-# CONFIG_RESET_TI_SYSCON is not set
-CONFIG_COMMON_RESET_HI3660=y
-CONFIG_COMMON_RESET_HI6220=y
-CONFIG_FMC=m
-CONFIG_FMC_FAKEDEV=m
-CONFIG_FMC_TRIVIAL=m
-CONFIG_FMC_WRITE_EEPROM=m
-CONFIG_FMC_CHARDEV=m
-
-#
-# PHY Subsystem
-#
-CONFIG_GENERIC_PHY=y
-CONFIG_PHY_XGENE=y
-# CONFIG_BCM_KONA_USB2_PHY is not set
-CONFIG_PHY_HI6220_USB=m
-# CONFIG_PHY_HISTB_COMBPHY is not set
-# CONFIG_PHY_HISI_INNO_USB2 is not set
-# CONFIG_PHY_PXA_28NM_HSIC is not set
-# CONFIG_PHY_PXA_28NM_USB2 is not set
-# CONFIG_PHY_MAPPHONE_MDM6600 is not set
-# CONFIG_PHY_QCOM_APQ8064_SATA is not set
-# CONFIG_PHY_QCOM_IPQ806X_SATA is not set
-# CONFIG_PHY_QCOM_QMP is not set
-# CONFIG_PHY_QCOM_QUSB2 is not set
-# CONFIG_PHY_QCOM_UFS is not set
-# CONFIG_PHY_QCOM_USB_HS is not set
-# CONFIG_PHY_QCOM_USB_HSIC is not set
-# CONFIG_PHY_TUSB1210 is not set
-# CONFIG_POWERCAP is not set
-# CONFIG_MCB is not set
-
-#
-# Performance monitor support
-#
-# CONFIG_ARM_CCI_PMU is not set
-CONFIG_ARM_CCN=y
-CONFIG_ARM_PMU=y
-CONFIG_ARM_PMU_ACPI=y
-# CONFIG_ARM_SMMU_V3_PMU is not set
-# CONFIG_ARM_DSU_PMU is not set
-CONFIG_HISI_PMU=y
-CONFIG_QCOM_L2_PMU=y
-CONFIG_QCOM_L3_PMU=y
-CONFIG_XGENE_PMU=y
-CONFIG_ARM_SPE_PMU=y
-CONFIG_RAS=y
-
-#
-# Android
-#
-# CONFIG_ANDROID is not set
-CONFIG_LIBNVDIMM=m
-CONFIG_BLK_DEV_PMEM=m
-CONFIG_ND_BLK=m
-CONFIG_ND_CLAIM=y
-CONFIG_ND_BTT=m
-CONFIG_BTT=y
-CONFIG_OF_PMEM=m
-CONFIG_DAX_DRIVER=y
-CONFIG_DAX=y
-CONFIG_DEV_DAX=m
-CONFIG_NVMEM=y
-# CONFIG_QCOM_QFPROM is not set
-
-#
-# HW tracing support
-#
-# CONFIG_STM is not set
-# CONFIG_INTEL_TH is not set
-# CONFIG_FPGA is not set
-# CONFIG_FSI is not set
-CONFIG_TEE=m
-
-#
-# TEE drivers
-#
-# CONFIG_OPTEE is not set
-# CONFIG_SIOX is not set
-# CONFIG_SLIMBUS is not set
-
-#
-# File systems
-#
-CONFIG_DCACHE_WORD_ACCESS=y
-CONFIG_FS_IOMAP=y
-# CONFIG_EXT2_FS is not set
-# CONFIG_EXT3_FS is not set
-CONFIG_EXT4_FS=m
-CONFIG_EXT4_USE_FOR_EXT2=y
-CONFIG_EXT4_FS_POSIX_ACL=y
-CONFIG_EXT4_FS_SECURITY=y
-# CONFIG_EXT4_ENCRYPTION is not set
-# CONFIG_EXT4_DEBUG is not set
-CONFIG_JBD2=m
-# CONFIG_JBD2_DEBUG is not set
-CONFIG_FS_MBCACHE=m
-# CONFIG_REISERFS_FS is not set
-# CONFIG_JFS_FS is not set
-CONFIG_XFS_FS=m
-CONFIG_XFS_QUOTA=y
-CONFIG_XFS_POSIX_ACL=y
-# CONFIG_XFS_RT is not set
-# CONFIG_XFS_ONLINE_SCRUB is not set
-# CONFIG_XFS_WARN is not set
-# CONFIG_XFS_DEBUG is not set
-# CONFIG_GFS2_FS is not set
-# CONFIG_OCFS2_FS is not set
-# CONFIG_BTRFS_FS is not set
-# CONFIG_NILFS2_FS is not set
-# CONFIG_F2FS_FS is not set
-CONFIG_FS_DAX=y
-CONFIG_FS_POSIX_ACL=y
-CONFIG_EXPORTFS=y
-CONFIG_EXPORTFS_BLOCK_OPS=y
-CONFIG_FILE_LOCKING=y
-CONFIG_MANDATORY_FILE_LOCKING=y
-# CONFIG_FS_ENCRYPTION is not set
-CONFIG_FSNOTIFY=y
-CONFIG_DNOTIFY=y
-CONFIG_INOTIFY_USER=y
-CONFIG_FANOTIFY=y
-CONFIG_FANOTIFY_ACCESS_PERMISSIONS=y
-CONFIG_QUOTA=y
-CONFIG_QUOTA_NETLINK_INTERFACE=y
-CONFIG_PRINT_QUOTA_WARNING=y
-# CONFIG_QUOTA_DEBUG is not set
-CONFIG_QUOTA_TREE=y
-# CONFIG_QFMT_V1 is not set
-CONFIG_QFMT_V2=y
-CONFIG_QUOTACTL=y
-CONFIG_AUTOFS4_FS=y
-CONFIG_AUTOFS_FS=y
-CONFIG_FUSE_FS=m
-CONFIG_CUSE=m
-CONFIG_OVERLAY_FS=m
-# CONFIG_OVERLAY_FS_REDIRECT_DIR is not set
-CONFIG_OVERLAY_FS_REDIRECT_ALWAYS_FOLLOW=y
-# CONFIG_OVERLAY_FS_INDEX is not set
-# CONFIG_OVERLAY_FS_XINO_AUTO is not set
-# CONFIG_OVERLAY_FS_METACOPY is not set
-
-#
-# Caches
-#
-CONFIG_FSCACHE=m
-CONFIG_FSCACHE_STATS=y
-# CONFIG_FSCACHE_HISTOGRAM is not set
-# CONFIG_FSCACHE_DEBUG is not set
-# CONFIG_FSCACHE_OBJECT_LIST is not set
-CONFIG_CACHEFILES=m
-# CONFIG_CACHEFILES_DEBUG is not set
-# CONFIG_CACHEFILES_HISTOGRAM is not set
-
-#
-# CD-ROM/DVD Filesystems
-#
-CONFIG_ISO9660_FS=m
-CONFIG_JOLIET=y
-CONFIG_ZISOFS=y
-CONFIG_UDF_FS=m
-
-#
-# DOS/FAT/NT Filesystems
-#
-CONFIG_FAT_FS=m
-CONFIG_MSDOS_FS=m
-CONFIG_VFAT_FS=m
-CONFIG_FAT_DEFAULT_CODEPAGE=437
-CONFIG_FAT_DEFAULT_IOCHARSET="ascii"
-# CONFIG_FAT_DEFAULT_UTF8 is not set
-# CONFIG_NTFS_FS is not set
-
-#
-# Pseudo filesystems
-#
-CONFIG_PROC_FS=y
-CONFIG_PROC_KCORE=y
-CONFIG_PROC_VMCORE=y
-# CONFIG_PROC_VMCORE_DEVICE_DUMP is not set
-CONFIG_PROC_SYSCTL=y
-CONFIG_PROC_PAGE_MONITOR=y
-CONFIG_PROC_CHILDREN=y
-CONFIG_KERNFS=y
-CONFIG_SYSFS=y
-CONFIG_TMPFS=y
-CONFIG_TMPFS_POSIX_ACL=y
-CONFIG_TMPFS_XATTR=y
-CONFIG_HUGETLBFS=y
-CONFIG_HUGETLB_PAGE=y
-CONFIG_MEMFD_CREATE=y
-CONFIG_ARCH_HAS_GIGANTIC_PAGE=y
-CONFIG_CONFIGFS_FS=y
-CONFIG_EFIVAR_FS=y
-CONFIG_MISC_FILESYSTEMS=y
-# CONFIG_ORANGEFS_FS is not set
-# CONFIG_ADFS_FS is not set
-# CONFIG_AFFS_FS is not set
-# CONFIG_ECRYPT_FS is not set
-# CONFIG_HFS_FS is not set
-# CONFIG_HFSPLUS_FS is not set
-# CONFIG_BEFS_FS is not set
-# CONFIG_BFS_FS is not set
-# CONFIG_EFS_FS is not set
-# CONFIG_JFFS2_FS is not set
-# CONFIG_UBIFS_FS is not set
-CONFIG_CRAMFS=m
-CONFIG_CRAMFS_BLOCKDEV=y
-# CONFIG_CRAMFS_MTD is not set
-CONFIG_SQUASHFS=m
-CONFIG_SQUASHFS_FILE_CACHE=y
-# CONFIG_SQUASHFS_FILE_DIRECT is not set
-CONFIG_SQUASHFS_DECOMP_SINGLE=y
-# CONFIG_SQUASHFS_DECOMP_MULTI is not set
-# CONFIG_SQUASHFS_DECOMP_MULTI_PERCPU is not set
-CONFIG_SQUASHFS_XATTR=y
-CONFIG_SQUASHFS_ZLIB=y
-CONFIG_SQUASHFS_LZ4=y
-CONFIG_SQUASHFS_LZO=y
-CONFIG_SQUASHFS_XZ=y
-# CONFIG_SQUASHFS_ZSTD is not set
-# CONFIG_SQUASHFS_4K_DEVBLK_SIZE is not set
-# CONFIG_SQUASHFS_EMBEDDED is not set
-CONFIG_SQUASHFS_FRAGMENT_CACHE_SIZE=3
-# CONFIG_VXFS_FS is not set
-# CONFIG_MINIX_FS is not set
-# CONFIG_OMFS_FS is not set
-# CONFIG_HPFS_FS is not set
-# CONFIG_QNX4FS_FS is not set
-# CONFIG_QNX6FS_FS is not set
-# CONFIG_ROMFS_FS is not set
-CONFIG_PSTORE=y
-CONFIG_PSTORE_DEFLATE_COMPRESS=y
-# CONFIG_PSTORE_LZO_COMPRESS is not set
-# CONFIG_PSTORE_LZ4_COMPRESS is not set
-# CONFIG_PSTORE_LZ4HC_COMPRESS is not set
-# CONFIG_PSTORE_842_COMPRESS is not set
-# CONFIG_PSTORE_ZSTD_COMPRESS is not set
-CONFIG_PSTORE_COMPRESS=y
-CONFIG_PSTORE_DEFLATE_COMPRESS_DEFAULT=y
-CONFIG_PSTORE_COMPRESS_DEFAULT="deflate"
-# CONFIG_PSTORE_CONSOLE is not set
-# CONFIG_PSTORE_PMSG is not set
-# CONFIG_PSTORE_FTRACE is not set
-CONFIG_PSTORE_RAM=m
-# CONFIG_SYSV_FS is not set
-# CONFIG_UFS_FS is not set
-CONFIG_NETWORK_FILESYSTEMS=y
-CONFIG_NFS_FS=m
-CONFIG_NFS_V2=m
-CONFIG_NFS_V3=m
-CONFIG_NFS_V3_ACL=y
-CONFIG_NFS_V4=m
-# CONFIG_NFS_SWAP is not set
-CONFIG_NFS_V4_1=y
-CONFIG_NFS_V4_2=y
-CONFIG_PNFS_FILE_LAYOUT=m
-CONFIG_PNFS_BLOCK=m
-CONFIG_PNFS_FLEXFILE_LAYOUT=m
-CONFIG_NFS_V4_1_IMPLEMENTATION_ID_DOMAIN="kernel.org"
-# CONFIG_NFS_V4_1_MIGRATION is not set
-CONFIG_NFS_V4_SECURITY_LABEL=y
-CONFIG_NFS_FSCACHE=y
-# CONFIG_NFS_USE_LEGACY_DNS is not set
-CONFIG_NFS_USE_KERNEL_DNS=y
-CONFIG_NFS_DEBUG=y
-CONFIG_NFSD=m
-CONFIG_NFSD_V2_ACL=y
-CONFIG_NFSD_V3=y
-CONFIG_NFSD_V3_ACL=y
-CONFIG_NFSD_V4=y
-# CONFIG_NFSD_BLOCKLAYOUT is not set
-# CONFIG_NFSD_SCSILAYOUT is not set
-# CONFIG_NFSD_FLEXFILELAYOUT is not set
-CONFIG_NFSD_V4_SECURITY_LABEL=y
-# CONFIG_NFSD_FAULT_INJECTION is not set
-CONFIG_GRACE_PERIOD=m
-CONFIG_LOCKD=m
-CONFIG_LOCKD_V4=y
-CONFIG_NFS_ACL_SUPPORT=m
-CONFIG_NFS_COMMON=y
-CONFIG_SUNRPC=m
-CONFIG_SUNRPC_GSS=m
-CONFIG_SUNRPC_BACKCHANNEL=y
-CONFIG_RPCSEC_GSS_KRB5=m
-CONFIG_SUNRPC_DEBUG=y
-CONFIG_SUNRPC_XPRT_RDMA=m
-CONFIG_CEPH_FS=m
-# CONFIG_CEPH_FSCACHE is not set
-CONFIG_CEPH_FS_POSIX_ACL=y
-CONFIG_CIFS=m
-# CONFIG_CIFS_STATS2 is not set
-CONFIG_CIFS_ALLOW_INSECURE_LEGACY=y
-CONFIG_CIFS_WEAK_PW_HASH=y
-CONFIG_CIFS_UPCALL=y
-CONFIG_CIFS_XATTR=y
-CONFIG_CIFS_POSIX=y
-CONFIG_CIFS_ACL=y
-CONFIG_CIFS_DEBUG=y
-# CONFIG_CIFS_DEBUG2 is not set
-# CONFIG_CIFS_DEBUG_DUMP_KEYS is not set
-CONFIG_CIFS_DFS_UPCALL=y
-# CONFIG_CIFS_SMB_DIRECT is not set
-# CONFIG_CIFS_FSCACHE is not set
-# CONFIG_CODA_FS is not set
-# CONFIG_AFS_FS is not set
-CONFIG_NLS=y
-CONFIG_NLS_DEFAULT="utf8"
-CONFIG_NLS_CODEPAGE_437=y
-CONFIG_NLS_CODEPAGE_737=m
-CONFIG_NLS_CODEPAGE_775=m
-CONFIG_NLS_CODEPAGE_850=m
-CONFIG_NLS_CODEPAGE_852=m
-CONFIG_NLS_CODEPAGE_855=m
-CONFIG_NLS_CODEPAGE_857=m
-CONFIG_NLS_CODEPAGE_860=m
-CONFIG_NLS_CODEPAGE_861=m
-CONFIG_NLS_CODEPAGE_862=m
-CONFIG_NLS_CODEPAGE_863=m
-CONFIG_NLS_CODEPAGE_864=m
-CONFIG_NLS_CODEPAGE_865=m
-CONFIG_NLS_CODEPAGE_866=m
-CONFIG_NLS_CODEPAGE_869=m
-CONFIG_NLS_CODEPAGE_936=m
-CONFIG_NLS_CODEPAGE_950=m
-CONFIG_NLS_CODEPAGE_932=m
-CONFIG_NLS_CODEPAGE_949=m
-CONFIG_NLS_CODEPAGE_874=m
-CONFIG_NLS_ISO8859_8=m
-CONFIG_NLS_CODEPAGE_1250=m
-CONFIG_NLS_CODEPAGE_1251=m
-CONFIG_NLS_ASCII=y
-CONFIG_NLS_ISO8859_1=m
-CONFIG_NLS_ISO8859_2=m
-CONFIG_NLS_ISO8859_3=m
-CONFIG_NLS_ISO8859_4=m
-CONFIG_NLS_ISO8859_5=m
-CONFIG_NLS_ISO8859_6=m
-CONFIG_NLS_ISO8859_7=m
-CONFIG_NLS_ISO8859_9=m
-CONFIG_NLS_ISO8859_13=m
-CONFIG_NLS_ISO8859_14=m
-CONFIG_NLS_ISO8859_15=m
-CONFIG_NLS_KOI8_R=m
-CONFIG_NLS_KOI8_U=m
-CONFIG_NLS_MAC_ROMAN=m
-CONFIG_NLS_MAC_CELTIC=m
-CONFIG_NLS_MAC_CENTEURO=m
-CONFIG_NLS_MAC_CROATIAN=m
-CONFIG_NLS_MAC_CYRILLIC=m
-CONFIG_NLS_MAC_GAELIC=m
-CONFIG_NLS_MAC_GREEK=m
-CONFIG_NLS_MAC_ICELAND=m
-CONFIG_NLS_MAC_INUIT=m
-CONFIG_NLS_MAC_ROMANIAN=m
-CONFIG_NLS_MAC_TURKISH=m
-CONFIG_NLS_UTF8=m
-# CONFIG_DLM is not set
-CONFIG_RESCTRL=y
-
-#
-# Security options
-#
-CONFIG_KEYS=y
-CONFIG_KEYS_COMPAT=y
-CONFIG_PERSISTENT_KEYRINGS=y
-CONFIG_BIG_KEYS=y
-CONFIG_TRUSTED_KEYS=m
-CONFIG_ENCRYPTED_KEYS=m
-# CONFIG_KEY_DH_OPERATIONS is not set
-# CONFIG_SECURITY_DMESG_RESTRICT is not set
-CONFIG_SECURITY=y
-CONFIG_SECURITY_WRITABLE_HOOKS=y
-CONFIG_SECURITYFS=y
-CONFIG_SECURITY_NETWORK=y
-CONFIG_SECURITY_INFINIBAND=y
-CONFIG_SECURITY_NETWORK_XFRM=y
-# CONFIG_SECURITY_PATH is not set
-CONFIG_LSM_MMAP_MIN_ADDR=65535
-CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR=y
-CONFIG_HARDENED_USERCOPY=y
-CONFIG_HARDENED_USERCOPY_FALLBACK=y
-# CONFIG_HARDENED_USERCOPY_PAGESPAN is not set
-CONFIG_FORTIFY_SOURCE=y
-# CONFIG_STATIC_USERMODEHELPER is not set
-CONFIG_SECURITY_SELINUX=y
-CONFIG_SECURITY_SELINUX_BOOTPARAM=y
-CONFIG_SECURITY_SELINUX_BOOTPARAM_VALUE=1
-CONFIG_SECURITY_SELINUX_DISABLE=y
-CONFIG_SECURITY_SELINUX_DEVELOP=y
-CONFIG_SECURITY_SELINUX_AVC_STATS=y
-CONFIG_SECURITY_SELINUX_CHECKREQPROT_VALUE=1
-# CONFIG_SECURITY_SMACK is not set
-# CONFIG_SECURITY_TOMOYO is not set
-# CONFIG_SECURITY_APPARMOR is not set
-# CONFIG_SECURITY_LOADPIN is not set
-CONFIG_SECURITY_YAMA=y
-# CONFIG_INTEGRITY is not set
-CONFIG_DEFAULT_SECURITY_SELINUX=y
-# CONFIG_DEFAULT_SECURITY_DAC is not set
-CONFIG_DEFAULT_SECURITY="selinux"
-CONFIG_XOR_BLOCKS=m
-CONFIG_ASYNC_CORE=m
-CONFIG_ASYNC_MEMCPY=m
-CONFIG_ASYNC_XOR=m
-CONFIG_ASYNC_PQ=m
-CONFIG_ASYNC_RAID6_RECOV=m
-CONFIG_CRYPTO=y
-
-#
-# Crypto core or helper
-#
-CONFIG_CRYPTO_FIPS=y
-CONFIG_CRYPTO_ALGAPI=y
-CONFIG_CRYPTO_ALGAPI2=y
-CONFIG_CRYPTO_AEAD=y
-CONFIG_CRYPTO_AEAD2=y
-CONFIG_CRYPTO_BLKCIPHER=y
-CONFIG_CRYPTO_BLKCIPHER2=y
-CONFIG_CRYPTO_HASH=y
-CONFIG_CRYPTO_HASH2=y
-CONFIG_CRYPTO_RNG=y
-CONFIG_CRYPTO_RNG2=y
-CONFIG_CRYPTO_RNG_DEFAULT=y
-CONFIG_CRYPTO_AKCIPHER2=y
-CONFIG_CRYPTO_AKCIPHER=y
-CONFIG_CRYPTO_KPP2=y
-CONFIG_CRYPTO_ACOMP2=y
-CONFIG_CRYPTO_RSA=y
-# CONFIG_CRYPTO_DH is not set
-# CONFIG_CRYPTO_ECDH is not set
-CONFIG_CRYPTO_MANAGER=y
-CONFIG_CRYPTO_MANAGER2=y
-CONFIG_CRYPTO_USER=m
-# CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set
-CONFIG_CRYPTO_GF128MUL=y
-CONFIG_CRYPTO_NULL=y
-CONFIG_CRYPTO_NULL2=y
-CONFIG_CRYPTO_PCRYPT=m
-CONFIG_CRYPTO_WORKQUEUE=y
-CONFIG_CRYPTO_CRYPTD=m
-CONFIG_CRYPTO_AUTHENC=m
-CONFIG_CRYPTO_TEST=m
-CONFIG_CRYPTO_SIMD=m
-CONFIG_CRYPTO_ENGINE=m
-
-#
-# Authenticated Encryption with Associated Data
-#
-CONFIG_CRYPTO_CCM=m
-CONFIG_CRYPTO_GCM=y
-CONFIG_CRYPTO_CHACHA20POLY1305=m
-# CONFIG_CRYPTO_AEGIS128 is not set
-# CONFIG_CRYPTO_AEGIS128L is not set
-# CONFIG_CRYPTO_AEGIS256 is not set
-# CONFIG_CRYPTO_MORUS640 is not set
-# CONFIG_CRYPTO_MORUS1280 is not set
-CONFIG_CRYPTO_SEQIV=y
-CONFIG_CRYPTO_ECHAINIV=m
-
-#
-# Block modes
-#
-CONFIG_CRYPTO_CBC=y
-# CONFIG_CRYPTO_CFB is not set
-CONFIG_CRYPTO_CTR=y
-CONFIG_CRYPTO_CTS=m
-CONFIG_CRYPTO_ECB=y
-CONFIG_CRYPTO_LRW=m
-CONFIG_CRYPTO_PCBC=m
-CONFIG_CRYPTO_XTS=m
-# CONFIG_CRYPTO_KEYWRAP is not set
-
-#
-# Hash modes
-#
-CONFIG_CRYPTO_CMAC=m
-CONFIG_CRYPTO_HMAC=y
-CONFIG_CRYPTO_XCBC=m
-CONFIG_CRYPTO_VMAC=m
-
-#
-# Digest
-#
-CONFIG_CRYPTO_CRC32C=y
-CONFIG_CRYPTO_CRC32=m
-CONFIG_CRYPTO_CRCT10DIF=y
-CONFIG_CRYPTO_GHASH=y
-CONFIG_CRYPTO_POLY1305=m
-CONFIG_CRYPTO_MD4=m
-CONFIG_CRYPTO_MD5=y
-CONFIG_CRYPTO_MICHAEL_MIC=m
-CONFIG_CRYPTO_RMD128=m
-CONFIG_CRYPTO_RMD160=m
-CONFIG_CRYPTO_RMD256=m
-CONFIG_CRYPTO_RMD320=m
-CONFIG_CRYPTO_SHA1=y
-CONFIG_CRYPTO_SHA256=y
-CONFIG_CRYPTO_SHA512=m
-CONFIG_CRYPTO_SHA3=m
-# CONFIG_CRYPTO_SM3 is not set
-CONFIG_CRYPTO_TGR192=m
-CONFIG_CRYPTO_WP512=m
-
-#
-# Ciphers
-#
-CONFIG_CRYPTO_AES=y
-# CONFIG_CRYPTO_AES_TI is not set
-CONFIG_CRYPTO_ANUBIS=m
-CONFIG_CRYPTO_ARC4=m
-CONFIG_CRYPTO_BLOWFISH=m
-CONFIG_CRYPTO_BLOWFISH_COMMON=m
-CONFIG_CRYPTO_CAMELLIA=m
-CONFIG_CRYPTO_CAST_COMMON=m
-CONFIG_CRYPTO_CAST5=m
-CONFIG_CRYPTO_CAST6=m
-CONFIG_CRYPTO_DES=m
-CONFIG_CRYPTO_FCRYPT=m
-CONFIG_CRYPTO_KHAZAD=m
-CONFIG_CRYPTO_SALSA20=m
-CONFIG_CRYPTO_CHACHA20=m
-CONFIG_CRYPTO_SEED=m
-CONFIG_CRYPTO_SERPENT=m
-CONFIG_CRYPTO_SM4=m
-CONFIG_CRYPTO_TEA=m
-CONFIG_CRYPTO_TWOFISH=m
-CONFIG_CRYPTO_TWOFISH_COMMON=m
-
-#
-# Compression
-#
-CONFIG_CRYPTO_DEFLATE=y
-CONFIG_CRYPTO_LZO=y
-# CONFIG_CRYPTO_842 is not set
-CONFIG_CRYPTO_LZ4=m
-CONFIG_CRYPTO_LZ4HC=m
-# CONFIG_CRYPTO_ZSTD is not set
-
-#
-# Random Number Generation
-#
-CONFIG_CRYPTO_ANSI_CPRNG=m
-CONFIG_CRYPTO_DRBG_MENU=y
-CONFIG_CRYPTO_DRBG_HMAC=y
-CONFIG_CRYPTO_DRBG_HASH=y
-CONFIG_CRYPTO_DRBG_CTR=y
-CONFIG_CRYPTO_DRBG=y
-CONFIG_CRYPTO_JITTERENTROPY=y
-CONFIG_CRYPTO_USER_API=y
-CONFIG_CRYPTO_USER_API_HASH=y
-CONFIG_CRYPTO_USER_API_SKCIPHER=y
-CONFIG_CRYPTO_USER_API_RNG=y
-CONFIG_CRYPTO_USER_API_AEAD=y
-CONFIG_CRYPTO_HASH_INFO=y
-CONFIG_CRYPTO_HW=y
-CONFIG_CRYPTO_DEV_CCP=y
-CONFIG_CRYPTO_DEV_CCP_DD=m
-CONFIG_CRYPTO_DEV_SP_CCP=y
-CONFIG_CRYPTO_DEV_CCP_CRYPTO=m
-CONFIG_CRYPTO_DEV_CPT=m
-CONFIG_CAVIUM_CPT=m
-# CONFIG_CRYPTO_DEV_NITROX_CNN55XX is not set
-CONFIG_CRYPTO_DEV_CAVIUM_ZIP=m
-# CONFIG_CRYPTO_DEV_QCE is not set
-# CONFIG_CRYPTO_DEV_QCOM_RNG is not set
-CONFIG_CRYPTO_DEV_CHELSIO=m
-CONFIG_CHELSIO_IPSEC_INLINE=y
-# CONFIG_CRYPTO_DEV_CHELSIO_TLS is not set
-CONFIG_CRYPTO_DEV_VIRTIO=m
-# CONFIG_CRYPTO_DEV_CCREE is not set
-# CONFIG_CRYPTO_DEV_HISI_SEC is not set
-# CONFIG_CRYPTO_DEV_HISILICON is not set
-CONFIG_ASYMMETRIC_KEY_TYPE=y
-CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE=y
-CONFIG_X509_CERTIFICATE_PARSER=y
-CONFIG_PKCS7_MESSAGE_PARSER=y
-# CONFIG_PKCS7_TEST_KEY is not set
-CONFIG_SIGNED_PE_FILE_VERIFICATION=y
-# CONFIG_PGP_LIBRARY is not set
-# CONFIG_PGP_KEY_PARSER is not set
-# CONFIG_PGP_PRELOAD is not set
-
-#
-# Certificates for signature checking
-#
-CONFIG_MODULE_SIG_KEY="certs/signing_key.pem"
-CONFIG_SYSTEM_TRUSTED_KEYRING=y
-CONFIG_SYSTEM_TRUSTED_KEYS=""
-# CONFIG_SYSTEM_EXTRA_CERTIFICATE is not set
-# CONFIG_SECONDARY_TRUSTED_KEYRING is not set
-# CONFIG_SYSTEM_BLACKLIST_KEYRING is not set
-# CONFIG_PGP_PRELOAD_PUBLIC_KEYS is not set
-CONFIG_BINARY_PRINTF=y
-
-#
-# Library routines
-#
-CONFIG_RAID6_PQ=m
-CONFIG_BITREVERSE=y
-CONFIG_HAVE_ARCH_BITREVERSE=y
-CONFIG_RATIONAL=y
-CONFIG_GENERIC_STRNCPY_FROM_USER=y
-CONFIG_GENERIC_STRNLEN_USER=y
-CONFIG_GENERIC_NET_UTILS=y
-CONFIG_GENERIC_PCI_IOMAP=y
-CONFIG_ARCH_USE_CMPXCHG_LOCKREF=y
-CONFIG_ARCH_HAS_FAST_MULTIPLIER=y
-CONFIG_INDIRECT_PIO=y
-CONFIG_CRC_CCITT=y
-CONFIG_CRC16=y
-CONFIG_CRC_T10DIF=y
-CONFIG_CRC_ITU_T=m
-CONFIG_CRC32=y
-# CONFIG_CRC32_SELFTEST is not set
-CONFIG_CRC32_SLICEBY8=y
-# CONFIG_CRC32_SLICEBY4 is not set
-# CONFIG_CRC32_SARWATE is not set
-# CONFIG_CRC32_BIT is not set
-# CONFIG_CRC64 is not set
-# CONFIG_CRC4 is not set
-CONFIG_CRC7=m
-CONFIG_LIBCRC32C=m
-CONFIG_CRC8=m
-CONFIG_AUDIT_GENERIC=y
-CONFIG_AUDIT_ARCH_COMPAT_GENERIC=y
-CONFIG_AUDIT_COMPAT_GENERIC=y
-# CONFIG_RANDOM32_SELFTEST is not set
-CONFIG_ZLIB_INFLATE=y
-CONFIG_ZLIB_DEFLATE=y
-CONFIG_LZO_COMPRESS=y
-CONFIG_LZO_DECOMPRESS=y
-CONFIG_LZ4_COMPRESS=m
-CONFIG_LZ4HC_COMPRESS=m
-CONFIG_LZ4_DECOMPRESS=y
-CONFIG_XZ_DEC=y
-CONFIG_XZ_DEC_X86=y
-CONFIG_XZ_DEC_POWERPC=y
-CONFIG_XZ_DEC_IA64=y
-CONFIG_XZ_DEC_ARM=y
-CONFIG_XZ_DEC_ARMTHUMB=y
-CONFIG_XZ_DEC_SPARC=y
-CONFIG_XZ_DEC_BCJ=y
-# CONFIG_XZ_DEC_TEST is not set
-CONFIG_DECOMPRESS_GZIP=y
-CONFIG_DECOMPRESS_BZIP2=y
-CONFIG_DECOMPRESS_LZMA=y
-CONFIG_DECOMPRESS_XZ=y
-CONFIG_DECOMPRESS_LZO=y
-CONFIG_DECOMPRESS_LZ4=y
-CONFIG_GENERIC_ALLOCATOR=y
-CONFIG_REED_SOLOMON=m
-CONFIG_REED_SOLOMON_ENC8=y
-CONFIG_REED_SOLOMON_DEC8=y
-CONFIG_TEXTSEARCH=y
-CONFIG_TEXTSEARCH_KMP=m
-CONFIG_TEXTSEARCH_BM=m
-CONFIG_TEXTSEARCH_FSM=m
-CONFIG_BTREE=y
-CONFIG_INTERVAL_TREE=y
-CONFIG_RADIX_TREE_MULTIORDER=y
-CONFIG_ASSOCIATIVE_ARRAY=y
-CONFIG_HAS_IOMEM=y
-CONFIG_HAS_IOPORT_MAP=y
-CONFIG_HAS_DMA=y
-CONFIG_NEED_SG_DMA_LENGTH=y
-CONFIG_NEED_DMA_MAP_STATE=y
-CONFIG_ARCH_DMA_ADDR_T_64BIT=y
-CONFIG_HAVE_GENERIC_DMA_COHERENT=y
-CONFIG_DMA_DIRECT_OPS=y
-CONFIG_DMA_VIRT_OPS=y
-CONFIG_SWIOTLB=y
-CONFIG_SGL_ALLOC=y
-CONFIG_CHECK_SIGNATURE=y
-CONFIG_CPU_RMAP=y
-CONFIG_DQL=y
-CONFIG_GLOB=y
-# CONFIG_GLOB_SELFTEST is not set
-CONFIG_NLATTR=y
-CONFIG_CLZ_TAB=y
-CONFIG_CORDIC=m
-# CONFIG_DDR is not set
-CONFIG_IRQ_POLL=y
-CONFIG_MPILIB=y
-CONFIG_LIBFDT=y
-CONFIG_OID_REGISTRY=y
-CONFIG_UCS2_STRING=y
-CONFIG_FONT_SUPPORT=y
-# CONFIG_FONTS is not set
-CONFIG_FONT_8x8=y
-CONFIG_FONT_8x16=y
-CONFIG_SG_POOL=y
-CONFIG_ARCH_HAS_SG_CHAIN=y
-CONFIG_ARCH_HAS_PMEM_API=y
-CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE=y
-CONFIG_SBITMAP=y
-# CONFIG_STRING_SELFTEST is not set
-
-#
-# Kernel hacking
-#
-
-#
-# printk and dmesg options
-#
-CONFIG_PRINTK_TIME=y
-CONFIG_CONSOLE_LOGLEVEL_DEFAULT=7
-CONFIG_CONSOLE_LOGLEVEL_QUIET=4
-CONFIG_MESSAGE_LOGLEVEL_DEFAULT=4
-CONFIG_BOOT_PRINTK_DELAY=y
-CONFIG_DYNAMIC_DEBUG=y
-
-#
-# Compile-time checks and compiler options
-#
-CONFIG_DEBUG_INFO=y
-# CONFIG_DEBUG_INFO_REDUCED is not set
-# CONFIG_DEBUG_INFO_SPLIT is not set
-CONFIG_DEBUG_INFO_DWARF4=y
-# CONFIG_GDB_SCRIPTS is not set
-CONFIG_ENABLE_MUST_CHECK=y
-CONFIG_FRAME_WARN=2048
-CONFIG_STRIP_ASM_SYMS=y
-# CONFIG_READABLE_ASM is not set
-# CONFIG_UNUSED_SYMBOLS is not set
-# CONFIG_PAGE_OWNER is not set
-CONFIG_DEBUG_FS=y
-CONFIG_HEADERS_CHECK=y
-CONFIG_DEBUG_SECTION_MISMATCH=y
-CONFIG_SECTION_MISMATCH_WARN_ONLY=y
-CONFIG_ARCH_WANT_FRAME_POINTERS=y
-CONFIG_FRAME_POINTER=y
-# CONFIG_DEBUG_FORCE_WEAK_PER_CPU is not set
-CONFIG_MAGIC_SYSRQ=y
-CONFIG_MAGIC_SYSRQ_DEFAULT_ENABLE=0x1
-CONFIG_MAGIC_SYSRQ_SERIAL=y
-CONFIG_DEBUG_KERNEL=y
-
-#
-# Memory Debugging
-#
-# CONFIG_PAGE_EXTENSION is not set
-# CONFIG_DEBUG_PAGEALLOC is not set
-# CONFIG_PAGE_POISONING is not set
-# CONFIG_DEBUG_PAGE_REF is not set
-# CONFIG_DEBUG_RODATA_TEST is not set
-# CONFIG_DEBUG_OBJECTS is not set
-# CONFIG_SLUB_DEBUG_ON is not set
-# CONFIG_SLUB_STATS is not set
-CONFIG_HAVE_DEBUG_KMEMLEAK=y
-# CONFIG_DEBUG_KMEMLEAK is not set
-# CONFIG_DEBUG_STACK_USAGE is not set
-# CONFIG_DEBUG_VM is not set
-CONFIG_ARCH_HAS_DEBUG_VIRTUAL=y
-# CONFIG_DEBUG_VIRTUAL is not set
-CONFIG_DEBUG_MEMORY_INIT=y
-# CONFIG_DEBUG_PER_CPU_MAPS is not set
-CONFIG_HAVE_ARCH_KASAN=y
-# CONFIG_KASAN is not set
-CONFIG_ARCH_HAS_KCOV=y
-CONFIG_CC_HAS_SANCOV_TRACE_PC=y
-# CONFIG_KCOV is not set
-CONFIG_DEBUG_SHIRQ=y
-
-#
-# Debug Lockups and Hangs
-#
-CONFIG_LOCKUP_DETECTOR=y
-CONFIG_SOFTLOCKUP_DETECTOR=y
-# CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC is not set
-CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC_VALUE=0
-CONFIG_HARDLOCKUP_DETECTOR_PERF=y
-CONFIG_HARDLOCKUP_DETECTOR=y
-CONFIG_BOOTPARAM_HARDLOCKUP_PANIC=y
-CONFIG_BOOTPARAM_HARDLOCKUP_PANIC_VALUE=1
-CONFIG_DETECT_HUNG_TASK=y
-CONFIG_DEFAULT_HUNG_TASK_TIMEOUT=120
-# CONFIG_BOOTPARAM_HUNG_TASK_PANIC is not set
-CONFIG_BOOTPARAM_HUNG_TASK_PANIC_VALUE=0
-# CONFIG_WQ_WATCHDOG is not set
-CONFIG_PANIC_ON_OOPS=y
-CONFIG_PANIC_ON_OOPS_VALUE=1
-CONFIG_PANIC_TIMEOUT=0
-CONFIG_SCHED_DEBUG=y
-CONFIG_SCHED_INFO=y
-CONFIG_SCHEDSTATS=y
-# CONFIG_SCHED_STACK_END_CHECK is not set
-# CONFIG_DEBUG_TIMEKEEPING is not set
-
-#
-# Lock Debugging (spinlocks, mutexes, etc...)
-#
-CONFIG_LOCK_DEBUGGING_SUPPORT=y
-# CONFIG_PROVE_LOCKING is not set
-# CONFIG_LOCK_STAT is not set
-# CONFIG_DEBUG_RT_MUTEXES is not set
-# CONFIG_DEBUG_SPINLOCK is not set
-# CONFIG_DEBUG_MUTEXES is not set
-# CONFIG_DEBUG_WW_MUTEX_SLOWPATH is not set
-# CONFIG_DEBUG_RWSEMS is not set
-# CONFIG_DEBUG_LOCK_ALLOC is not set
-# CONFIG_DEBUG_ATOMIC_SLEEP is not set
-# CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set
-# CONFIG_LOCK_TORTURE_TEST is not set
-# CONFIG_WW_MUTEX_SELFTEST is not set
-CONFIG_STACKTRACE=y
-# CONFIG_WARN_ALL_UNSEEDED_RANDOM is not set
-# CONFIG_DEBUG_KOBJECT is not set
-CONFIG_HAVE_DEBUG_BUGVERBOSE=y
-CONFIG_DEBUG_BUGVERBOSE=y
-CONFIG_DEBUG_LIST=y
-# CONFIG_DEBUG_PI_LIST is not set
-# CONFIG_DEBUG_SG is not set
-# CONFIG_DEBUG_NOTIFIERS is not set
-# CONFIG_DEBUG_CREDENTIALS is not set
-
-#
-# RCU Debugging
-#
-# CONFIG_RCU_PERF_TEST is not set
-# CONFIG_RCU_TORTURE_TEST is not set
-CONFIG_RCU_CPU_STALL_TIMEOUT=60
-# CONFIG_RCU_TRACE is not set
-# CONFIG_RCU_EQS_DEBUG is not set
-# CONFIG_DEBUG_WQ_FORCE_RR_CPU is not set
-# CONFIG_DEBUG_BLOCK_EXT_DEVT is not set
-# CONFIG_CPU_HOTPLUG_STATE_CONTROL is not set
-# CONFIG_NOTIFIER_ERROR_INJECTION is not set
-# CONFIG_FAULT_INJECTION is not set
-# CONFIG_LATENCYTOP is not set
-CONFIG_NOP_TRACER=y
-CONFIG_HAVE_FUNCTION_TRACER=y
-CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
-CONFIG_HAVE_DYNAMIC_FTRACE=y
-CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
-CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
-CONFIG_HAVE_C_RECORDMCOUNT=y
-CONFIG_TRACER_MAX_TRACE=y
-CONFIG_TRACE_CLOCK=y
-CONFIG_RING_BUFFER=y
-CONFIG_EVENT_TRACING=y
-CONFIG_CONTEXT_SWITCH_TRACER=y
-CONFIG_TRACING=y
-CONFIG_GENERIC_TRACER=y
-CONFIG_TRACING_SUPPORT=y
-CONFIG_FTRACE=y
-CONFIG_FUNCTION_TRACER=y
-CONFIG_FUNCTION_GRAPH_TRACER=y
-# CONFIG_PREEMPTIRQ_EVENTS is not set
-# CONFIG_IRQSOFF_TRACER is not set
-CONFIG_SCHED_TRACER=y
-CONFIG_HWLAT_TRACER=y
-CONFIG_FTRACE_SYSCALLS=y
-CONFIG_TRACER_SNAPSHOT=y
-# CONFIG_TRACER_SNAPSHOT_PER_CPU_SWAP is not set
-CONFIG_BRANCH_PROFILE_NONE=y
-# CONFIG_PROFILE_ANNOTATED_BRANCHES is not set
-CONFIG_STACK_TRACER=y
-CONFIG_BLK_DEV_IO_TRACE=y
-CONFIG_KPROBE_EVENTS=y
-CONFIG_UPROBE_EVENTS=y
-CONFIG_BPF_EVENTS=y
-CONFIG_PROBE_EVENTS=y
-CONFIG_DYNAMIC_FTRACE=y
-# CONFIG_FUNCTION_PROFILER is not set
-CONFIG_FTRACE_MCOUNT_RECORD=y
-# CONFIG_FTRACE_STARTUP_TEST is not set
-CONFIG_TRACING_MAP=y
-CONFIG_HIST_TRIGGERS=y
-# CONFIG_TRACEPOINT_BENCHMARK is not set
-CONFIG_RING_BUFFER_BENCHMARK=m
-# CONFIG_RING_BUFFER_STARTUP_TEST is not set
-# CONFIG_PREEMPTIRQ_DELAY_TEST is not set
-# CONFIG_TRACE_EVAL_MAP_FILE is not set
-# CONFIG_TRACING_EVENTS_GPIO is not set
-# CONFIG_DMA_API_DEBUG is not set
-CONFIG_RUNTIME_TESTING_MENU=y
-# CONFIG_LKDTM is not set
-# CONFIG_TEST_LIST_SORT is not set
-# CONFIG_TEST_SORT is not set
-# CONFIG_KPROBES_SANITY_TEST is not set
-# CONFIG_BACKTRACE_SELF_TEST is not set
-# CONFIG_RBTREE_TEST is not set
-# CONFIG_INTERVAL_TREE_TEST is not set
-# CONFIG_PERCPU_TEST is not set
-CONFIG_ATOMIC64_SELFTEST=y
-CONFIG_ASYNC_RAID6_TEST=m
-# CONFIG_TEST_HEXDUMP is not set
-# CONFIG_TEST_STRING_HELPERS is not set
-CONFIG_TEST_KSTRTOX=y
-# CONFIG_TEST_PRINTF is not set
-# CONFIG_TEST_BITMAP is not set
-# CONFIG_TEST_BITFIELD is not set
-# CONFIG_TEST_UUID is not set
-# CONFIG_TEST_OVERFLOW is not set
-# CONFIG_TEST_RHASHTABLE is not set
-# CONFIG_TEST_HASH is not set
-# CONFIG_TEST_IDA is not set
-# CONFIG_TEST_LKM is not set
-# CONFIG_TEST_USER_COPY is not set
-# CONFIG_TEST_BPF is not set
-# CONFIG_FIND_BIT_BENCHMARK is not set
-# CONFIG_TEST_FIRMWARE is not set
-# CONFIG_TEST_SYSCTL is not set
-# CONFIG_TEST_UDELAY is not set
-# CONFIG_TEST_STATIC_KEYS is not set
-# CONFIG_TEST_KMOD is not set
-# CONFIG_TEST_FREE_PAGES is not set
-# CONFIG_MEMTEST is not set
-# CONFIG_BUG_ON_DATA_CORRUPTION is not set
-# CONFIG_SAMPLES is not set
-CONFIG_HAVE_ARCH_KGDB=y
-CONFIG_KGDB=y
-CONFIG_KGDB_SERIAL_CONSOLE=y
-CONFIG_KGDB_TESTS=y
-# CONFIG_KGDB_TESTS_ON_BOOT is not set
-CONFIG_KGDB_KDB=y
-CONFIG_KDB_DEFAULT_ENABLE=0x0
-CONFIG_KDB_KEYBOARD=y
-CONFIG_KDB_CONTINUE_CATASTROPHIC=0
-CONFIG_ARCH_HAS_UBSAN_SANITIZE_ALL=y
-# CONFIG_UBSAN is not set
-CONFIG_ARCH_HAS_DEVMEM_IS_ALLOWED=y
-CONFIG_STRICT_DEVMEM=y
-CONFIG_IO_STRICT_DEVMEM=y
-# CONFIG_ARM64_PTDUMP_DEBUGFS is not set
-# CONFIG_PID_IN_CONTEXTIDR is not set
-# CONFIG_ARM64_RANDOMIZE_TEXT_OFFSET is not set
-# CONFIG_DEBUG_WX is not set
-# CONFIG_DEBUG_ALIGN_RODATA is not set
-# CONFIG_DEBUG_EFI is not set
-# CONFIG_ARM64_RELOC_TEST is not set
-# CONFIG_CORESIGHT is not set
--
2.25.1

[PATCH OLK-5.10 00/20] arm: reduce p2v alignment requirement to 2 MiB and adrl replacement
by Zhao Hongjiang 23 Jan '21
23 Jan '21
http://openeuler.huawei.com/bugzilla/show_bug.cgi?id=46882
Ard Biesheuvel (20):
ARM: p2v: fix handling of LPAE translation in BE mode
ARM: assembler: introduce adr_l, ldr_l and str_l macros
ARM: module: add support for place relative relocations
ARM: p2v: move patching code to separate assembler source file
ARM: p2v: factor out shared loop processing
ARM: p2v: factor out BE8 handling
ARM: p2v: drop redundant 'type' argument from __pv_stub
ARM: p2v: use relative references in patch site arrays
ARM: p2v: simplify __fixup_pv_table()
ARM: p2v: switch to MOVW for Thumb2 and ARM/LPAE
ARM: p2v: reduce p2v alignment requirement to 2 MiB
ARM: efistub: replace adrl pseudo-op with adr_l macro invocation
ARM: head-common.S: use PC-relative insn sequence for __proc_info
ARM: head-common.S: use PC-relative insn sequence for idmap creation
ARM: head.S: use PC-relative insn sequence for secondary_data
ARM: kernel: use relative references for UP/SMP alternatives
ARM: head: use PC-relative insn sequence for __smp_alt
ARM: sleep.S: use PC-relative insn sequence for
sleep_save_sp/mpidr_hash
ARM: head.S: use PC relative insn sequence to calculate PHYS_OFFSET
ARM: kvm: replace open coded VA->PA calculations with adr_l call
arch/arm/Kconfig | 2 +-
arch/arm/boot/compressed/head.S | 18 +--
arch/arm/include/asm/assembler.h | 88 +++++++++++-
arch/arm/include/asm/elf.h | 5 +
arch/arm/include/asm/memory.h | 57 +++++---
arch/arm/include/asm/processor.h | 2 +-
arch/arm/kernel/Makefile | 1 +
arch/arm/kernel/head-common.S | 22 +--
arch/arm/kernel/head.S | 205 ++------------------------
arch/arm/kernel/hyp-stub.S | 27 ++--
arch/arm/kernel/module.c | 20 ++-
arch/arm/kernel/phys2virt.S | 238 +++++++++++++++++++++++++++++++
arch/arm/kernel/sleep.S | 19 +--
13 files changed, 431 insertions(+), 273 deletions(-)
create mode 100644 arch/arm/kernel/phys2virt.S
--
2.25.1

[PATCH OLK-5.10 v2 0/2] config: add initial openeuler_defconfig for arm64 & x86
by Xie XiuQi 18 Jan '21
18 Jan '21
Add initial configs for the arm64 & x86 platforms.
Use openEuler-20.03's config as the base and re-generate it on the 5.10
kernel.
The major changes:
- use 52 bit VA & PA
- enable CONFIG_PSI
- support more kunpeng drivers on arm64
Xie XiuQi (2):
config: add initial openeuler_defconfig for arm64
config: add initial openeuler_defconfig for x86
arch/arm64/configs/openeuler_defconfig | 7069 ++++++++++++++++++++
arch/x86/configs/openeuler_defconfig | 8273 ++++++++++++++++++++++++
2 files changed, 15342 insertions(+)
create mode 100644 arch/arm64/configs/openeuler_defconfig
create mode 100644 arch/x86/configs/openeuler_defconfig
--
2.20.1

18 Jan '21
From: Yang Shi <yang.shi(a)linux.alibaba.com>
mainline inclusion
from mainline-v5.4-rc1
commit 364c1eebe453f06f0c1e837eb155a5725c9cd272
category: bugfix
bugzilla: 47240
CVE: NA
-------------------------------------------------
Patch series "Make deferred split shrinker memcg aware", v6.
Currently the THP deferred split shrinker is not memcg aware, which may
cause premature OOM with some configurations. For example, the test below
runs into premature OOM easily:
$ cgcreate -g memory:thp
$ echo 4G > /sys/fs/cgroup/memory/thp/memory/limit_in_bytes
$ cgexec -g memory:thp transhuge-stress 4000
transhuge-stress comes from the kernel selftests.
It is easy to hit OOM, but there are still a lot of THPs on the deferred
split queue; memcg direct reclaim can't touch them since the deferred split
shrinker is not memcg aware.
Convert the deferred split shrinker to be memcg aware by introducing a
per-memcg deferred split queue. A THP should be on either the per-node or
the per-memcg deferred split queue, depending on whether it belongs to a
memcg. When the page is migrated to another memcg, it is moved to the
target memcg's deferred split queue too.
Reuse the second tail page's deferred_list for the per-memcg list, since
the same THP can't be on multiple deferred split queues.
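Concretely, the queue layout the series converges on looks roughly like the
sketch below. The deferred_split struct and the pglist_data field match the
diff in this patch; the mem_cgroup field is only added by a later patch in
the series (the field name matches mainline, but treat the sketch as
illustrative rather than a verbatim copy of this backport):
#include <linux/spinlock.h>
#include <linux/list.h>
/* One deferred split queue: a lock, the list of THPs, and its length. */
struct deferred_split {
	spinlock_t split_queue_lock;
	struct list_head split_queue;
	unsigned long split_queue_len;
};
typedef struct pglist_data {
	/* ... */
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
	/* per-node queue, used for THPs not charged to any memcg */
	struct deferred_split deferred_split_queue;
#endif
	/* ... */
} pg_data_t;
struct mem_cgroup {
	/* ... */
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
	/* per-memcg queue, used for THPs charged to this memcg */
	struct deferred_split deferred_split_queue;
#endif
	/* ... */
};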
Make the deferred split shrinker not depend on memcg kmem, since it is not
slab. It doesn't make sense to skip shrinking THPs just because memcg kmem
is disabled.
With the above change, the test demonstrated above doesn't trigger OOM even
with cgroup.memory=nokmem.
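The memcg awareness itself comes through the standard shrinker interface. By
the end of the series, mainline registers the shrinker roughly as below
(SHRINKER_NONSLAB is the flag introduced for the "not depend on memcg kmem"
part; take this as a sketch of the mainline end state, not of this backport):
static struct shrinker deferred_split_shrinker = {
	.count_objects	= deferred_split_count,
	.scan_objects	= deferred_split_scan,
	.seeks		= DEFAULT_SEEKS,
	/* visible to memcg-targeted reclaim, even with kmem accounting off */
	.flags		= SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE |
			  SHRINKER_NONSLAB,
};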
This patch (of 4):
Put split_queue, split_queue_lock and split_queue_len into a struct in
order to reduce code duplication when we make deferred_split memcg aware
in the later patches.
Link: http://lkml.kernel.org/r/1565144277-36240-2-git-send-email-yang.shi@linux.a…
Signed-off-by: Yang Shi <yang.shi(a)linux.alibaba.com>
Suggested-by: "Kirill A . Shutemov" <kirill.shutemov(a)linux.intel.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov(a)linux.intel.com>
Reviewed-by: Kirill Tkhai <ktkhai(a)virtuozzo.com>
Cc: Johannes Weiner <hannes(a)cmpxchg.org>
Cc: Michal Hocko <mhocko(a)suse.com>
Cc: Hugh Dickins <hughd(a)google.com>
Cc: Shakeel Butt <shakeelb(a)google.com>
Cc: David Rientjes <rientjes(a)google.com>
Cc: Qian Cai <cai(a)lca.pw>
Cc: Vladimir Davydov <vdavydov.dev(a)gmail.com>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Liu Shixin <liushixin2(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
include/linux/mmzone.h | 12 ++++++++---
mm/huge_memory.c | 45 +++++++++++++++++++++++-------------------
mm/page_alloc.c | 8 +++++---
3 files changed, 39 insertions(+), 26 deletions(-)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 1d7c5dd03ed89..3bd2f5e2a344f 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -612,6 +612,14 @@ struct zonelist {
extern struct page *mem_map;
#endif
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+struct deferred_split {
+ spinlock_t split_queue_lock;
+ struct list_head split_queue;
+ unsigned long split_queue_len;
+};
+#endif
+
/*
* On NUMA machines, each NUMA node would have a pg_data_t to describe
* it's memory layout. On UMA machines there is a single pglist_data which
@@ -698,9 +706,7 @@ typedef struct pglist_data {
#endif /* CONFIG_DEFERRED_STRUCT_PAGE_INIT */
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
- spinlock_t split_queue_lock;
- struct list_head split_queue;
- unsigned long split_queue_len;
+ struct deferred_split deferred_split_queue;
#endif
/* Fields commonly accessed by the page reclaim scanner */
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c2013b3e92e74..936092f8d4b16 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2697,6 +2697,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
{
struct page *head = compound_head(page);
struct pglist_data *pgdata = NODE_DATA(page_to_nid(head));
+ struct deferred_split *ds_queue = &pgdata->deferred_split_queue;
struct anon_vma *anon_vma = NULL;
struct address_space *mapping = NULL;
int count, mapcount, extra_pins, ret;
@@ -2786,17 +2787,17 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
}
/* Prevent deferred_split_scan() touching ->_refcount */
- spin_lock(&pgdata->split_queue_lock);
+ spin_lock(&ds_queue->split_queue_lock);
count = page_count(head);
mapcount = total_mapcount(head);
if (!mapcount && page_ref_freeze(head, 1 + extra_pins)) {
if (!list_empty(page_deferred_list(head))) {
- pgdata->split_queue_len--;
+ ds_queue->split_queue_len--;
list_del(page_deferred_list(head));
}
if (mapping)
__dec_node_page_state(page, NR_SHMEM_THPS);
- spin_unlock(&pgdata->split_queue_lock);
+ spin_unlock(&ds_queue->split_queue_lock);
__split_huge_page(page, list, end, flags);
ret = 0;
} else {
@@ -2808,7 +2809,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
dump_page(page, "total_mapcount(head) > 0");
BUG();
}
- spin_unlock(&pgdata->split_queue_lock);
+ spin_unlock(&ds_queue->split_queue_lock);
fail: if (mapping)
xa_unlock(&mapping->i_pages);
spin_unlock_irqrestore(zone_lru_lock(page_zone(head)), flags);
@@ -2831,52 +2832,56 @@ fail: if (mapping)
void free_transhuge_page(struct page *page)
{
struct pglist_data *pgdata = NODE_DATA(page_to_nid(page));
+ struct deferred_split *ds_queue = &pgdata->deferred_split_queue;
unsigned long flags;
- spin_lock_irqsave(&pgdata->split_queue_lock, flags);
+ spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
if (!list_empty(page_deferred_list(page))) {
- pgdata->split_queue_len--;
+ ds_queue->split_queue_len--;
list_del(page_deferred_list(page));
}
- spin_unlock_irqrestore(&pgdata->split_queue_lock, flags);
+ spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
free_compound_page(page);
}
void deferred_split_huge_page(struct page *page)
{
struct pglist_data *pgdata = NODE_DATA(page_to_nid(page));
+ struct deferred_split *ds_queue = &pgdata->deferred_split_queue;
unsigned long flags;
VM_BUG_ON_PAGE(!PageTransHuge(page), page);
- spin_lock_irqsave(&pgdata->split_queue_lock, flags);
+ spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
if (list_empty(page_deferred_list(page))) {
count_vm_event(THP_DEFERRED_SPLIT_PAGE);
- list_add_tail(page_deferred_list(page), &pgdata->split_queue);
- pgdata->split_queue_len++;
+ list_add_tail(page_deferred_list(page), &ds_queue->split_queue);
+ ds_queue->split_queue_len++;
}
- spin_unlock_irqrestore(&pgdata->split_queue_lock, flags);
+ spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
}
static unsigned long deferred_split_count(struct shrinker *shrink,
struct shrink_control *sc)
{
struct pglist_data *pgdata = NODE_DATA(sc->nid);
- return READ_ONCE(pgdata->split_queue_len);
+ struct deferred_split *ds_queue = &pgdata->deferred_split_queue;
+ return READ_ONCE(ds_queue->split_queue_len);
}
static unsigned long deferred_split_scan(struct shrinker *shrink,
struct shrink_control *sc)
{
struct pglist_data *pgdata = NODE_DATA(sc->nid);
+ struct deferred_split *ds_queue = &pgdata->deferred_split_queue;
unsigned long flags;
LIST_HEAD(list), *pos, *next;
struct page *page;
int split = 0;
- spin_lock_irqsave(&pgdata->split_queue_lock, flags);
+ spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
/* Take pin on all head pages to avoid freeing them under us */
- list_for_each_safe(pos, next, &pgdata->split_queue) {
+ list_for_each_safe(pos, next, &ds_queue->split_queue) {
page = list_entry((void *)pos, struct page, mapping);
page = compound_head(page);
if (get_page_unless_zero(page)) {
@@ -2884,12 +2889,12 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
} else {
/* We lost race with put_compound_page() */
list_del_init(page_deferred_list(page));
- pgdata->split_queue_len--;
+ ds_queue->split_queue_len--;
}
if (!--sc->nr_to_scan)
break;
}
- spin_unlock_irqrestore(&pgdata->split_queue_lock, flags);
+ spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
list_for_each_safe(pos, next, &list) {
page = list_entry((void *)pos, struct page, mapping);
@@ -2903,15 +2908,15 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
put_page(page);
}
- spin_lock_irqsave(&pgdata->split_queue_lock, flags);
- list_splice_tail(&list, &pgdata->split_queue);
- spin_unlock_irqrestore(&pgdata->split_queue_lock, flags);
+ spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
+ list_splice_tail(&list, &ds_queue->split_queue);
+ spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
/*
* Stop shrinker if we didn't split any page, but the queue is empty.
* This can happen if pages were freed under us.
*/
- if (!split && list_empty(&pgdata->split_queue))
+ if (!split && list_empty(&ds_queue->split_queue))
return SHRINK_STOP;
return split;
}
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 67768c56d412c..91d820248690c 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6371,9 +6371,11 @@ static unsigned long __init calc_memmap_size(unsigned long spanned_pages,
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
static void pgdat_init_split_queue(struct pglist_data *pgdat)
{
- spin_lock_init(&pgdat->split_queue_lock);
- INIT_LIST_HEAD(&pgdat->split_queue);
- pgdat->split_queue_len = 0;
+ struct deferred_split *ds_queue = &pgdat->deferred_split_queue;
+
+ spin_lock_init(&ds_queue->split_queue_lock);
+ INIT_LIST_HEAD(&ds_queue->split_queue);
+ ds_queue->split_queue_len = 0;
}
#else
static void pgdat_init_split_queue(struct pglist_data *pgdat) {}
--
2.25.1
Andreas Kemnade (1):
ARM: OMAP2+: omap_device: fix idling of devices during probe
Arnd Bergmann (4):
wil6210: select CONFIG_CRC32
block: rsxx: select CONFIG_CRC32
lightnvm: select CONFIG_CRC32
wan: ds26522: select CONFIG_BITREVERSE
Ayush Sawal (6):
chtls: Fix hardware tid leak
chtls: Remove invalid set_tcb call
chtls: Fix panic when route to peer not configured
chtls: Replace skb_dequeue with skb_peek
chtls: Added a check to avoid NULL pointer dereference
chtls: Fix chtls resources release sequence
Chris Wilson (1):
drm/i915: Fix mismatch between misplaced vma check and vma insert
Christophe JAILLET (2):
net/sonic: Fix some resource leaks in error handling paths
dmaengine: mediatek: mtk-hsdma: Fix a resource leak in the error
handling path of the probe function
Chunyan Zhang (1):
i2c: sprd: use a specific timeout to avoid system hang up issue
Colin Ian King (1):
cpufreq: powernow-k8: pass policy rather than use cpufreq_cpu_get()
Dan Carpenter (1):
regmap: debugfs: Fix a reversed if statement in regmap_debugfs_init()
Dinghao Liu (3):
iommu/intel: Fix memleak in intel_irq_remapping_alloc
net/mlx5e: Fix memleak in mlx5e_create_l2_table_groups
net/mlx5e: Fix two double free cases
Fenghua Yu (2):
x86/resctrl: Use an IPI instead of task_work_add() to update PQR_ASSOC
MSR
x86/resctrl: Don't move a task to the same resource group
Florian Westphal (2):
net: ip: always refragment ip defragmented packets
net: fix pmtu check in nopmtudisc mode
Greg Kroah-Hartman (1):
Linux 4.19.168
Jakub Kicinski (1):
net: vlan: avoid leaks on register_vlan_dev() failures
Jouni K. Seppänen (1):
net: cdc_ncm: correct overhead in delayed_ndp_size
Lorenzo Bianconi (1):
iio: imu: st_lsm6dsx: fix edge-trigger interrupts
Lukas Wunner (1):
spi: pxa2xx: Fix use-after-free on unbind
Marc Zyngier (1):
KVM: arm64: Don't access PMCR_EL0 when no PMU is available
Ming Lei (1):
block: fix use-after-free in disk_part_iter_next
Nick Desaulniers (1):
vmlinux.lds.h: Add PGO and AutoFDO input sections
Ping Cheng (1):
HID: wacom: Fix memory leakage caused by kfifo_alloc
Roman Guskov (1):
spi: stm32: FIFO threshold level - fix align packet size
Samuel Holland (2):
net: stmmac: dwmac-sun8i: Balance internal PHY resource references
net: stmmac: dwmac-sun8i: Balance internal PHY power
Sean Nyekjaer (1):
iio: imu: st_lsm6dsx: flip irq return logic
Sean Tranchetti (1):
net: ipv6: fib: flush exceptions when purging route
Shravya Kumbham (3):
dmaengine: xilinx_dma: check dma_async_device_register return value
dmaengine: xilinx_dma: fix incompatible param warning in
_child_probe()
dmaengine: xilinx_dma: fix mixed_enum_type coverity warning
Vasily Averin (1):
net: drop bogus skb with CHECKSUM_PARTIAL and offset beyond end of
trimmed packet
Xiaolei Wang (1):
regmap: debugfs: Fix a memory leak when calling regmap_attach_dev
Makefile | 2 +-
arch/arm/mach-omap2/omap_device.c | 8 +-
arch/arm64/kvm/sys_regs.c | 4 +
arch/x86/kernel/cpu/intel_rdt_rdtgroup.c | 113 ++++++++----------
block/genhd.c | 9 +-
drivers/base/regmap/regmap-debugfs.c | 9 +-
drivers/block/Kconfig | 1 +
drivers/cpufreq/powernow-k8.c | 9 +-
drivers/crypto/chelsio/chtls/chtls_cm.c | 68 ++++-------
drivers/dma/mediatek/mtk-hsdma.c | 1 +
drivers/dma/xilinx/xilinx_dma.c | 11 +-
drivers/gpu/drm/i915/i915_gem_execbuffer.c | 2 +-
drivers/hid/wacom_sys.c | 35 +++++-
drivers/i2c/busses/i2c-sprd.c | 8 +-
.../iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c | 26 +++-
drivers/iommu/intel_irq_remapping.c | 2 +
drivers/lightnvm/Kconfig | 1 +
.../net/ethernet/mellanox/mlx5/core/en_fs.c | 3 +
drivers/net/ethernet/natsemi/macsonic.c | 12 +-
drivers/net/ethernet/natsemi/xtsonic.c | 7 +-
.../net/ethernet/stmicro/stmmac/dwmac-sun8i.c | 58 ++++++---
drivers/net/usb/cdc_ncm.c | 8 +-
drivers/net/wan/Kconfig | 1 +
drivers/net/wireless/ath/wil6210/Kconfig | 1 +
drivers/spi/spi-pxa2xx.c | 3 +-
drivers/spi/spi-stm32.c | 4 +-
include/asm-generic/vmlinux.lds.h | 5 +-
net/8021q/vlan.c | 3 +-
net/core/skbuff.c | 6 +
net/ipv4/ip_output.c | 2 +-
net/ipv4/ip_tunnel.c | 10 +-
net/ipv6/ip6_fib.c | 5 +-
32 files changed, 265 insertions(+), 172 deletions(-)
--
2.25.1

[PATCH openEuler-1.0-LTS] HID: core: Correctly handle ReportSize being zero
by Yang Yingliang 18 Jan '21
From: Marc Zyngier <maz(a)kernel.org>
stable inclusion
from linux-4.19.144
commit abae259fdccc5e41ff302dd80a2b944ce385c970
CVE: CVE-2020-0465
--------------------------------
commit bce1305c0ece3dc549663605e567655dd701752c upstream.
It appears that a ReportSize value of zero is legal, even if a bit
nonsensical. Most of the HID code seems to handle that gracefully,
except when computing the total size in bytes. When fed as input to
memset, this leads to some funky outcomes.
Detect the corner case and correctly compute the size.
Cc: stable(a)vger.kernel.org
Signed-off-by: Marc Zyngier <maz(a)kernel.org>
Signed-off-by: Benjamin Tissoires <benjamin.tissoires(a)gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/hid/hid-core.c | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
index 3a359716fb38..3178580db7fa 100644
--- a/drivers/hid/hid-core.c
+++ b/drivers/hid/hid-core.c
@@ -1419,6 +1419,17 @@ static void hid_output_field(const struct hid_device *hid,
}
}
+/*
+ * Compute the size of a report.
+ */
+static size_t hid_compute_report_size(struct hid_report *report)
+{
+ if (report->size)
+ return ((report->size - 1) >> 3) + 1;
+
+ return 0;
+}
+
/*
* Create a report. 'data' has to be allocated using
* hid_alloc_report_buf() so that it has proper size.
@@ -1431,7 +1442,7 @@ void hid_output_report(struct hid_report *report, __u8 *data)
if (report->id > 0)
*data++ = report->id;
- memset(data, 0, ((report->size - 1) >> 3) + 1);
+ memset(data, 0, hid_compute_report_size(report));
for (n = 0; n < report->maxfield; n++)
hid_output_field(report->device, report->field[n], data);
}
@@ -1558,7 +1569,7 @@ int hid_report_raw_event(struct hid_device *hid, int type, u8 *data, u32 size,
csize--;
}
- rsize = ((report->size - 1) >> 3) + 1;
+ rsize = hid_compute_report_size(report);
if (rsize > HID_MAX_BUFFER_SIZE)
rsize = HID_MAX_BUFFER_SIZE;
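To make the corner case concrete: report->size is an unsigned quantity, so when
it is zero the old expression underflows before the shift and memset() is handed
an enormous length. A small standalone demonstration (userspace C, for
illustration only):

#include <stdio.h>

int main(void)
{
	unsigned int size = 0;
	/* Old formula: (0 - 1) wraps to UINT_MAX before the shift. */
	unsigned int old_rsize = ((size - 1) >> 3) + 1;
	/* Patched logic: guard the zero case explicitly. */
	unsigned int new_rsize = size ? ((size - 1) >> 3) + 1 : 0;

	printf("old: %u bytes, new: %u bytes\n", old_rsize, new_rsize);
	return 0;
}

On a 32-bit unsigned type this prints old: 536870912 bytes, new: 0 bytes, which
is the kind of memset() length the patch is guarding against.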
--
2.25.1

[PATCH kernel-4.19 1/2] ascend: share pool: optimize the big lock for memory processing
by Yang Yingliang 18 Jan '21
From: Ding Tianhong <dingtianhong(a)huawei.com>
ascend inclusion
category: feature
bugzilla: NA
CVE: NA
-------------------------------------------------
The sp_mutex is used to protect all critical paths for the share pool.
It has seriously affected the performance of the memory alloc and
release interfaces when there are many processes in the same memory
group, and it seriously breaks the scalability of the system, so add a
new read semaphore lock instead of the big lock for the allocation and
release critical paths.
The scalability has been greatly improved by this modification.
Test results:
                   number of processes   alloc 4M avg time
Before the patch:  1                     32us
                   3                     96us
                   10                    330us
After the patch:   1                     32us
                   3                     40us
                   10                    60us
v2: fix some conflicts and clean some code.
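In outline, the new scheme keeps sp_mutex for group lookup and statistics but
lets the hot paths run concurrently under the per-group rw_semaphore. A minimal
sketch of the pattern; do_alloc() is a hypothetical placeholder, not actual
share pool code:

/* Readers (alloc/free) may run in parallel within one group. */
static void *group_alloc(struct sp_group *spg, unsigned long size)
{
	void *p;

	down_read(&spg->rw_lock);
	p = do_alloc(spg, size);	/* hypothetical worker */
	up_read(&spg->rw_lock);
	return p;
}

/* Writers (membership changes) exclude everyone. */
static void group_add(struct sp_group *spg, struct mm_struct *mm)
{
	down_write(&spg->rw_lock);
	list_add_tail(&mm->sp_node, &spg->procs);
	up_write(&spg->rw_lock);
}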
Signed-off-by: Ding Tianhong <dingtianhong(a)huawei.com>
Reviewed-by: Tang Yizhou <tangyizhou(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
include/linux/share_pool.h | 5 ++
kernel/fork.c | 4 +-
mm/share_pool.c | 170 ++++++++++++++++++++-----------------
3 files changed, 100 insertions(+), 79 deletions(-)
diff --git a/include/linux/share_pool.h b/include/linux/share_pool.h
index 70b841d0eb8e..f2d17cb85fa5 100644
--- a/include/linux/share_pool.h
+++ b/include/linux/share_pool.h
@@ -93,6 +93,8 @@ struct sp_group {
unsigned long dvpp_va_start;
unsigned long dvpp_size;
atomic_t use_count;
+ /* protect the group internal elements */
+ struct rw_semaphore rw_lock;
};
struct sp_walk_data {
@@ -238,6 +240,8 @@ extern void *vmalloc_hugepage_user(unsigned long size);
extern void *buff_vzalloc_user(unsigned long size);
extern void *buff_vzalloc_hugepage_user(unsigned long size);
+void sp_exit_mm(struct mm_struct *mm);
+
#else
static inline int sp_group_add_task(int pid, int spg_id)
@@ -400,6 +404,7 @@ static inline void *buff_vzalloc_hugepage_user(unsigned long size)
{
return NULL;
}
+
#endif
#endif /* LINUX_SHARE_POOL_H */
diff --git a/kernel/fork.c b/kernel/fork.c
index d1d8ac083c80..61496b70cfb8 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1055,8 +1055,6 @@ static inline void __mmput(struct mm_struct *mm)
{
VM_BUG_ON(atomic_read(&mm->mm_users));
- sp_group_exit(mm);
-
uprobe_clear_state(mm);
exit_aio(mm);
ksm_exit(mm);
@@ -1084,6 +1082,8 @@ void mmput(struct mm_struct *mm)
{
might_sleep();
+ sp_group_exit(mm);
+
if (atomic_dec_and_test(&mm->mm_users))
__mmput(mm);
}
diff --git a/mm/share_pool.c b/mm/share_pool.c
index e326c95104da..27792a641401 100644
--- a/mm/share_pool.c
+++ b/mm/share_pool.c
@@ -197,6 +197,16 @@ static bool host_svm_sp_enable = false;
int sysctl_share_pool_hugepage_enable = 1;
+static void free_sp_group(struct sp_group *spg);
+
+static bool sp_group_get(struct sp_group *spg)
+{
+ if (spg_valid(spg) && atomic_inc_not_zero(&spg->use_count))
+ return true;
+
+ return false;
+}
+
static unsigned long spa_size(struct sp_area *spa)
{
return spa->real_size;
@@ -337,7 +347,9 @@ static struct sp_group *__sp_find_spg(int pid, int spg_id)
put_task_struct(tsk);
} else {
+ mutex_lock(&sp_mutex);
spg = idr_find(&sp_group_idr, spg_id);
+ mutex_unlock(&sp_mutex);
}
return spg;
@@ -392,6 +404,8 @@ static struct sp_group *find_or_alloc_sp_group(int spg_id)
INIT_LIST_HEAD(&spg->procs);
INIT_LIST_HEAD(&spg->spa_list);
+ init_rwsem(&spg->rw_lock);
+
ret = idr_alloc(&sp_group_idr, spg, spg_id, spg_id+1,
GFP_KERNEL);
if (ret < 0) {
@@ -422,9 +436,8 @@ static struct sp_group *find_or_alloc_sp_group(int spg_id)
goto out_fput;
}
} else {
- if (!spg_valid(spg))
+ if (!sp_group_get(spg))
return ERR_PTR(-ENODEV);
- atomic_inc(&spg->use_count);
}
return spg;
@@ -607,6 +620,8 @@ int sp_group_add_task(int pid, int spg_id)
}
mm->sp_group = spg;
+
+ down_write(&spg->rw_lock);
/* We reactive the spg even the spg exists already. */
spg->is_alive = true;
list_add_tail(&mm->sp_node, &spg->procs);
@@ -675,11 +690,14 @@ int sp_group_add_task(int pid, int spg_id)
mm->sp_group = NULL;
}
+ up_write(&spg->rw_lock);
out_drop_group:
if (unlikely(ret))
__sp_group_drop_locked(spg);
out_put_mm:
- mmput(mm);
+ /* No need to put the mm if the sp group added this mm successfully. */
+ if (unlikely(ret))
+ mmput(mm);
out_put_task:
put_task_struct(tsk);
out_unlock:
@@ -712,44 +730,12 @@ static void spg_exit_unlock(bool unlock)
mutex_unlock(&sp_mutex);
}
-/*
- * Do cleanup when a process exits.
- */
-void sp_group_exit(struct mm_struct *mm)
-{
- bool is_alive = true;
- bool unlock;
-
- /*
- * Nothing to do if this thread group doesn't belong to any sp_group.
- * No need to protect this check with lock because we can add a task
- * to a group if !PF_EXITING.
- */
- if (!mm->sp_group)
- return;
-
- spg_exit_lock(&unlock);
- if (list_is_singular(&mm->sp_group->procs))
- is_alive = mm->sp_group->is_alive = false;
- list_del(&mm->sp_node);
- spg_exit_unlock(unlock);
-
- /*
- * To avoid calling this with sp_mutex held, we first mark the
- * sp_group as dead and then send the notification and then do
- * the real cleanup in sp_group_post_exit().
- */
- if (!is_alive)
- blocking_notifier_call_chain(&sp_notifier_chain, 0,
- mm->sp_group);
-}
-
void sp_group_post_exit(struct mm_struct *mm)
{
struct sp_proc_stat *stat;
bool unlock;
- if (!mm->sp_group)
+ if (!enable_ascend_share_pool || !mm->sp_group)
return;
spg_exit_lock(&unlock);
@@ -1139,8 +1125,6 @@ static void sp_munmap(struct mm_struct *mm, unsigned long addr,
{
int err;
- if (!mmget_not_zero(mm))
- return;
down_write(&mm->mmap_sem);
err = do_munmap(mm, addr, size, NULL);
@@ -1150,7 +1134,6 @@ static void sp_munmap(struct mm_struct *mm, unsigned long addr,
}
up_write(&mm->mmap_sem);
- mmput(mm);
}
/* The caller must hold sp_mutex. */
@@ -1183,8 +1166,6 @@ int sp_free(unsigned long addr)
check_interrupt_context();
- mutex_lock(&sp_mutex);
-
/*
* Access control: a share pool addr can only be freed by another task
* in the same spg or a kthread (such as buff_module_guard_work)
@@ -1217,6 +1198,8 @@ int sp_free(unsigned long addr)
sp_dump_stack();
+ down_read(&spa->spg->rw_lock);
+
__sp_free(spa->spg, spa->va_start, spa_size(spa), NULL);
/* Free the memory of the backing shmem or hugetlbfs */
@@ -1226,6 +1209,9 @@ int sp_free(unsigned long addr)
if (ret)
pr_err("share pool: sp free fallocate failed: %d\n", ret);
+ up_read(&spa->spg->rw_lock);
+
+ mutex_lock(&sp_mutex);
/* pointer stat may be invalid because of kthread buff_module_guard_work */
if (current->mm == NULL) {
kthread_stat.alloc_size -= spa->real_size;
@@ -1236,12 +1222,11 @@ int sp_free(unsigned long addr)
else
BUG();
}
+ mutex_unlock(&sp_mutex);
drop_spa:
__sp_area_drop(spa);
out:
- mutex_unlock(&sp_mutex);
-
sp_try_to_compact();
return ret;
}
@@ -1317,9 +1302,7 @@ void *sp_alloc(unsigned long size, unsigned long sp_flags, int spg_id)
if (sp_flags & SP_HUGEPAGE_ONLY)
sp_flags |= SP_HUGEPAGE;
- mutex_lock(&sp_mutex);
spg = __sp_find_spg(current->pid, SPG_ID_DEFAULT);
- mutex_unlock(&sp_mutex);
if (!spg) { /* DVPP pass through scene: first call sp_alloc() */
/* mdc scene hack */
if (enable_mdc_default_group)
@@ -1336,14 +1319,16 @@ void *sp_alloc(unsigned long size, unsigned long sp_flags, int spg_id)
ret);
return ERR_PTR(ret);
}
- mutex_lock(&sp_mutex);
spg = current->mm->sp_group;
} else { /* other scenes */
- mutex_lock(&sp_mutex);
if (spg_id != SPG_ID_DEFAULT) {
+ mutex_lock(&sp_mutex);
/* the caller should be a member of the sp group */
- if (spg != idr_find(&sp_group_idr, spg_id))
+ if (spg != idr_find(&sp_group_idr, spg_id)) {
+ mutex_unlock(&sp_mutex);
goto out;
+ }
+ mutex_unlock(&sp_mutex);
}
}
@@ -1352,6 +1337,7 @@ void *sp_alloc(unsigned long size, unsigned long sp_flags, int spg_id)
goto out;
}
+ down_read(&spg->rw_lock);
if (sp_flags & SP_HUGEPAGE) {
file = spg->file_hugetlb;
size_aligned = ALIGN(size, PMD_SIZE);
@@ -1376,31 +1362,25 @@ void *sp_alloc(unsigned long size, unsigned long sp_flags, int spg_id)
unsigned long populate = 0;
struct vm_area_struct *vma;
- if (!mmget_not_zero(mm))
- continue;
-
down_write(&mm->mmap_sem);
mmap_addr = sp_mmap(mm, file, spa, &populate);
if (IS_ERR_VALUE(mmap_addr)) {
up_write(&mm->mmap_sem);
p = (void *)mmap_addr;
__sp_free(spg, sp_addr, size_aligned, mm);
- mmput(mm);
pr_err("share pool: allocation sp mmap failed, ret %ld\n", mmap_addr);
goto out;
}
- p =(void *)mmap_addr; /* success */
+ p = (void *)mmap_addr; /* success */
if (populate == 0) {
up_write(&mm->mmap_sem);
- mmput(mm);
continue;
}
vma = find_vma(mm, sp_addr);
if (unlikely(!vma)) {
up_write(&mm->mmap_sem);
- mmput(mm);
pr_err("share pool: allocation failed due to find %pK vma failure\n",
(void *)sp_addr);
p = ERR_PTR(-EINVAL);
@@ -1461,24 +1441,22 @@ void *sp_alloc(unsigned long size, unsigned long sp_flags, int spg_id)
size_aligned = ALIGN(size, PAGE_SIZE);
sp_flags &= ~SP_HUGEPAGE;
__sp_area_drop(spa);
- mmput(mm);
goto try_again;
}
}
-
- mmput(mm);
break;
}
- mmput(mm);
}
+out:
+ up_read(&spg->rw_lock);
+
+ mutex_lock(&sp_mutex);
if (!IS_ERR(p)) {
stat = idr_find(&sp_stat_idr, current->mm->sp_stat_id);
if (stat)
stat->alloc_size += size_aligned;
}
-
-out:
mutex_unlock(&sp_mutex);
/* this will free spa if mmap failed */
@@ -1556,10 +1534,6 @@ static unsigned long sp_remap_kva_to_vma(unsigned long kva, struct sp_area *spa,
}
}
- if (!mmget_not_zero(mm)) {
- ret_addr = -ESPGMMEXIT;
- goto put_file;
- }
down_write(&mm->mmap_sem);
ret_addr = sp_mmap(mm, file, spa, &populate);
@@ -1604,8 +1578,7 @@ static unsigned long sp_remap_kva_to_vma(unsigned long kva, struct sp_area *spa,
put_mm:
up_write(&mm->mmap_sem);
- mmput(mm);
-put_file:
+
if (!spa->spg && file)
fput(file);
@@ -1769,10 +1742,12 @@ void *sp_make_share_k2u(unsigned long kva, unsigned long size,
*/
stat = sp_init_proc_stat(tsk, mm);
if (IS_ERR(stat)) {
+ mutex_unlock(&sp_mutex);
uva = stat;
pr_err("share pool: init proc stat failed, ret %lx\n", PTR_ERR(stat));
goto out_unlock;
}
+ mutex_unlock(&sp_mutex);
spg = __sp_find_spg(pid, SPG_ID_DEFAULT);
if (spg == NULL) {
@@ -1794,6 +1769,7 @@ void *sp_make_share_k2u(unsigned long kva, unsigned long size,
}
if (!vmalloc_area_set_flag(spa, kva_aligned, VM_SHAREPOOL)) {
+ up_read(&spg->rw_lock);
pr_err("share pool: %s: the kva %pK is not valid\n", __func__, (void *)kva_aligned);
goto out_drop_spa;
}
@@ -1808,12 +1784,14 @@ void *sp_make_share_k2u(unsigned long kva, unsigned long size,
goto out_unlock;
}
+ down_read(&spg->rw_lock);
if (enable_share_k2u_spg)
spa = sp_alloc_area(size_aligned, sp_flags, spg, SPA_TYPE_K2SPG);
else
spa = sp_alloc_area(size_aligned, sp_flags, NULL, SPA_TYPE_K2TASK);
if (IS_ERR(spa)) {
+ up_read(&spg->rw_lock);
if (printk_ratelimit())
pr_err("share pool: k2u(spg) failed due to alloc spa failure "
"(potential no enough virtual memory when -75): %ld\n",
@@ -1831,14 +1809,18 @@ void *sp_make_share_k2u(unsigned long kva, unsigned long size,
uva = sp_make_share_kva_to_spg(kva_aligned, spa, spg);
else
uva = sp_make_share_kva_to_task(kva_aligned, spa, mm);
+
+ up_read(&spg->rw_lock);
} else {
/* group is dead, return -ENODEV */
pr_err("share pool: failed to make k2u, sp group is dead\n");
}
if (!IS_ERR(uva)) {
+ mutex_lock(&sp_mutex);
uva = uva + (kva - kva_aligned);
stat->k2u_size += size_aligned;
+ mutex_unlock(&sp_mutex);
} else {
/* associate vma and spa */
if (!vmalloc_area_clr_flag(spa, kva_aligned, VM_SHAREPOOL))
@@ -1849,7 +1831,6 @@ void *sp_make_share_k2u(unsigned long kva, unsigned long size,
out_drop_spa:
__sp_area_drop(spa);
out_unlock:
- mutex_unlock(&sp_mutex);
mmput(mm);
out_put_task:
put_task_struct(tsk);
@@ -2144,7 +2125,6 @@ static int sp_unshare_uva(unsigned long uva, unsigned long size, int pid, int sp
unsigned int page_size;
struct sp_proc_stat *stat;
- mutex_lock(&sp_mutex);
/*
* at first we guess it's a hugepage addr
* we can tolerate at most PMD_SIZE or PAGE_SIZE which is matched in k2u
@@ -2157,7 +2137,7 @@ static int sp_unshare_uva(unsigned long uva, unsigned long size, int pid, int sp
if (printk_ratelimit())
pr_err("share pool: invalid input uva %pK in unshare uva\n",
(void *)uva);
- goto out_unlock;
+ goto out;
}
}
@@ -2259,10 +2239,14 @@ static int sp_unshare_uva(unsigned long uva, unsigned long size, int pid, int sp
goto out_drop_area;
}
+ down_read(&spa->spg->rw_lock);
__sp_free(spa->spg, uva_aligned, size_aligned, NULL);
+ up_read(&spa->spg->rw_lock);
}
sp_dump_stack();
+
+ mutex_lock(&sp_mutex);
/* pointer stat may be invalid because of kthread buff_module_guard_work */
if (current->mm == NULL) {
kthread_stat.k2u_size -= spa->real_size;
@@ -2273,6 +2257,7 @@ static int sp_unshare_uva(unsigned long uva, unsigned long size, int pid, int sp
else
WARN(1, "share_pool: %s: null process stat\n", __func__);
}
+ mutex_unlock(&sp_mutex);
out_clr_flag:
/* deassociate vma and spa */
@@ -2281,8 +2266,7 @@ static int sp_unshare_uva(unsigned long uva, unsigned long size, int pid, int sp
out_drop_area:
__sp_area_drop(spa);
-out_unlock:
- mutex_unlock(&sp_mutex);
+out:
return ret;
}
@@ -2446,7 +2430,7 @@ bool sp_config_dvpp_range(size_t start, size_t size, int device_id, int pid)
check_interrupt_context();
if (device_id < 0 || device_id >= MAX_DEVID || pid < 0 || size <= 0 ||
- size> MMAP_SHARE_POOL_16G_SIZE)
+ size > MMAP_SHARE_POOL_16G_SIZE)
return false;
mutex_lock(&sp_mutex);
@@ -2468,9 +2452,10 @@ EXPORT_SYMBOL_GPL(sp_config_dvpp_range);
/* Check whether the address belongs to the share pool. */
bool is_sharepool_addr(unsigned long addr)
{
- if (host_svm_sp_enable == false)
- return addr >= MMAP_SHARE_POOL_START && addr < (MMAP_SHARE_POOL_16G_START + MMAP_SHARE_POOL_16G_SIZE);
- return addr >= MMAP_SHARE_POOL_START && addr < MMAP_SHARE_POOL_END;
+ if (host_svm_sp_enable == false)
+ return addr >= MMAP_SHARE_POOL_START && addr < (MMAP_SHARE_POOL_16G_START + MMAP_SHARE_POOL_16G_SIZE);
+
+ return addr >= MMAP_SHARE_POOL_START && addr < MMAP_SHARE_POOL_END;
}
EXPORT_SYMBOL_GPL(is_sharepool_addr);
@@ -2515,7 +2500,8 @@ int proc_sp_group_state(struct seq_file *m, struct pid_namespace *ns,
return 0;
}
-static void rb_spa_stat_show(struct seq_file *seq) {
+static void rb_spa_stat_show(struct seq_file *seq)
+{
struct rb_node *node;
struct sp_area *spa;
@@ -2814,6 +2800,36 @@ vm_fault_t sharepool_no_page(struct mm_struct *mm,
}
EXPORT_SYMBOL(sharepool_no_page);
+#define MM_WOULD_FREE 2
+
+void sp_group_exit(struct mm_struct *mm)
+{
+ struct sp_group *spg = NULL;
+ bool is_alive = true, unlock;
+
+ if (!enable_ascend_share_pool)
+ return;
+
+ spg = mm->sp_group;
+
+ /*
+  * If mm_users is 2, the mm is ready to be freed because the last
+  * owner of this mm is exiting.
+  */
+ if (spg_valid(spg) && atomic_read(&mm->mm_users) == MM_WOULD_FREE) {
+ spg_exit_lock(&unlock);
+ down_write(&spg->rw_lock);
+ if (list_is_singular(&spg->procs))
+ is_alive = spg->is_alive = false;
+ list_del(&mm->sp_node);
+ up_write(&spg->rw_lock);
+ if (!is_alive)
+ blocking_notifier_call_chain(&sp_notifier_chain, 0,
+ mm->sp_group);
+ atomic_dec(&mm->mm_users);
+ spg_exit_unlock(unlock);
+ }
+}
+
struct page *sp_alloc_pages(struct vm_struct *area, gfp_t mask,
unsigned int page_order, int node)
{
--
2.25.1

[PATCH kernel-4.19 1/5] share_pool: Remove redundant null pointer check
by Yang Yingliang 14 Jan '21
From: Tang Yizhou <tangyizhou(a)huawei.com>
ascend inclusion
category: bugfix
bugzilla: 46925
CVE: NA
-------------------------------------------------
__sp_area_drop_locked() checks null pointer of spa, so remove null pointer
checks before calling __sp_area_drop_locked().
Reported-by: Cui Bixuan <cuibixuan(a)huawei.com>
Signed-off-by: Tang Yizhou <tangyizhou(a)huawei.com>
Reviewed-by: Ding Tianhong <dingtianhong(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
mm/share_pool.c | 12 ++++--------
1 file changed, 4 insertions(+), 8 deletions(-)
diff --git a/mm/share_pool.c b/mm/share_pool.c
index 4316625defac..2cfac4642e0b 100644
--- a/mm/share_pool.c
+++ b/mm/share_pool.c
@@ -443,8 +443,7 @@ static void sp_munmap_task_areas(struct mm_struct *mm, struct list_head *stop)
if (&spa->link == stop)
break;
- if (prev)
- __sp_area_drop_locked(prev);
+ __sp_area_drop_locked(prev);
prev = spa;
atomic_inc(&spa->use_count);
@@ -459,8 +458,7 @@ static void sp_munmap_task_areas(struct mm_struct *mm, struct list_head *stop)
spin_lock(&sp_area_lock);
}
- if (prev)
- __sp_area_drop_locked(prev);
+ __sp_area_drop_locked(prev);
spin_unlock(&sp_area_lock);
}
@@ -607,8 +605,7 @@ int sp_group_add_task(int pid, int spg_id)
struct file *file = spa_file(spa);
unsigned long addr;
- if (prev)
- __sp_area_drop_locked(prev);
+ __sp_area_drop_locked(prev);
prev = spa;
atomic_inc(&spa->use_count);
@@ -651,8 +648,7 @@ int sp_group_add_task(int pid, int spg_id)
spin_lock(&sp_area_lock);
}
- if (prev)
- __sp_area_drop_locked(prev);
+ __sp_area_drop_locked(prev);
spin_unlock(&sp_area_lock);
if (unlikely(ret)) {
--
2.25.1
Adrian Hunter (1):
scsi: ufs-pci: Ensure UFS device is in PowerDown mode for
suspend-to-disk ->poweroff()
Alexey Dobriyan (2):
proc: change ->nlink under proc_subdir_lock
proc: fix lookup in /proc/net subdirectories after setns(2)
Antoine Tenart (4):
net-sysfs: take the rtnl lock when storing xps_cpus
net-sysfs: take the rtnl lock when accessing xps_cpus_map and num_tc
net-sysfs: take the rtnl lock when storing xps_rxqs
net-sysfs: take the rtnl lock when accessing xps_rxqs_map and num_tc
Ard Biesheuvel (1):
crypto: ecdh - avoid buffer overflow in ecdh_set_secret()
Arnd Bergmann (1):
usb: gadget: select CONFIG_CRC32
Bard Liao (1):
Revert "device property: Keep secondary firmware node secondary by
type"
Bart Van Assche (2):
scsi: ide: Do not set the RQF_PREEMPT flag for sense requests
scsi: scsi_transport_spi: Set RQF_PM for domain validation commands
Bean Huo (1):
scsi: ufs: Fix wrong print message in dev_err()
Bjørn Mork (2):
net: usb: qmi_wwan: add Quectel EM160R-GL
USB: serial: option: add Quectel EM160R-GL
Chandana Kishori Chiluveru (1):
usb: gadget: configfs: Preserve function ordering after bind failure
Christophe JAILLET (1):
staging: mt7621-dma: Fix a resource leak in an error handling path
Cong Wang (1):
erspan: fix version 1 check in gre_parse_header()
Dan Carpenter (1):
atm: idt77252: call pci_disable_device() on error path
Dan Williams (1):
x86/mm: Fix leak of pmd ptlock
Daniel Palmer (1):
USB: serial: option: add LongSung M5710 module support
David Disseldorp (1):
scsi: target: Fix XCOPY NAA identifier lookup
Dexuan Cui (1):
video: hyperv_fb: Fix the mmap() regression for v5.4.y and older
Dinghao Liu (1):
net: ethernet: Fix memleak in ethoc_probe
Dominique Martinet (1):
kbuild: don't hardcode depmod path
Eddie Hung (1):
usb: gadget: configfs: Fix use-after-free issue with udc_name
Filipe Manana (1):
btrfs: send: fix wrong file path when there is an inode with a pending
rmdir
Florian Fainelli (1):
net: systemport: set dev->max_mtu to UMAC_MAX_MTU_SIZE
Florian Westphal (1):
netfilter: xt_RATEEST: reject non-null terminated string from
userspace
Greg Kroah-Hartman (1):
Linux 4.19.167
Grygorii Strashko (1):
net: ethernet: ti: cpts: fix ethtool output when no ptp_clock
registered
Guillaume Nault (1):
ipv4: Ignore ECN bits for fib lookups in fib_compute_spec_dst()
Hans de Goede (1):
Bluetooth: revert: hci_h5: close serdev device and free hu in h5_close
Heiner Kallweit (1):
r8169: work around power-saving bug on some chip versions
Huang Shijie (1):
lib/genalloc: fix the overflow when size is too big
Jeff Dike (1):
virtio_net: Fix recursive call to cpus_read_lock()
Jerome Brunet (1):
usb: gadget: f_uac2: reset wMaxPacketSize
Johan Hovold (4):
USB: serial: iuu_phoenix: fix DMA from stack
USB: yurex: fix control-URB timeout handling
USB: usblp: fix DMA to stack
USB: serial: keyspan_pda: remove unused variable
John Wang (1):
net/ncsi: Use real net-device for response handler
Kailang Yang (1):
ALSA: hda/realtek - Fix speaker volume control on Lenovo C940
Linus Torvalds (1):
depmod: handle the case of /sbin/depmod without /sbin in PATH
Manish Chopra (1):
qede: fix offload for IPIP tunnel packets
Manish Narani (1):
usb: gadget: u_ether: Fix MTU size mismatch with RX packet size
Michael Grzeschik (1):
USB: xhci: fix U1/U2 handling for hardware with XHCI_INTEL_HOST quirk
set
Paolo Bonzini (1):
KVM: x86: fix shift out of bounds reported by UBSAN
Randy Dunlap (2):
net: sched: prevent invalid Scell_log shift count
usb: usbip: vhci_hcd: protect shift size
Rasmus Villemoes (2):
ethernet: ucc_geth: fix use-after-free in ucc_geth_remove()
ethernet: ucc_geth: set dev->max_mtu to 1518
Roger Pau Monne (1):
xen/pvh: correctly setup the PV EFI interface for dom0
Roland Dreier (1):
CDC-NCM: remove "connected" log message
Sean Young (1):
USB: cdc-acm: blacklist another IR Droid device
Serge Semin (1):
usb: dwc3: ulpi: Use VStsDone to detect PHY regs access completion
Sriharsha Allenki (1):
usb: gadget: Fix spinlock lockup on usb_function_deactivate
Stefan Chulski (3):
net: mvpp2: Add TCAM entry to drop flow control pause frames
net: mvpp2: prs: fix PPPoE with ipv6 packet parse
net: mvpp2: Fix GoP port 3 Networking Complex Control configurations
Subash Abhinov Kasiviswanathan (1):
netfilter: x_tables: Update remaining dereference to RCU
Sylwester Dziedziuch (1):
i40e: Fix Error I40E_AQ_RC_EINVAL when removing VFs
Takashi Iwai (2):
ALSA: usb-audio: Fix UBSAN warnings for MIDI jacks
ALSA: hda/via: Fix runtime PM for Clevo W35xSS
Tetsuo Handa (1):
USB: cdc-wdm: Fix use after free in service_outstanding_interrupt().
Thinh Nguyen (1):
usb: uas: Add PNY USB Portable SSD to unusual_uas
Vasily Averin (1):
netfilter: ipset: fix shift-out-of-bounds in htable_bits()
Xie He (1):
net: hdlc_ppp: Fix issues when mod_timer is called while timer is
running
Yang Yingliang (1):
USB: gadget: legacy: fix return error code in acm_ms_bind()
Ying-Tsun Huang (1):
x86/mtrr: Correct the range check before performing MTRR type lookups
Yu Kuai (1):
usb: chipidea: ci_hdrc_imx: add missing put_device() call in
usbmisc_get_init_data()
Yunfeng Ye (1):
workqueue: Kick a worker based on the actual activation of delayed
works
Yunjian Wang (3):
tun: fix return value when the number of iovs exceeds MAX_SKB_FRAGS
net: hns: fix return value check in __lb_other_process()
vhost_net: fix ubuf refcount incorrectly when sendmsg fails
Zqiang (1):
usb: gadget: function: printer: Fix a memory leak for interface
descriptor
bo liu (1):
ALSA: hda/conexant: add a new hda codec CX11970
taehyun.cho (1):
usb: gadget: enable super speed plus
Makefile | 4 +-
arch/x86/kernel/cpu/mtrr/generic.c | 6 +-
arch/x86/kvm/mmu.h | 2 +-
arch/x86/mm/pgtable.c | 2 +
arch/x86/xen/efi.c | 12 +-
arch/x86/xen/enlighten_pv.c | 2 +-
arch/x86/xen/enlighten_pvh.c | 4 +
arch/x86/xen/xen-ops.h | 4 +-
crypto/ecdh.c | 3 +-
drivers/atm/idt77252.c | 2 +-
drivers/base/core.c | 2 +-
drivers/bluetooth/hci_h5.c | 8 +-
drivers/ide/ide-atapi.c | 1 -
drivers/ide/ide-io.c | 5 -
drivers/net/ethernet/broadcom/bcmsysport.c | 1 +
drivers/net/ethernet/ethoc.c | 3 +-
drivers/net/ethernet/freescale/ucc_geth.c | 3 +-
.../net/ethernet/hisilicon/hns/hns_ethtool.c | 4 +
drivers/net/ethernet/intel/i40e/i40e.h | 3 +
drivers/net/ethernet/intel/i40e/i40e_main.c | 10 ++
.../ethernet/intel/i40e/i40e_virtchnl_pf.c | 4 +-
.../net/ethernet/marvell/mvpp2/mvpp2_main.c | 2 +-
.../net/ethernet/marvell/mvpp2/mvpp2_prs.c | 38 +++++-
.../net/ethernet/marvell/mvpp2/mvpp2_prs.h | 2 +-
drivers/net/ethernet/qlogic/qede/qede_fp.c | 5 +
drivers/net/ethernet/realtek/r8169.c | 6 +-
drivers/net/ethernet/ti/cpts.c | 2 +
drivers/net/tun.c | 2 +-
drivers/net/usb/cdc_ncm.c | 3 -
drivers/net/usb/qmi_wwan.c | 1 +
drivers/net/virtio_net.c | 12 +-
drivers/net/wan/hdlc_ppp.c | 7 ++
drivers/scsi/scsi_transport_spi.c | 27 ++--
drivers/scsi/ufs/ufshcd-pci.c | 34 ++++-
drivers/scsi/ufs/ufshcd.c | 2 +-
drivers/staging/mt7621-dma/mtk-hsdma.c | 4 +-
drivers/target/target_core_xcopy.c | 119 ++++++++++--------
drivers/target/target_core_xcopy.h | 1 +
drivers/usb/chipidea/ci_hdrc_imx.c | 6 +-
drivers/usb/class/cdc-acm.c | 4 +
drivers/usb/class/cdc-wdm.c | 16 ++-
drivers/usb/class/usblp.c | 21 +++-
drivers/usb/dwc3/core.h | 1 +
drivers/usb/dwc3/ulpi.c | 2 +-
drivers/usb/gadget/Kconfig | 2 +
drivers/usb/gadget/composite.c | 10 +-
drivers/usb/gadget/configfs.c | 19 ++-
drivers/usb/gadget/function/f_printer.c | 1 +
drivers/usb/gadget/function/f_uac2.c | 69 +++++++---
drivers/usb/gadget/function/u_ether.c | 9 +-
drivers/usb/gadget/legacy/acm_ms.c | 4 +-
drivers/usb/host/xhci.c | 24 ++--
drivers/usb/misc/yurex.c | 3 +
drivers/usb/serial/iuu_phoenix.c | 20 ++-
drivers/usb/serial/keyspan_pda.c | 2 -
drivers/usb/serial/option.c | 3 +
drivers/usb/storage/unusual_uas.h | 7 ++
drivers/usb/usbip/vhci_hcd.c | 2 +
drivers/vhost/net.c | 6 +-
drivers/video/fbdev/hyperv_fb.c | 6 +-
fs/btrfs/send.c | 49 +++++---
fs/proc/generic.c | 55 +++++---
fs/proc/internal.h | 7 ++
fs/proc/proc_net.c | 16 ---
include/linux/proc_fs.h | 8 +-
include/net/red.h | 4 +-
kernel/workqueue.c | 13 +-
lib/genalloc.c | 25 ++--
net/core/net-sysfs.c | 65 ++++++++--
net/ipv4/fib_frontend.c | 2 +-
net/ipv4/gre_demux.c | 2 +-
net/ipv4/netfilter/arp_tables.c | 2 +-
net/ipv4/netfilter/ip_tables.c | 2 +-
net/ipv6/netfilter/ip6_tables.c | 2 +-
net/ncsi/ncsi-rsp.c | 2 +-
net/netfilter/ipset/ip_set_hash_gen.h | 20 +--
net/netfilter/xt_RATEEST.c | 3 +
net/sched/sch_choke.c | 2 +-
net/sched/sch_gred.c | 2 +-
net/sched/sch_red.c | 2 +-
net/sched/sch_sfq.c | 2 +-
scripts/depmod.sh | 2 +
sound/pci/hda/hda_intel.c | 2 -
sound/pci/hda/patch_conexant.c | 1 +
sound/pci/hda/patch_realtek.c | 6 +
sound/pci/hda/patch_via.c | 13 ++
sound/usb/midi.c | 4 +
87 files changed, 624 insertions(+), 278 deletions(-)
--
2.25.1
Felix Fietkau (1):
Revert "mtd: spinand: Fix OOB read"
Greg Kroah-Hartman (1):
Linux 4.19.166
Jonathan Cameron (2):
iio:imu:bmi160: Fix alignment and data leak issues
iio:magnetometer:mag3110: Fix alignment and data leak issues.
Josh Poimboeuf (1):
kdev_t: always inline major/minor helper functions
Tudor Ambarus (1):
dmaengine: at_hdmac: Substitute kzalloc with kmalloc
Yu Kuai (2):
dmaengine: at_hdmac: add missing put_device() call in at_dma_xlate()
dmaengine: at_hdmac: add missing kfree() call in at_dma_xlate()
Makefile | 2 +-
drivers/dma/at_hdmac.c | 11 ++++++++---
drivers/iio/imu/bmi160/bmi160_core.c | 13 +++++++++----
drivers/iio/magnetometer/mag3110.c | 13 +++++++++----
drivers/mtd/nand/spi/core.c | 4 ----
include/linux/kdev_t.h | 22 +++++++++++-----------
6 files changed, 38 insertions(+), 27 deletions(-)
--
2.25.1

11 Jan '21
From: Fang Lijun <fanglijun3(a)huawei.com>
ascend inclusion
category: bugfix
bugzilla: NA
CVE: NA
-------------------------------------------------
The vm_flags will be changed by MAP_CHECKNODE,
so we must pass it as an output argument.
Fixes: 66bd45db2b03 ("arm64/ascend: mm: Fix arm32 compile warnings")
Signed-off-by: Fang Lijun <fanglijun3(a)huawei.com>
Reviewed-by: Ding Tianhong <dingtianhong(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
include/linux/mman.h | 7 ++++---
mm/mmap.c | 2 +-
2 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/include/linux/mman.h b/include/linux/mman.h
index d35d984c058c..a8ea591faed7 100644
--- a/include/linux/mman.h
+++ b/include/linux/mman.h
@@ -76,15 +76,16 @@ static inline int dvpp_mmap_zone(unsigned long addr) { return 0; }
#ifdef CONFIG_COHERENT_DEVICE
#define CHECKNODE_BITS 48
#define CHECKNODE_MASK (~((_AC(1, UL) << CHECKNODE_BITS) - 1))
-static inline void set_vm_checknode(vm_flags_t vm_flags, unsigned long flags)
+static inline void set_vm_checknode(vm_flags_t *vm_flags, unsigned long flags)
{
if (is_set_cdmmask())
- vm_flags |= VM_CHECKNODE | ((((flags >> MAP_HUGE_SHIFT) &
+ *vm_flags |= VM_CHECKNODE | ((((flags >> MAP_HUGE_SHIFT) &
MAP_HUGE_MASK) << CHECKNODE_BITS) & CHECKNODE_MASK);
}
#else
#define CHECKNODE_BITS (0)
-static inline void set_vm_checknode(vm_flags_t vm_flags, unsigned long flags) {}
+static inline void set_vm_checknode(vm_flags_t *vm_flags, unsigned long flags)
+{}
#endif
/*
diff --git a/mm/mmap.c b/mm/mmap.c
index 9dfef56dd0e8..e0399b087430 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1579,7 +1579,7 @@ unsigned long __do_mmap(struct mm_struct *mm, struct file *file,
* hugetlbfs file mmap will use it to check node
*/
if (flags & MAP_CHECKNODE)
- set_vm_checknode(vm_flags, flags);
+ set_vm_checknode(&vm_flags, flags);
addr = __mmap_region(mm, file, addr, len, vm_flags, pgoff, uf);
if (!IS_ERR_VALUE(addr) &&
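The underlying mistake is the classic pass-by-value pitfall: the callee updated
its own copy of vm_flags, so the caller never saw the change. A tiny standalone
illustration (userspace C, names hypothetical):

#include <stdio.h>

typedef unsigned long vm_flags_t;

/* Broken: modifies a local copy; the caller's flags are untouched. */
static void set_flag_by_value(vm_flags_t flags) { flags |= 0x1; }

/* Fixed: takes a pointer, as the patch does for set_vm_checknode(). */
static void set_flag_by_pointer(vm_flags_t *flags) { *flags |= 0x1; }

int main(void)
{
	vm_flags_t f = 0;

	set_flag_by_value(f);
	printf("by value:   %lx\n", f);		/* still 0 */
	set_flag_by_pointer(&f);
	printf("by pointer: %lx\n", f);		/* now 1 */
	return 0;
}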
--
2.25.1

[PATCH openEuler-1.0-LTS] HID: core: Sanitize event code and type when mapping input
by Yang Yingliang 08 Jan '21
From: Marc Zyngier <maz(a)kernel.org>
stable inclusion
from linux-4.19.144
commit a47b8511d90528c77346597e2012100dfc28cd8c
CVE: CVE-2020-0465
--------------------------------
commit 35556bed836f8dc07ac55f69c8d17dce3e7f0e25 upstream.
When calling into hid_map_usage(), the passed event code is
blindly stored as is, even if it doesn't fit in the associated bitmap.
This event code can come from a variety of sources, including devices
masquerading as input devices, only a bit more "programmable".
Instead of taking the event code at face value, check that it actually
fits the corresponding bitmap, and if it doesn't:
- spit out a warning so that we know which device is acting up
- NULLify the bitmap pointer so that we catch unexpected uses
Code paths that can make use of untrusted inputs can now check
that the mapping was indeed correct and bail out if not.
Cc: stable(a)vger.kernel.org
Signed-off-by: Marc Zyngier <maz(a)kernel.org>
Signed-off-by: Benjamin Tissoires <benjamin.tissoires(a)gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/hid/hid-input.c | 4 ++++
drivers/hid/hid-multitouch.c | 2 ++
include/linux/hid.h | 42 +++++++++++++++++++++++++-----------
3 files changed, 35 insertions(+), 13 deletions(-)
diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
index dbb0cbe65fc9..0062b37ef98f 100644
--- a/drivers/hid/hid-input.c
+++ b/drivers/hid/hid-input.c
@@ -1125,6 +1125,10 @@ static void hidinput_configure_usage(struct hid_input *hidinput, struct hid_fiel
}
mapped:
+ /* Mapping failed, bail out */
+ if (!bit)
+ return;
+
if (device->driver->input_mapped &&
device->driver->input_mapped(device, hidinput, field, usage,
&bit, &max) < 0) {
diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
index f9167d0e095c..dfb2548e0052 100644
--- a/drivers/hid/hid-multitouch.c
+++ b/drivers/hid/hid-multitouch.c
@@ -841,6 +841,8 @@ static int mt_touch_input_mapping(struct hid_device *hdev, struct hid_input *hi,
code = BTN_0 + ((usage->hid - 1) & HID_USAGE);
hid_map_usage(hi, usage, bit, max, EV_KEY, code);
+ if (!*bit)
+ return -1;
input_set_capability(hi->input, EV_KEY, code);
return 1;
diff --git a/include/linux/hid.h b/include/linux/hid.h
index 8b3e5e8a72fb..bbbe6c0e0e26 100644
--- a/include/linux/hid.h
+++ b/include/linux/hid.h
@@ -956,34 +956,49 @@ static inline void hid_device_io_stop(struct hid_device *hid) {
* @max: maximal valid usage->code to consider later (out parameter)
* @type: input event type (EV_KEY, EV_REL, ...)
* @c: code which corresponds to this usage and type
+ *
+ * The value pointed to by @bit will be set to NULL if either @type is
+ * an unhandled event type, or if @c is out of range for @type. This
+ * can be used as an error condition.
*/
static inline void hid_map_usage(struct hid_input *hidinput,
struct hid_usage *usage, unsigned long **bit, int *max,
- __u8 type, __u16 c)
+ __u8 type, unsigned int c)
{
struct input_dev *input = hidinput->input;
-
- usage->type = type;
- usage->code = c;
+ unsigned long *bmap = NULL;
+ unsigned int limit = 0;
switch (type) {
case EV_ABS:
- *bit = input->absbit;
- *max = ABS_MAX;
+ bmap = input->absbit;
+ limit = ABS_MAX;
break;
case EV_REL:
- *bit = input->relbit;
- *max = REL_MAX;
+ bmap = input->relbit;
+ limit = REL_MAX;
break;
case EV_KEY:
- *bit = input->keybit;
- *max = KEY_MAX;
+ bmap = input->keybit;
+ limit = KEY_MAX;
break;
case EV_LED:
- *bit = input->ledbit;
- *max = LED_MAX;
+ bmap = input->ledbit;
+ limit = LED_MAX;
break;
}
+
+ if (unlikely(c > limit || !bmap)) {
+ pr_warn_ratelimited("%s: Invalid code %d type %d\n",
+ input->name, c, type);
+ *bit = NULL;
+ return;
+ }
+
+ usage->type = type;
+ usage->code = c;
+ *max = limit;
+ *bit = bmap;
}
/**
@@ -997,7 +1012,8 @@ static inline void hid_map_usage_clear(struct hid_input *hidinput,
__u8 type, __u16 c)
{
hid_map_usage(hidinput, usage, bit, max, type, c);
- clear_bit(c, *bit);
+ if (*bit)
+ clear_bit(usage->code, *bit);
}
/**
--
2.25.1

[PATCH openEuler-1.0-LTS] cfg80211: add missing policy for NL80211_ATTR_STATUS_CODE
by Yang Yingliang 08 Jan '21
From: Sergey Matyukevich <sergey.matyukevich.os(a)quantenna.com>
stable inclusion
from linux-4.19.108
commit 0fb31bd53a5e27394916758173eb748c5e0dbd47
CVE: CVE-2020-27068
--------------------------------
[ Upstream commit ea75080110a4c1fa011b0a73cb8f42227143ee3e ]
The nl80211_policy is missing for NL80211_ATTR_STATUS_CODE attribute.
As a result, for strictly validated commands, it's assumed to not be
supported.
Signed-off-by: Sergey Matyukevich <sergey.matyukevich.os(a)quantenna.com>
Link: https://lore.kernel.org/r/20200213131608.10541-2-sergey.matyukevich.os@quan…
Signed-off-by: Johannes Berg <johannes.berg(a)intel.com>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
net/wireless/nl80211.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
index 5075fd293feb..de9580f13914 100644
--- a/net/wireless/nl80211.c
+++ b/net/wireless/nl80211.c
@@ -323,6 +323,7 @@ static const struct nla_policy nl80211_policy[NUM_NL80211_ATTR] = {
[NL80211_ATTR_CONTROL_PORT_NO_ENCRYPT] = { .type = NLA_FLAG },
[NL80211_ATTR_CONTROL_PORT_OVER_NL80211] = { .type = NLA_FLAG },
[NL80211_ATTR_PRIVACY] = { .type = NLA_FLAG },
+ [NL80211_ATTR_STATUS_CODE] = { .type = NLA_U16 },
[NL80211_ATTR_CIPHER_SUITE_GROUP] = { .type = NLA_U32 },
[NL80211_ATTR_WPA_VERSIONS] = { .type = NLA_U32 },
[NL80211_ATTR_PID] = { .type = NLA_U32 },
--
2.25.1

[PATCH openEuler-1.0-LTS] speakup: Reject setting the speakup line discipline outside of speakup
by Yang Yingliang 08 Jan '21
From: Samuel Thibault <samuel.thibault(a)ens-lyon.org>
mainline inclusion
from mainline-v5.10-rc7
commit f0992098cadb4c9c6a00703b66cafe604e178fea
category: bugfix
bugzilla: NA
CVE: CVE-2020-27830
--------------------------------
Speakup exposing a line discipline allows userland to try to use it,
while it is deemed to be useless, and thus uselessly exposes potential
bugs. One of them is simply that in such a case if the line sends data,
spk_ttyio_receive_buf2 is called and crashes since spk_ttyio_synth
is NULL.
This change restricts the use of the speakup line discipline to
speakup drivers, thus avoiding such kind of issues altogether.
Cc: stable(a)vger.kernel.org
Reported-by: Shisong Qin <qinshisong1205(a)gmail.com>
Signed-off-by: Samuel Thibault <samuel.thibault(a)ens-lyon.org>
Tested-by: Shisong Qin <qinshisong1205(a)gmail.com>
Link: https://lore.kernel.org/r/20201129193523.hm3f6n5xrn6fiyyc@function
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Conflicts:
drivers/accessibility/speakup/spk_ttyio.c
[yyl: spk_ttyio.c is in drivers/staging/speakup/ in kernel-4.19]
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/staging/speakup/spk_ttyio.c | 36 ++++++++++++++++++-----------
1 file changed, 23 insertions(+), 13 deletions(-)
diff --git a/drivers/staging/speakup/spk_ttyio.c b/drivers/staging/speakup/spk_ttyio.c
index 6c754ddf1257..8bc7db55daeb 100644
--- a/drivers/staging/speakup/spk_ttyio.c
+++ b/drivers/staging/speakup/spk_ttyio.c
@@ -47,27 +47,21 @@ static int spk_ttyio_ldisc_open(struct tty_struct *tty)
{
struct spk_ldisc_data *ldisc_data;
+ if (tty != speakup_tty)
+ /* Somebody tried to use this line discipline outside speakup */
+ return -ENODEV;
+
if (tty->ops->write == NULL)
return -EOPNOTSUPP;
- mutex_lock(&speakup_tty_mutex);
- if (speakup_tty) {
- mutex_unlock(&speakup_tty_mutex);
- return -EBUSY;
- }
- speakup_tty = tty;
ldisc_data = kmalloc(sizeof(struct spk_ldisc_data), GFP_KERNEL);
- if (!ldisc_data) {
- speakup_tty = NULL;
- mutex_unlock(&speakup_tty_mutex);
+ if (!ldisc_data)
return -ENOMEM;
- }
sema_init(&ldisc_data->sem, 0);
ldisc_data->buf_free = true;
- speakup_tty->disc_data = ldisc_data;
- mutex_unlock(&speakup_tty_mutex);
+ tty->disc_data = ldisc_data;
return 0;
}
@@ -187,9 +181,25 @@ static int spk_ttyio_initialise_ldisc(struct spk_synth *synth)
tty_unlock(tty);
+ mutex_lock(&speakup_tty_mutex);
+ speakup_tty = tty;
ret = tty_set_ldisc(tty, N_SPEAKUP);
if (ret)
- pr_err("speakup: Failed to set N_SPEAKUP on tty\n");
+ speakup_tty = NULL;
+ mutex_unlock(&speakup_tty_mutex);
+
+ if (!ret)
+ /* Success */
+ return 0;
+
+ pr_err("speakup: Failed to set N_SPEAKUP on tty\n");
+
+ tty_lock(tty);
+ if (tty->ops->close)
+ tty->ops->close(tty, NULL);
+ tty_unlock(tty);
+
+ tty_kclose(tty);
return ret;
}
--
2.25.1

08 Jan '21
From: Jann Horn <jannh(a)google.com>
mainline inclusion
from mainline-v5.10-rc7
commit 54ffccbf053b5b6ca4f6e45094b942fab92a25fc
category: bugfix
bugzilla: NA
CVE: CVE-2020-29661
--------------------------------
tiocspgrp() takes two tty_struct pointers: One to the tty that userspace
passed to ioctl() (`tty`) and one to the TTY being changed (`real_tty`).
These pointers are different when ioctl() is called with a master fd.
To properly lock real_tty->pgrp, we must take real_tty->ctrl_lock.
This bug makes it possible for racing ioctl(TIOCSPGRP, ...) calls on
both sides of a PTY pair to corrupt the refcount of `struct pid`,
leading to use-after-free errors.
Fixes: 47f86834bbd4 ("redo locking of tty->pgrp")
CC: stable(a)kernel.org
Signed-off-by: Jann Horn <jannh(a)google.com>
Reviewed-by: Jiri Slaby <jirislaby(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/tty/tty_jobctrl.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/tty/tty_jobctrl.c b/drivers/tty/tty_jobctrl.c
index c4ecd66fafef..a42dec3c95d0 100644
--- a/drivers/tty/tty_jobctrl.c
+++ b/drivers/tty/tty_jobctrl.c
@@ -494,10 +494,10 @@ static int tiocspgrp(struct tty_struct *tty, struct tty_struct *real_tty, pid_t
if (session_of_pgrp(pgrp) != task_session(current))
goto out_unlock;
retval = 0;
- spin_lock_irq(&tty->ctrl_lock);
+ spin_lock_irq(&real_tty->ctrl_lock);
put_pid(real_tty->pgrp);
real_tty->pgrp = get_pid(pgrp);
- spin_unlock_irq(&tty->ctrl_lock);
+ spin_unlock_irq(&real_tty->ctrl_lock);
out_unlock:
rcu_read_unlock();
return retval;
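The rule the fix restores: take the lock of the object being modified, not the
lock of the handle the caller arrived through. A userspace analogue of the
two-handle situation (pthreads; all names illustrative, not kernel API):

#include <pthread.h>
#include <stdio.h>

/* Each endpoint of a pair has its own lock, like tty->ctrl_lock. */
struct endpoint {
	pthread_mutex_t ctrl_lock;
	int pgrp_refs;		/* stands in for the struct pid refcount */
};

static struct endpoint master, slave;

/* Buggy: locks the handle the caller came through, so two callers
 * entering via different handles race on slave.pgrp_refs. */
static void set_pgrp_buggy(struct endpoint *caller, struct endpoint *real)
{
	pthread_mutex_lock(&caller->ctrl_lock);
	real->pgrp_refs++;	/* unprotected when caller != real */
	pthread_mutex_unlock(&caller->ctrl_lock);
}

/* Fixed, mirroring the patch: lock the object being modified. */
static void set_pgrp_fixed(struct endpoint *caller, struct endpoint *real)
{
	(void)caller;
	pthread_mutex_lock(&real->ctrl_lock);
	real->pgrp_refs++;
	pthread_mutex_unlock(&real->ctrl_lock);
}

int main(void)
{
	pthread_mutex_init(&master.ctrl_lock, NULL);
	pthread_mutex_init(&slave.ctrl_lock, NULL);
	set_pgrp_fixed(&master, &slave);	/* ioctl() via the master fd */
	printf("slave refs: %d\n", slave.pgrp_refs);
	return 0;
}

The buggy variant is kept only to show the shape of the race; compiled with
-lpthread, the program prints slave refs: 1.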
--
2.25.1

[PATCH openEuler-1.0-LTS] ALSA: rawmidi: Fix racy buffer resize under concurrent accesses
by Yang Yingliang 08 Jan '21
From: Takashi Iwai <tiwai(a)suse.de>
stable inclusion
from linux-4.19.124
commit a507658fdb2ad8ca282b0eb42f2a40b805deb1e6
CVE: CVE-2020-27786
--------------------------------
commit c1f6e3c818dd734c30f6a7eeebf232ba2cf3181d upstream.
The rawmidi core allows user to resize the runtime buffer via ioctl,
and this may lead to UAF when performed during concurrent reads or
writes: the read/write functions unlock the runtime lock temporarily
during copying form/to user-space, and that's the race window.
This patch fixes the hole by introducing a reference counter for the
runtime buffer read/write access and returns -EBUSY error when the
resize is performed concurrently against read/write.
Note that the ref count field is a simple integer instead of
refcount_t here, since all contexts accessing the buffer are
basically protected with a spinlock, hence we need no expensive atomic
ops. Also, note that this busy check is needed only against read /
write functions, and not in receive/transmit callbacks; the race can
happen only at the spinlock hole mentioned in the above, while the
whole function is protected for receive / transmit callbacks.
Reported-by: butt3rflyh4ck <butterflyhuangxx(a)gmail.com>
Cc: <stable(a)vger.kernel.org>
Link: https://lore.kernel.org/r/CAFcO6XMWpUVK_yzzCpp8_XP7+=oUpQvuBeCbMffEDkpe8jWr…
Link: https://lore.kernel.org/r/s5heerw3r5z.wl-tiwai@suse.de
Signed-off-by: Takashi Iwai <tiwai(a)suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
include/sound/rawmidi.h | 1 +
sound/core/rawmidi.c | 31 +++++++++++++++++++++++++++----
2 files changed, 28 insertions(+), 4 deletions(-)
diff --git a/include/sound/rawmidi.h b/include/sound/rawmidi.h
index 6665cb29e1a2..7a908a81cef4 100644
--- a/include/sound/rawmidi.h
+++ b/include/sound/rawmidi.h
@@ -76,6 +76,7 @@ struct snd_rawmidi_runtime {
size_t avail_min; /* min avail for wakeup */
size_t avail; /* max used buffer for wakeup */
size_t xruns; /* over/underruns counter */
+ int buffer_ref; /* buffer reference count */
/* misc */
spinlock_t lock;
wait_queue_head_t sleep;
diff --git a/sound/core/rawmidi.c b/sound/core/rawmidi.c
index a52d6d16efc4..9b26973fe697 100644
--- a/sound/core/rawmidi.c
+++ b/sound/core/rawmidi.c
@@ -112,6 +112,17 @@ static void snd_rawmidi_input_event_work(struct work_struct *work)
runtime->event(runtime->substream);
}
+/* buffer refcount management: call with runtime->lock held */
+static inline void snd_rawmidi_buffer_ref(struct snd_rawmidi_runtime *runtime)
+{
+ runtime->buffer_ref++;
+}
+
+static inline void snd_rawmidi_buffer_unref(struct snd_rawmidi_runtime *runtime)
+{
+ runtime->buffer_ref--;
+}
+
static int snd_rawmidi_runtime_create(struct snd_rawmidi_substream *substream)
{
struct snd_rawmidi_runtime *runtime;
@@ -661,6 +672,11 @@ static int resize_runtime_buffer(struct snd_rawmidi_runtime *runtime,
if (!newbuf)
return -ENOMEM;
spin_lock_irq(&runtime->lock);
+ if (runtime->buffer_ref) {
+ spin_unlock_irq(&runtime->lock);
+ kvfree(newbuf);
+ return -EBUSY;
+ }
oldbuf = runtime->buffer;
runtime->buffer = newbuf;
runtime->buffer_size = params->buffer_size;
@@ -960,8 +976,10 @@ static long snd_rawmidi_kernel_read1(struct snd_rawmidi_substream *substream,
long result = 0, count1;
struct snd_rawmidi_runtime *runtime = substream->runtime;
unsigned long appl_ptr;
+ int err = 0;
spin_lock_irqsave(&runtime->lock, flags);
+ snd_rawmidi_buffer_ref(runtime);
while (count > 0 && runtime->avail) {
count1 = runtime->buffer_size - runtime->appl_ptr;
if (count1 > count)
@@ -980,16 +998,19 @@ static long snd_rawmidi_kernel_read1(struct snd_rawmidi_substream *substream,
if (userbuf) {
spin_unlock_irqrestore(&runtime->lock, flags);
if (copy_to_user(userbuf + result,
- runtime->buffer + appl_ptr, count1)) {
- return result > 0 ? result : -EFAULT;
- }
+ runtime->buffer + appl_ptr, count1))
+ err = -EFAULT;
spin_lock_irqsave(&runtime->lock, flags);
+ if (err)
+ goto out;
}
result += count1;
count -= count1;
}
+ out:
+ snd_rawmidi_buffer_unref(runtime);
spin_unlock_irqrestore(&runtime->lock, flags);
- return result;
+ return result > 0 ? result : err;
}
long snd_rawmidi_kernel_read(struct snd_rawmidi_substream *substream,
@@ -1261,6 +1282,7 @@ static long snd_rawmidi_kernel_write1(struct snd_rawmidi_substream *substream,
return -EAGAIN;
}
}
+ snd_rawmidi_buffer_ref(runtime);
while (count > 0 && runtime->avail > 0) {
count1 = runtime->buffer_size - runtime->appl_ptr;
if (count1 > count)
@@ -1292,6 +1314,7 @@ static long snd_rawmidi_kernel_write1(struct snd_rawmidi_substream *substream,
}
__end:
count1 = runtime->avail < runtime->buffer_size;
+ snd_rawmidi_buffer_unref(runtime);
spin_unlock_irqrestore(&runtime->lock, flags);
if (count1)
snd_rawmidi_output_trigger(substream, 1);
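The fix is a lightweight pin/unpin pattern: readers and writers bump a plain
counter under the spinlock before dropping it for the user copy, and the resize
path refuses to swap buffers while the counter is nonzero. A simplified
fragment paraphrasing the diff above (not buildable on its own):

	/* reader/writer side: pin across the unlocked copy window */
	spin_lock_irqsave(&runtime->lock, flags);
	snd_rawmidi_buffer_ref(runtime);	/* runtime->buffer_ref++ */
	spin_unlock_irqrestore(&runtime->lock, flags);
	if (copy_to_user(userbuf, runtime->buffer + appl_ptr, count1))
		err = -EFAULT;
	spin_lock_irqsave(&runtime->lock, flags);
	snd_rawmidi_buffer_unref(runtime);	/* runtime->buffer_ref-- */
	spin_unlock_irqrestore(&runtime->lock, flags);

	/* resize side: bail out while any pin is held */
	spin_lock_irq(&runtime->lock);
	if (runtime->buffer_ref) {
		spin_unlock_irq(&runtime->lock);
		kvfree(newbuf);
		return -EBUSY;
	}

Note the real code takes the pin once around the whole read/write loop rather
than per copy; the fragment only shows where the counter brackets the race
window.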
--
2.25.1
hulk inclusion
category: bugfix
bugzilla: 46923
CVE: NA
----------------------------------------------
Enable KTASK in vfio, if the BAR size of some straight through
equipment device is too large, it will cause guest crash on booting.
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
init/Kconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/init/Kconfig b/init/Kconfig
index 6880b55901bb..71b09d998413 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -352,7 +352,7 @@ config AUDIT_TREE
config KTASK
bool "Multithread CPU-intensive kernel work"
depends on SMP
- default y
+ default n
help
Parallelize CPU-intensive kernel work. This feature is designed for
big machines that can take advantage of their extra CPUs to speed up
--
2.25.1