From: Naoya Horiguchi <naoya.horiguchi(a)nec.com>
mainline inclusion
from mainline-v5.13
commit ea6d0630100b285f059d0a8d8e86f38a46407536
category: bugfix
bugzilla: 188222, https://gitee.com/openeuler/kernel/issues/I69SHF
CVE: NA
--------------------------------
Currently me_huge_page() temporarily unlocks the page to perform some actions,
then locks it again later. My testcase (which calls hard-offline on some tail
page in a hugetlb page, then accesses an address in the hugetlb range) showed
that the page allocation code detects this page lock on a buddy page and
prints a "BUG: Bad page state" message.
check_new_page_bad() does not consider a page with __PG_HWPOISON set as a bad
page, so this flag works as a kind of filter, but the filtering doesn't help
in this case because the "bad page" is not the actual hwpoisoned page. So
stop locking the page again. The actions to be taken depend on the page type
of the error, so the page unlocking should be done in the ->action()
callbacks. So make that the assumed convention and change all existing
callbacks accordingly.
Link: https://lkml.kernel.org/r/20210609072029.74645-1-nao.horiguchi@gmail.com
Fixes: commit 78bb920344b8 ("mm: hwpoison: dissolve in-use hugepage in unrecoverable memory error")
Signed-off-by: Naoya Horiguchi <naoya.horiguchi(a)nec.com>
Cc: Oscar Salvador <osalvador(a)suse.de>
Cc: Michal Hocko <mhocko(a)suse.com>
Cc: Tony Luck <tony.luck(a)intel.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar(a)linux.vnet.ibm.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Ma Wupeng <mawupeng1(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13(a)huawei.com>
---
mm/memory-failure.c | 44 ++++++++++++++++++++++++++++++--------------
1 file changed, 30 insertions(+), 14 deletions(-)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 55c175f57223..9a816fdf812d 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -662,6 +662,7 @@ static int truncate_error_page(struct page *p, unsigned long pfn,
*/
static int me_kernel(struct page *p, unsigned long pfn)
{
+ unlock_page(p);
return MF_IGNORED;
}
@@ -671,6 +672,7 @@ static int me_kernel(struct page *p, unsigned long pfn)
static int me_unknown(struct page *p, unsigned long pfn)
{
pr_err("Memory failure: %#lx: Unknown page state\n", pfn);
+ unlock_page(p);
return MF_FAILED;
}
@@ -679,6 +681,7 @@ static int me_unknown(struct page *p, unsigned long pfn)
*/
static int me_pagecache_clean(struct page *p, unsigned long pfn)
{
+ int ret;
struct address_space *mapping;
delete_from_lru_cache(p);
@@ -687,8 +690,10 @@ static int me_pagecache_clean(struct page *p, unsigned long pfn)
* For anonymous pages we're done the only reference left
* should be the one m_f() holds.
*/
- if (PageAnon(p))
- return MF_RECOVERED;
+ if (PageAnon(p)) {
+ ret = MF_RECOVERED;
+ goto out;
+ }
/*
* Now truncate the page in the page cache. This is really
@@ -702,7 +707,8 @@ static int me_pagecache_clean(struct page *p, unsigned long pfn)
/*
* Page has been teared down in the meanwhile
*/
- return MF_FAILED;
+ ret = MF_FAILED;
+ goto out;
}
/*
@@ -710,7 +716,10 @@ static int me_pagecache_clean(struct page *p, unsigned long pfn)
*
* Open: to take i_mutex or not for this? Right now we don't.
*/
- return truncate_error_page(p, pfn, mapping);
+ ret = truncate_error_page(p, pfn, mapping);
+out:
+ unlock_page(p);
+ return ret;
}
/*
@@ -786,24 +795,26 @@ static int me_pagecache_dirty(struct page *p, unsigned long pfn)
*/
static int me_swapcache_dirty(struct page *p, unsigned long pfn)
{
+ int ret;
+
ClearPageDirty(p);
/* Trigger EIO in shmem: */
ClearPageUptodate(p);
- if (!delete_from_lru_cache(p))
- return MF_DELAYED;
- else
- return MF_FAILED;
+ ret = delete_from_lru_cache(p) ? MF_FAILED : MF_DELAYED;
+ unlock_page(p);
+ return ret;
}
static int me_swapcache_clean(struct page *p, unsigned long pfn)
{
+ int ret;
+
delete_from_swap_cache(p);
- if (!delete_from_lru_cache(p))
- return MF_RECOVERED;
- else
- return MF_FAILED;
+ ret = delete_from_lru_cache(p) ? MF_FAILED : MF_RECOVERED;
+ unlock_page(p);
+ return ret;
}
/*
@@ -824,6 +835,7 @@ static int me_huge_page(struct page *p, unsigned long pfn)
mapping = page_mapping(hpage);
if (mapping) {
res = truncate_error_page(hpage, pfn, mapping);
+ unlock_page(hpage);
} else {
res = MF_FAILED;
unlock_page(hpage);
@@ -838,7 +850,6 @@ static int me_huge_page(struct page *p, unsigned long pfn)
page_ref_inc(p);
res = MF_RECOVERED;
}
- lock_page(hpage);
}
return res;
@@ -871,6 +882,8 @@ static struct page_state {
unsigned long mask;
unsigned long res;
enum mf_action_page_type type;
+
+ /* Callback ->action() has to unlock the relevant page inside it. */
int (*action)(struct page *p, unsigned long pfn);
} error_states[] = {
{ reserved, reserved, MF_MSG_KERNEL, me_kernel },
@@ -935,6 +948,7 @@ static int page_action(struct page_state *ps, struct page *p,
int result;
int count;
+ /* page p should be unlocked after returning from ps->action(). */
result = ps->action(p, pfn);
count = page_count(p) - 1;
@@ -1235,7 +1249,7 @@ static int memory_failure_hugetlb(unsigned long pfn, int flags)
goto out;
}
- res = identify_page_state(pfn, p, page_flags);
+ return identify_page_state(pfn, p, page_flags);
out:
unlock_page(head);
return res;
@@ -1533,6 +1547,8 @@ int memory_failure(unsigned long pfn, int flags)
identify_page_state:
res = identify_page_state(pfn, p, page_flags);
+ mutex_unlock(&mf_mutex);
+ return res;
unlock_page:
unlock_page(p);
unlock_mutex:
--
2.25.1
From: Alan Stern <stern(a)rowland.harvard.edu>
mainline inclusion
from mainline-v6.0-rc4
commit 9c6d778800b921bde3bff3cff5003d1650f942d1
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I675RE
CVE: CVE-2022-4662
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
Automatic kernel fuzzing revealed a recursive locking violation in
usb-storage:
============================================
WARNING: possible recursive locking detected
5.18.0 #3 Not tainted
--------------------------------------------
kworker/1:3/1205 is trying to acquire lock:
ffff888018638db8 (&us_interface_key[i]){+.+.}-{3:3}, at:
usb_stor_pre_reset+0x35/0x40 drivers/usb/storage/usb.c:230
but task is already holding lock:
ffff888018638db8 (&us_interface_key[i]){+.+.}-{3:3}, at:
usb_stor_pre_reset+0x35/0x40 drivers/usb/storage/usb.c:230
...
stack backtrace:
CPU: 1 PID: 1205 Comm: kworker/1:3 Not tainted 5.18.0 #3
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS
1.13.0-1ubuntu1.1 04/01/2014
Workqueue: usb_hub_wq hub_event
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
print_deadlock_bug kernel/locking/lockdep.c:2988 [inline]
check_deadlock kernel/locking/lockdep.c:3031 [inline]
validate_chain kernel/locking/lockdep.c:3816 [inline]
__lock_acquire.cold+0x152/0x3ca kernel/locking/lockdep.c:5053
lock_acquire kernel/locking/lockdep.c:5665 [inline]
lock_acquire+0x1ab/0x520 kernel/locking/lockdep.c:5630
__mutex_lock_common kernel/locking/mutex.c:603 [inline]
__mutex_lock+0x14f/0x1610 kernel/locking/mutex.c:747
usb_stor_pre_reset+0x35/0x40 drivers/usb/storage/usb.c:230
usb_reset_device+0x37d/0x9a0 drivers/usb/core/hub.c:6109
r871xu_dev_remove+0x21a/0x270 drivers/staging/rtl8712/usb_intf.c:622
usb_unbind_interface+0x1bd/0x890 drivers/usb/core/driver.c:458
device_remove drivers/base/dd.c:545 [inline]
device_remove+0x11f/0x170 drivers/base/dd.c:537
__device_release_driver drivers/base/dd.c:1222 [inline]
device_release_driver_internal+0x1a7/0x2f0 drivers/base/dd.c:1248
usb_driver_release_interface+0x102/0x180 drivers/usb/core/driver.c:627
usb_forced_unbind_intf+0x4d/0xa0 drivers/usb/core/driver.c:1118
usb_reset_device+0x39b/0x9a0 drivers/usb/core/hub.c:6114
This turned out not to be an error in usb-storage but rather a nested
device reset attempt. That is, as the rtl8712 driver was being
unbound from a composite device in preparation for an unrelated USB
reset (that driver does not have pre_reset or post_reset callbacks),
its ->remove routine called usb_reset_device() -- thus nesting one
reset call within another.
Performing a reset as part of disconnect processing is a questionable
practice at best. However, the bug report points out that the USB
core does not have any protection against nested resets. Adding a
reset_in_progress flag and testing it will prevent such errors in the
future.
Link: https://lore.kernel.org/all/CAB7eexKUpvX-JNiLzhXBDWgfg2T9e9_0Tw4HQ6keN==voR…
Cc: stable(a)vger.kernel.org
Reported-and-tested-by: Rondreis <linhaoguo86(a)gmail.com>
Signed-off-by: Alan Stern <stern(a)rowland.harvard.edu>
Link: https://lore.kernel.org/r/YwkflDxvg0KWqyZK@rowland.harvard.edu
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Yuyao Lin <linyuyao1(a)huawei.com>
Reviewed-by: Xiongfeng Wang <wangxiongfeng2(a)huawei.com>
Signed-off-by: Jialin Zhang <zhangjialin11(a)huawei.com>
---
drivers/usb/core/hub.c | 10 ++++++++++
include/linux/usb.h | 2 ++
2 files changed, 12 insertions(+)
diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
index 18ee3914b468..53b3d77fba6a 100644
--- a/drivers/usb/core/hub.c
+++ b/drivers/usb/core/hub.c
@@ -5967,6 +5967,11 @@ static int usb_reset_and_verify_device(struct usb_device *udev)
* the reset is over (using their post_reset method).
*
* Return: The same as for usb_reset_and_verify_device().
+ * However, if a reset is already in progress (for instance, if a
+ * driver doesn't have pre_ or post_reset() callbacks, and while
+ * being unbound or re-bound during the ongoing reset its disconnect()
+ * or probe() routine tries to perform a second, nested reset), the
+ * routine returns -EINPROGRESS.
*
* Note:
* The caller must own the device lock. For example, it's safe to use
@@ -6000,6 +6005,10 @@ int usb_reset_device(struct usb_device *udev)
return -EISDIR;
}
+ if (udev->reset_in_progress)
+ return -EINPROGRESS;
+ udev->reset_in_progress = 1;
+
port_dev = hub->ports[udev->portnum - 1];
/*
@@ -6064,6 +6073,7 @@ int usb_reset_device(struct usb_device *udev)
usb_autosuspend_device(udev);
memalloc_noio_restore(noio_flag);
+ udev->reset_in_progress = 0;
return ret;
}
EXPORT_SYMBOL_GPL(usb_reset_device);
diff --git a/include/linux/usb.h b/include/linux/usb.h
index d6a41841b93e..a093667991bb 100644
--- a/include/linux/usb.h
+++ b/include/linux/usb.h
@@ -580,6 +580,7 @@ struct usb3_lpm_parameters {
* @devaddr: device address, XHCI: assigned by HW, others: same as devnum
* @can_submit: URBs may be submitted
* @persist_enabled: USB_PERSIST enabled for this device
+ * @reset_in_progress: the device is being reset
* @have_langid: whether string_langid is valid
* @authorized: policy has said we can use it;
* (user space) policy determines if we authorize this device to be
@@ -665,6 +666,7 @@ struct usb_device {
unsigned can_submit:1;
unsigned persist_enabled:1;
+ unsigned reset_in_progress:1;
unsigned have_langid:1;
unsigned authorized:1;
unsigned authenticated:1;
--
2.25.1
From: Wang ShaoBo <bobo.shaobowang(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I47W8L
CVE: NA
---------------------------
Set CONFIG_ARCH_LLC_128_LINE_SIZE to n by default.
Signed-off-by: Wang ShaoBo <bobo.shaobowang(a)huawei.com>
Reviewed-by: Xiongfeng Wang <wangxiongfeng2(a)huawei.com>
Signed-off-by: Laibin Qiu <qiulaibin(a)huawei.com>
---
arch/arm64/Kconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 927d6666770e..003e333ad864 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -928,7 +928,7 @@ config ARCH_HAS_CACHE_LINE_SIZE
config ARCH_LLC_128_LINE_SIZE
bool "Force 128 bytes alignment for fitting LLC cacheline"
depends on ARM64
- default y
+ default n
help
As specific machine's LLC cacheline size may be up to
128 bytes, gaining performance improvement from fitting
--
2.25.1
From: Wang ShaoBo <bobo.shaobowang(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I47W8L
CVE: NA
---------------------------
We detected a performance degradation when running Unixbench and used bisection
to locate patch 7e66740ad725 ("MPAM / ACPI: Refactoring MPAM init process and
set MPAM ACPI as entrance"). Comparing the two commits df5defd901ff ("KVM: X86:
MMU: Use the correct inherited permissions to get shadow page") and
ac4dbb7554ef ("ACPI 6.x: Add definitions for MPAM table"), we get the following
test results:
CMD: ./Run -c xx context1
RESULT:
+-------------UnixBench context1-----------+
+---------+--------------+-----------------+
+ + ac4dbb7554ef + df5defd901ff +
+---------+--------------+---------+-------+
+ Cores + Score + Score +
+---------+--------------+-----------------+
+ 1 + 522.8 + 535.7 +
+---------+--------------+-----------------+
+ 24 + 11231.5 + 12111.2 +
+---------+--------------+-----------------+
+ 48 + 8535.1 + 8745.1 +
+---------+--------------+-----------------+
+ 72 + 10821.9 + 10343.8 +
+---------+--------------+-----------------+
+ 96 + 15238.5 + 42947.8 +
+---------+--------------+-----------------+
We found a clear difference in latency sampling when using the perf tool:
HEAD:ac4dbb7554ef HEAD:df5defd901ff
45.18% [kernel] [k] ktime_get_coarse_real_ts64 -> 1.78% [kernel] [k] ktime_get_coarse_real_ts64
...
65.87 │ dmb ishld //smp_rmb()
Through ftrace we obtained the call trace and counted the calls to
ktime_get_coarse_real_ts64(), which frequently accesses tk_core->seq and
tk_core->timekeeper->tkr_mono:
- 48.86% [kernel] [k] ktime_get_coarse_real_ts64
- 5.76% ktime_get_coarse_real_ts64 #about 111437657 times per 10 seconds
- 14.70% __audit_syscall_entry
syscall_trace_enter
el0_svc_common
el0_svc_handler
+ el0_svc
- 2.85% current_time
So this may be performance degradation caused by interference between accesses
to different fields. We compared the .bss and .data sections of the two versions:
HEAD:ac4dbb7554ef
`->
ffff00000962e680 l O .bss 0000000000000110 tk_core
ffff000009355680 l O .data 0000000000000078 tk_fast_mono
ffff0000093557a0 l O .data 0000000000000090 dummy_clock
ffff000009355700 l O .data 0000000000000078 tk_fast_raw
ffff000009355778 l O .data 0000000000000028 timekeeping_syscore_ops
ffff00000962e640 l O .bss 0000000000000008 cycles_at_suspend
HEAD:df5defd901ff
`->
ffff00000957dbc0 l O .bss 0000000000000110 tk_core
ffff0000092b4e80 l O .data 0000000000000078 tk_fast_mono
ffff0000092b4fa0 l O .data 0000000000000090 dummy_clock
ffff0000092b4f00 l O .data 0000000000000078 tk_fast_raw
ffff0000092b4f78 l O .data 0000000000000028 timekeeping_syscore_ops
ffff00000957db80 l O .bss 0000000000000008 cycles_at_suspend
Comparing tk_core's address in the two versions: ffff00000962e680
(ac4dbb7554ef) is 128-byte aligned, while ffff00000957dbc0 (df5defd901ff) is
only 64-byte aligned, so the memory layout of tk_core has undergone subtle
changes:
HEAD:ac4dbb7554ef
`-> |<---------former 64 bytes--------->|<-------------latter 64 bytes----------->|
0xffff00000962e680_>|<-seq 8Bytes->|<-tkr_mono 56Bytes->|<-tkr_raw 56Bytes->|<-xtime_sec 8Bytes->|
0xffff00000962e6c0_>...
HEAD:df5defd901ff
`-> |<------former 64 bytes---->|<---------latter 64 bytes---------->|
0xffff00000957dbc0_>|<-Other variables 64Bytes->|<-seq 8Bytes->|<-tkr_mono 56Bytes->|
0xffff00000957dc00_>..
We verified that the tkr_raw and xtime_sec fields interfere strongly with the
seq and tkr_mono fields because of frequent load/store operations; this causes
the well-known false-sharing problem.
We add a 64-byte padding field to tk_core, reserved for future use, and keep
tk_core 128-byte aligned. This prevents changes in the way tk_core's layout is
stored; with this solution, the layout of tk_core is always:
crash> struct -o tk_core_t
struct tk_core_t {
[0] u64 padding[8];
[64] seqcount_t seq;
[72] struct timekeeper timekeeper;
}
SIZE: 336
crash> struct -o timekeeper
struct timekeeper {
[0] struct tk_read_base tkr_mono;
[56] struct tk_read_base tkr_raw;
[112] u64 xtime_sec;
[120] unsigned long ktime_sec;
...
}
SIZE: 264
After applying our solution:
+---------+--------------+
+ + Our solution +
+---------+--------------+
+ Cores + Score +
+---------+--------------+
+ 1 + 548.9 +
+---------+--------------+
+ 24 + 11018.3 +
+---------+--------------+
+ 48 + 8938.2 +
+---------+--------------+
+ 72 + 14610.7 +
+---------+--------------+
+ 96 + 40811.7 +
+---------+--------------+
Signed-off-by: Wang ShaoBo <bobo.shaobowang(a)huawei.com>
Reviewed-by: Xiongfeng Wang <wangxiongfeng2(a)huawei.com>
Signed-off-by: Laibin Qiu <qiulaibin(a)huawei.com>
---
arch/arm64/Kconfig | 9 +++++++++
arch/arm64/include/asm/cache.h | 6 ++++++
kernel/time/timekeeping.c | 7 +++++++
3 files changed, 22 insertions(+)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 0ad6ce436355..927d6666770e 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -925,6 +925,15 @@ config ARCH_WANT_HUGE_PMD_SHARE
config ARCH_HAS_CACHE_LINE_SIZE
def_bool y
+config ARCH_LLC_128_LINE_SIZE
+ bool "Force 128 bytes alignment for fitting LLC cacheline"
+ depends on ARM64
+ default y
+ help
+ As specific machine's LLC cacheline size may be up to
+ 128 bytes, gaining performance improvement from fitting
+ 128 Bytes LLC cache aligned.
+
config SECCOMP
bool "Enable seccomp to safely compute untrusted bytecode"
---help---
diff --git a/arch/arm64/include/asm/cache.h b/arch/arm64/include/asm/cache.h
index d1c46d75885f..ccb013f822ba 100644
--- a/arch/arm64/include/asm/cache.h
+++ b/arch/arm64/include/asm/cache.h
@@ -40,6 +40,12 @@
#define L1_CACHE_SHIFT (6)
#define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)
+#ifdef CONFIG_ARCH_LLC_128_LINE_SIZE
+#ifndef ____cacheline_aligned_128
+#define ____cacheline_aligned_128 __attribute__((__aligned__(128)))
+#endif
+#endif
+
#define CLIDR_LOUU_SHIFT 27
#define CLIDR_LOC_SHIFT 24
diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index f246818e35db..0ebfe476b6b4 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -48,9 +48,16 @@ enum timekeeping_adv_mode {
* cache line.
*/
static struct {
+#ifdef CONFIG_ARCH_LLC_128_LINE_SIZE
+ u64 padding[8];
+#endif
seqcount_t seq;
struct timekeeper timekeeper;
+#ifdef CONFIG_ARCH_LLC_128_LINE_SIZE
+} tk_core ____cacheline_aligned_128 = {
+#else
} tk_core ____cacheline_aligned = {
+#endif
.seq = SEQCNT_ZERO(tk_core.seq),
};
--
2.25.1
From: Guo Xuenan <guoxuenan(a)huawei.com>
mainline inclusion
from mainline-v6.1-rc4
commit 8c25febf23963431686f04874b96321288504127
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4KIAO
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
xfs_btree_check_block() contains debugging knobs. With XFS_DEBUG enabled,
turning on the debugging knob can trigger the assert in
xfs_btree_islastblock(); the test script is as follows:
while true
do
mount $disk $mountpoint
fsstress -d $testdir -l 0 -n 10000 -p 4 >/dev/null
echo 1 > /sys/fs/xfs/sda/errortag/btree_chk_sblk
sleep 10
umount $mountpoint
done
Kick off fsstress and only *then* turn on the debugging knob. If it
happens that the knob gets turned on after the cntbt lookup succeeds
but before the call to xfs_btree_islastblock, then we *can* end up in
the situation where a previously checked btree block suddenly starts
returning EFSCORRUPTED from xfs_btree_check_block. Kaboom.
Darrick gave a very detailed explanation as follows:
Looking back at commit 27d9ee577dcce, I think the point of all this was
to make sure that the cursor has actually performed a lookup, and that
the btree block at whatever level we're asking about is ok.
If the caller hasn't ever done a lookup, the bc_levels array will be
empty, so cur->bc_levels[level].bp pointer will be NULL. The call to
xfs_btree_get_block will crash anyway, so the "ASSERT(block);" part is
pointless.
If the caller did a lookup but the lookup failed due to block
corruption, the corresponding cur->bc_levels[level].bp pointer will also
be NULL, and we'll still crash. The "ASSERT(xfs_btree_check_block);"
logic is also unnecessary.
If the cursor level points to an inode root, the block buffer will be
incore, so it had better always be consistent.
If the caller ignores a failed lookup after a successful one and calls
this function, the cursor state is garbage and the assert wouldn't have
tripped anyway. So get rid of the assert.
Fixes: 27d9ee577dcc ("xfs: actually check xfs_btree_check_block return in xfs_btree_islastblock")
Signed-off-by: Guo Xuenan <guoxuenan(a)huawei.com>
Reviewed-by: Darrick J. Wong <djwong(a)kernel.org>
Signed-off-by: Darrick J. Wong <djwong(a)kernel.org>
Signed-off-by: Guo Xuenan <guoxuenan(a)huawei.com>
Reviewed-by: Zhang Yi <yi.zhang(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
fs/xfs/libxfs/xfs_btree.h | 1 -
1 file changed, 1 deletion(-)
diff --git a/fs/xfs/libxfs/xfs_btree.h b/fs/xfs/libxfs/xfs_btree.h
index 10e50cbacacf..ba11d2a4b686 100644
--- a/fs/xfs/libxfs/xfs_btree.h
+++ b/fs/xfs/libxfs/xfs_btree.h
@@ -523,7 +523,6 @@ xfs_btree_islastblock(
struct xfs_buf *bp;
block = xfs_btree_get_block(cur, level, &bp);
- ASSERT(block && xfs_btree_check_block(cur, block, level, bp) == 0);
if (cur->bc_flags & XFS_BTREE_LONG_PTRS)
return block->bb_u.l.bb_rightsib == cpu_to_be64(NULLFSBLOCK);
--
2.20.1