On Fri, 24 Sept 2021 at 10:33, Yunsheng Lin <linyunsheng@huawei.com> wrote:
On 2021/9/23 19:47, Ilias Apalodimas wrote:
On Thu, 23 Sept 2021 at 14:24, Yunsheng Lin <linyunsheng@huawei.com> wrote:
On 2021/9/23 16:33, Ilias Apalodimas wrote:
On Wed, Sep 22, 2021 at 05:41:27PM +0800, Yunsheng Lin wrote:
As the pp page for an skb frag is always a head page, make sure skb_pp_recycle() passes a head page, so that compound_head() does not need to be called for the skb frag page case.
Doesn't that mostly rely on the driver (i.e. what's passed to skb_frag_set_page())? None of the current netstack code assumes bv_page is the head page of a compound page. Since our page_pool allocator will allocate compound pages for order > 0, why should we rely on it?
The page pool alloc function returns a 'struct page *' to the caller, which is the head page of a compound page for order > 0, so I assume the caller will pass that to skb_frag_set_page().
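For example, the rx pattern I have in mind looks roughly like the below. This is only a sketch, not taken from any specific driver; 'pool', 'i', 'offset' and 'size' are made-up names for illustration:

	struct page *page;

	/* page_pool hands back the head page, even for order > 0 */
	page = page_pool_dev_alloc_pages(pool);
	if (unlikely(!page))
		return -ENOMEM;

	/* ... dma-map the page / fill the rx descriptor ... */
	skb_frag_set_page(skb, i, page);
	skb_frag_off_set(&skb_shinfo(skb)->frags[i], offset);
	skb_frag_size_set(&skb_shinfo(skb)->frags[i], size);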
Yea, that's exactly the assumption I was afraid of. Sure, not passing the head page might seem weird atm, and the assumption stands for now, but the point is we shouldn't blow up the entire network stack if someone does that eventually.
For a non-pp page, I assume it is ok whether the page is a head page or a tail page, as pp_magic is not set to PP_SIGNATURE for either of them.
Yea, that's true. Although we removed the check when coalescing recyclable and non-recyclable SKBs, the next patch first checks the signature before trying to do anything with the skb.
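i.e. the check ends up being something along these lines (a sketch of the masking only; the low two bits are the ones the comment in your patch mentions):

	/* pp_magic is OR'ed with PP_SIGNATURE on allocation, so mask out
	 * bit 0 (head page of a compound page) and bit 1 (pfmemalloc
	 * page) before comparing against the signature.
	 */
	if (unlikely((page->pp_magic & ~0x3UL) != PP_SIGNATURE))
		return false;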
Or should we play it safe here and do the same trick as skb_free_head() does in patch 6?
I don't think the &1 will even be measurable, so I'd suggest just dropping this and playing it safe?
I am not sure what '&1' means above.
I meant the check compound_head() does before deciding on the head page.
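For reference, compound_head() is roughly the following, and the '&1' is the tail-page check it does:

	static inline struct page *compound_head(struct page *page)
	{
		unsigned long head = READ_ONCE(page->compound_head);

		/* Bit 0 set means 'page' is a tail page and the rest of
		 * the word points at the head page; otherwise 'page' is
		 * already the head.
		 */
		if (unlikely(head & 1))
			return (struct page *)(head - 1);
		return page;
	}

So it's a single load plus a bit test, which is why I doubt it would show up in measurements.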
The one thing I am not sure about regarding the trick in patch 6 is whether __page_frag_cache_drain() is the right API to use here; I used it because it is the only API that expects a head page.
Yea, it seemed a bit funny to me at first, until I figured out what exactly it was doing.
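FWIW, what it boils down to is roughly the below (paraphrasing mm/page_alloc.c): drop 'count' references and free the compound page once the last one is gone.

	void __page_frag_cache_drain(struct page *page, unsigned int count)
	{
		VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);

		/* Drop 'count' references; free the whole compound page
		 * when the refcount hits zero.
		 */
		if (page_ref_sub_and_test(page, count))
			free_the_page(page, compound_order(page));
	}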
Regards /Ilias
Cheers /Ilias
Thanks /Ilias
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
 include/linux/skbuff.h | 2 +-
 net/core/page_pool.c   | 2 --
 2 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 6bdb0db3e825..35eebc2310a5 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -4722,7 +4722,7 @@ static inline bool skb_pp_recycle(struct sk_buff *skb, void *data)
 {
 	if (!IS_ENABLED(CONFIG_PAGE_POOL) || !skb->pp_recycle)
 		return false;
-	return page_pool_return_skb_page(virt_to_page(data));
+	return page_pool_return_skb_page(virt_to_head_page(data));
 }
 
 #endif /* __KERNEL__ */
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index f7e71dcb6a2e..357fb53343a0 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -742,8 +742,6 @@ bool page_pool_return_skb_page(struct page *page)
 {
 	struct page_pool *pp;
 
-	page = compound_head(page);
-
 	/* page->pp_magic is OR'ed with PP_SIGNATURE after the allocation
 	 * in order to preserve any existing bits, such as bit 0 for the
 	 * head page of compound page and bit 1 for pfmemalloc page, so
-- 
2.33.0