mailweb.openeuler.org
Kernel

kernel@openeuler.org

October 2021

  • 26 participants
  • 189 discussions
[PATCH openEuler-1.0-LTS] soc: aspeed: lpc-ctrl: Fix boundary check for mmap
by Yang Yingliang 18 Oct '21

From: Iwona Winiarska <iwona.winiarska(a)intel.com>

stable inclusion
from linux-4.19.207
commit 9c8891b638319ddba9cfa330247922cd960c95b0
CVE: CVE-2021-42252

--------------------------------

commit b49a0e69a7b1a68c8d3f64097d06dabb770fec96 upstream.

The check mixes pages (vm_pgoff) with bytes (vm_start, vm_end) on one
side of the comparison, and uses resource address (rather than just the
resource size) on the other side of the comparison. This can allow
malicious userspace to easily bypass the boundary check and map pages
that are located outside memory-region reserved by the driver.

Fixes: 6c4e97678501 ("drivers/misc: Add Aspeed LPC control driver")
Cc: stable(a)vger.kernel.org
Signed-off-by: Iwona Winiarska <iwona.winiarska(a)intel.com>
Reviewed-by: Andrew Jeffery <andrew(a)aj.id.au>
Tested-by: Andrew Jeffery <andrew(a)aj.id.au>
Reviewed-by: Joel Stanley <joel(a)aj.id.au>
Signed-off-by: Joel Stanley <joel(a)jms.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
 drivers/misc/aspeed-lpc-ctrl.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/misc/aspeed-lpc-ctrl.c b/drivers/misc/aspeed-lpc-ctrl.c
index a024f8042259a..870ab0dfcde06 100644
--- a/drivers/misc/aspeed-lpc-ctrl.c
+++ b/drivers/misc/aspeed-lpc-ctrl.c
@@ -50,7 +50,7 @@ static int aspeed_lpc_ctrl_mmap(struct file *file, struct vm_area_struct *vma)
 	unsigned long vsize = vma->vm_end - vma->vm_start;
 	pgprot_t prot = vma->vm_page_prot;
 
-	if (vma->vm_pgoff + vsize > lpc_ctrl->mem_base + lpc_ctrl->mem_size)
+	if (vma->vm_pgoff + vma_pages(vma) > lpc_ctrl->mem_size >> PAGE_SHIFT)
 		return -EINVAL;
 
 	/* ast2400/2500 AHB accesses are not cache coherent */
-- 
2.25.1
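The flaw in the pre-patch check is that it compares a page offset against byte addresses, and bounds it by base + size instead of just the size. A small userspace sketch makes the bypass concrete; `mem_base`, `mem_size`, and the return codes here are hypothetical stand-ins, not the driver's real values:

```c
#include <assert.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Hypothetical stand-ins for the driver's reserved memory region. */
static unsigned long mem_base = 0x98000000UL;	/* resource address */
static unsigned long mem_size = 0x100000UL;	/* 1 MiB reserved   */

/* Pre-patch shape: mixes a page offset (pgoff) with byte quantities
 * (vsize), and compares against base + size instead of just the size.
 * Because mem_base is huge, almost any pgoff slips through. */
static int mmap_check_broken(unsigned long pgoff, unsigned long vsize)
{
	return (pgoff + vsize > mem_base + mem_size) ? -1 : 0;
}

/* Patched shape: everything counted in pages, bounded by the region
 * size alone, as in the vma_pages()/PAGE_SHIFT version. */
static int mmap_check_fixed(unsigned long pgoff, unsigned long vsize)
{
	unsigned long vpages = vsize >> PAGE_SHIFT;

	return (pgoff + vpages > (mem_size >> PAGE_SHIFT)) ? -1 : 0;
}
```

With these numbers, an offset of 0x1000 pages (16 MiB past a 1 MiB region) passes the broken check but is rejected by the fixed one.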
[PATCH openEuler-1.0-LTS 1/2] mmap: userswap: fix memory leak in do_mmap
by Yang Yingliang 18 Oct '21

From: Xiongfeng Wang <wangxiongfeng2(a)huawei.com>

hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4AHP2
CVE: NA

-------------------------------------------------

When userswap is enabled, the memory pointed to by 'pages' is not freed
on the abnormal branch in do_mmap(). To fix the issue and keep do_mmap()
mostly unchanged, we rename do_mmap() to __do_mmap() and extract the
memory allocation and free code out of __do_mmap(). When __do_mmap()
returns an error value, we go to the error label to free the memory.

Signed-off-by: Xiongfeng Wang <wangxiongfeng2(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
 mm/mmap.c | 406 ++++++++++++++++++++++++++++--------------------------
 1 file changed, 211 insertions(+), 195 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 90cff771af771..074fbc0c559a7 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1373,173 +1373,8 @@ int unregister_mmap_notifier(struct notifier_block *nb)
 EXPORT_SYMBOL_GPL(unregister_mmap_notifier);
 #endif
 
-#ifdef CONFIG_USERSWAP
-/*
- * Check if pages between 'addr ~ addr+len' can be user swapped. If so, get
- * the reference of the pages and return the pages through input parameters
- * 'ppages'.
- */ -int pages_can_be_swapped(struct mm_struct *mm, unsigned long addr, - unsigned long len, struct page ***ppages) -{ - struct vm_area_struct *vma; - struct page *page = NULL; - struct page **pages = NULL; - unsigned long addr_start, addr_end; - unsigned long ret; - int i, page_num = 0; - - pages = kmalloc(sizeof(struct page *) * (len / PAGE_SIZE), GFP_KERNEL); - if (!pages) - return -ENOMEM; - - addr_start = addr; - addr_end = addr + len; - while (addr < addr_end) { - vma = find_vma(mm, addr); - if (!vma || !vma_is_anonymous(vma) || - (vma->vm_flags & VM_LOCKED) || vma->vm_file - || (vma->vm_flags & VM_STACK) || (vma->vm_flags & (VM_IO | VM_PFNMAP))) { - ret = -EINVAL; - goto out; - } - if (!(vma->vm_flags & VM_UFFD_MISSING)) { - ret = -EAGAIN; - goto out; - } -get_again: - /* follow_page will inc page ref, dec the ref after we remap the page */ - page = follow_page(vma, addr, FOLL_GET); - if (IS_ERR_OR_NULL(page)) { - ret = -ENODEV; - goto out; - } - pages[page_num] = page; - page_num++; - if (!PageAnon(page) || !PageSwapBacked(page) || PageHuge(page) || PageSwapCache(page)) { - ret = -EINVAL; - goto out; - } else if (PageTransCompound(page)) { - if (trylock_page(page)) { - if (!split_huge_page(page)) { - put_page(page); - page_num--; - unlock_page(page); - goto get_again; - } else { - unlock_page(page); - ret = -EINVAL; - goto out; - } - } else { - ret = -EINVAL; - goto out; - } - } - if (page_mapcount(page) > 1 || page_mapcount(page) + 1 != page_count(page)) { - ret = -EBUSY; - goto out; - } - addr += PAGE_SIZE; - } - - *ppages = pages; - return 0; - -out: - for (i = 0; i < page_num; i++) - put_page(pages[i]); - if (pages) - kfree(pages); - *ppages = NULL; - return ret; -} - -/* - * In uswap situation, we use the bit 0 of the returned address to indicate - * whether the pages are dirty. 
- */ -#define USWAP_PAGES_DIRTY 1 - -/* unmap the pages between 'addr ~ addr+len' and remap them to a new address */ -unsigned long do_user_swap(struct mm_struct *mm, unsigned long addr_start, - unsigned long len, struct page **pages, unsigned long new_addr) -{ - struct vm_area_struct *vma; - struct page *page; - pmd_t *pmd; - pte_t *pte, old_pte; - spinlock_t *ptl; - unsigned long addr, addr_end; - bool pages_dirty = false; - int i, err; - - addr_end = addr_start + len; - lru_add_drain(); - mmu_notifier_invalidate_range_start(mm, addr_start, addr_end); - addr = addr_start; - i = 0; - while (addr < addr_end) { - page = pages[i]; - vma = find_vma(mm, addr); - if (!vma) { - mmu_notifier_invalidate_range_end(mm, addr_start, addr_end); - WARN_ON("find_vma failed\n"); - return -EINVAL; - } - pmd = mm_find_pmd(mm, addr); - if (!pmd) { - mmu_notifier_invalidate_range_end(mm, addr_start, addr_end); - WARN_ON("mm_find_pmd failed, addr:%llx\n"); - return -ENXIO; - } - pte = pte_offset_map_lock(mm, pmd, addr, &ptl); - flush_cache_page(vma, addr, pte_pfn(*pte)); - old_pte = ptep_clear_flush(vma, addr, pte); - if (pte_dirty(old_pte) || PageDirty(page)) - pages_dirty = true; - set_pte(pte, swp_entry_to_pte(swp_entry(SWP_USERSWAP_ENTRY, page_to_pfn(page)))); - dec_mm_counter(mm, MM_ANONPAGES); - page_remove_rmap(page, false); - put_page(page); - - pte_unmap_unlock(pte, ptl); - vma->vm_flags |= VM_USWAP; - page->mapping = NULL; - addr += PAGE_SIZE; - i++; - } - mmu_notifier_invalidate_range_end(mm, addr_start, addr_end); - - addr_start = new_addr; - addr_end = new_addr + len; - addr = addr_start; - vma = find_vma(mm, addr); - i = 0; - while (addr < addr_end) { - page = pages[i]; - if (addr > vma->vm_end - 1) - vma = find_vma(mm, addr); - err = vm_insert_page(vma, addr, page); - if (err) { - pr_err("vm_insert_page failed:%d\n", err); - } - i++; - addr += PAGE_SIZE; - } - vma->vm_flags |= VM_USWAP; - - if (pages_dirty) - new_addr = new_addr | USWAP_PAGES_DIRTY; - - return new_addr; 
-} -#endif - -/* - * The caller must hold down_write(&current->mm->mmap_sem). - */ -unsigned long do_mmap(struct file *file, unsigned long addr, +static inline +unsigned long __do_mmap(struct file *file, unsigned long addr, unsigned long len, unsigned long prot, unsigned long flags, vm_flags_t vm_flags, unsigned long pgoff, unsigned long *populate, @@ -1547,12 +1382,6 @@ unsigned long do_mmap(struct file *file, unsigned long addr, { struct mm_struct *mm = current->mm; int pkey = 0; -#ifdef CONFIG_USERSWAP - struct page **pages = NULL; - unsigned long addr_start = addr; - int i, page_num = 0; - unsigned long ret; -#endif *populate = 0; @@ -1569,17 +1398,6 @@ unsigned long do_mmap(struct file *file, unsigned long addr, if (!(file && path_noexec(&file->f_path))) prot |= PROT_EXEC; -#ifdef CONFIG_USERSWAP - if (enable_userswap && (flags & MAP_REPLACE)) { - if (offset_in_page(addr) || (len % PAGE_SIZE)) - return -EINVAL; - page_num = len / PAGE_SIZE; - ret = pages_can_be_swapped(mm, addr, len, &pages); - if (ret) - return ret; - } -#endif - /* force arch specific MAP_FIXED handling in get_unmapped_area */ if (flags & MAP_FIXED_NOREPLACE) flags |= MAP_FIXED; @@ -1752,25 +1570,203 @@ unsigned long do_mmap(struct file *file, unsigned long addr, if (flags & MAP_CHECKNODE) set_vm_checknode(&vm_flags, flags); -#ifdef CONFIG_USERSWAP - /* mark the vma as special to avoid merging with other vmas */ - if (enable_userswap && (flags & MAP_REPLACE)) - vm_flags |= VM_SPECIAL; -#endif - addr = mmap_region(file, addr, len, vm_flags, pgoff, uf); if (!IS_ERR_VALUE(addr) && ((vm_flags & VM_LOCKED) || (flags & (MAP_POPULATE | MAP_NONBLOCK)) == MAP_POPULATE)) *populate = len; -#ifndef CONFIG_USERSWAP return addr; -#else - if (!enable_userswap || !(flags & MAP_REPLACE)) - return addr; +} + +#ifdef CONFIG_USERSWAP +/* + * Check if pages between 'addr ~ addr+len' can be user swapped. If so, get + * the reference of the pages and return the pages through input parameters + * 'ppages'. 
+ */ +int pages_can_be_swapped(struct mm_struct *mm, unsigned long addr, + unsigned long len, struct page ***ppages) +{ + struct vm_area_struct *vma; + struct page *page = NULL; + struct page **pages = NULL; + unsigned long addr_start, addr_end; + unsigned long ret; + int i, page_num = 0; + + pages = kmalloc(sizeof(struct page *) * (len / PAGE_SIZE), GFP_KERNEL); + if (!pages) + return -ENOMEM; + + addr_start = addr; + addr_end = addr + len; + while (addr < addr_end) { + vma = find_vma(mm, addr); + if (!vma || !vma_is_anonymous(vma) || + (vma->vm_flags & VM_LOCKED) || vma->vm_file + || (vma->vm_flags & VM_STACK) || (vma->vm_flags & (VM_IO | VM_PFNMAP))) { + ret = -EINVAL; + goto out; + } + if (!(vma->vm_flags & VM_UFFD_MISSING)) { + ret = -EAGAIN; + goto out; + } +get_again: + /* follow_page will inc page ref, dec the ref after we remap the page */ + page = follow_page(vma, addr, FOLL_GET); + if (IS_ERR_OR_NULL(page)) { + ret = -ENODEV; + goto out; + } + pages[page_num] = page; + page_num++; + if (!PageAnon(page) || !PageSwapBacked(page) || PageHuge(page) || PageSwapCache(page)) { + ret = -EINVAL; + goto out; + } else if (PageTransCompound(page)) { + if (trylock_page(page)) { + if (!split_huge_page(page)) { + put_page(page); + page_num--; + unlock_page(page); + goto get_again; + } else { + unlock_page(page); + ret = -EINVAL; + goto out; + } + } else { + ret = -EINVAL; + goto out; + } + } + if (page_mapcount(page) > 1 || page_mapcount(page) + 1 != page_count(page)) { + ret = -EBUSY; + goto out; + } + addr += PAGE_SIZE; + } + + *ppages = pages; + return 0; + +out: + for (i = 0; i < page_num; i++) + put_page(pages[i]); + if (pages) + kfree(pages); + *ppages = NULL; + return ret; +} + +/* + * In uswap situation, we use the bit 0 of the returned address to indicate + * whether the pages are dirty. 
+ */ +#define USWAP_PAGES_DIRTY 1 + +/* unmap the pages between 'addr ~ addr+len' and remap them to a new address */ +unsigned long do_user_swap(struct mm_struct *mm, unsigned long addr_start, + unsigned long len, struct page **pages, unsigned long new_addr) +{ + struct vm_area_struct *vma; + struct page *page; + pmd_t *pmd; + pte_t *pte, old_pte; + spinlock_t *ptl; + unsigned long addr, addr_end; + bool pages_dirty = false; + int i, err; + addr_end = addr_start + len; + lru_add_drain(); + mmu_notifier_invalidate_range_start(mm, addr_start, addr_end); + addr = addr_start; + i = 0; + while (addr < addr_end) { + page = pages[i]; + vma = find_vma(mm, addr); + if (!vma) { + mmu_notifier_invalidate_range_end(mm, addr_start, addr_end); + WARN_ON("find_vma failed\n"); + return -EINVAL; + } + pmd = mm_find_pmd(mm, addr); + if (!pmd) { + mmu_notifier_invalidate_range_end(mm, addr_start, addr_end); + WARN_ON("mm_find_pmd failed, addr:%llx\n"); + return -ENXIO; + } + pte = pte_offset_map_lock(mm, pmd, addr, &ptl); + flush_cache_page(vma, addr, pte_pfn(*pte)); + old_pte = ptep_clear_flush(vma, addr, pte); + if (pte_dirty(old_pte) || PageDirty(page)) + pages_dirty = true; + set_pte(pte, swp_entry_to_pte(swp_entry(SWP_USERSWAP_ENTRY, page_to_pfn(page)))); + dec_mm_counter(mm, MM_ANONPAGES); + page_remove_rmap(page, false); + put_page(page); + + pte_unmap_unlock(pte, ptl); + vma->vm_flags |= VM_USWAP; + page->mapping = NULL; + addr += PAGE_SIZE; + i++; + } + mmu_notifier_invalidate_range_end(mm, addr_start, addr_end); + + addr_start = new_addr; + addr_end = new_addr + len; + addr = addr_start; + vma = find_vma(mm, addr); + i = 0; + while (addr < addr_end) { + page = pages[i]; + if (addr > vma->vm_end - 1) + vma = find_vma(mm, addr); + err = vm_insert_page(vma, addr, page); + if (err) { + pr_err("vm_insert_page failed:%d\n", err); + } + i++; + addr += PAGE_SIZE; + } + vma->vm_flags |= VM_USWAP; + + if (pages_dirty) + new_addr = new_addr | USWAP_PAGES_DIRTY; + + return new_addr; +} 
+ +static inline +unsigned long do_uswap_mmap(struct file *file, unsigned long addr, + unsigned long len, unsigned long prot, + unsigned long flags, vm_flags_t vm_flags, + unsigned long pgoff, unsigned long *populate, + struct list_head *uf) +{ + struct mm_struct *mm = current->mm; + unsigned long addr_start = addr; + struct page **pages = NULL; + unsigned long ret; + int i, page_num = 0; + + if (!len || offset_in_page(addr) || (len % PAGE_SIZE)) + return -EINVAL; + + page_num = len / PAGE_SIZE; + ret = pages_can_be_swapped(mm, addr, len, &pages); + if (ret) + return ret; + + /* mark the vma as special to avoid merging with other vmas */ + vm_flags |= VM_SPECIAL; + + addr = __do_mmap(file, addr, len, prot, flags, vm_flags, pgoff, + populate, uf); if (IS_ERR_VALUE(addr)) { - pr_info("mmap_region failed, return addr:%lx\n", addr); ret = addr; goto out; } @@ -1780,10 +1776,30 @@ unsigned long do_mmap(struct file *file, unsigned long addr, /* follow_page() above increased the reference*/ for (i = 0; i < page_num; i++) put_page(pages[i]); + if (pages) kfree(pages); + return ret; +} +#endif + +/* + * The caller must hold down_write(&current->mm->mmap_sem). + */ +unsigned long do_mmap(struct file *file, unsigned long addr, + unsigned long len, unsigned long prot, + unsigned long flags, vm_flags_t vm_flags, + unsigned long pgoff, unsigned long *populate, + struct list_head *uf) +{ +#ifdef CONFIG_USERSWAP + if (enable_userswap && (flags & MAP_REPLACE)) + return do_uswap_mmap(file, addr, len, prot, flags, vm_flags, + pgoff, populate, uf); #endif + return __do_mmap(file, addr, len, prot, flags, vm_flags, + pgoff, populate, uf); } unsigned long ksys_mmap_pgoff(unsigned long addr, unsigned long len, -- 2.25.1
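The refactoring pattern in this patch, keeping the core function oblivious to the page array while a wrapper owns the allocation and frees it on every exit path, can be sketched in plain C. `inner_mmap` and `outer_mmap` are illustrative stand-ins for `__do_mmap()` and `do_uswap_mmap()`, not kernel APIs, and the failure condition is made up:

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

static int pages_freed;		/* instrumentation for the sketch */

/* Stand-in for __do_mmap(): the core mapping logic, which knows
 * nothing about the caller's page array (hypothetical failure when
 * len == 0). */
static long inner_mmap(unsigned long addr, unsigned long len)
{
	if (len == 0)
		return -EINVAL;
	return (long)addr;
}

/* Stand-in for the do_uswap_mmap() wrapper: it owns the allocation,
 * so every exit path, including inner_mmap() failing, releases it.
 * The leak fixed by the patch was exactly the missing error-path
 * free when the inner call returned an error. */
static long outer_mmap(unsigned long addr, unsigned long len)
{
	void **pages;
	long ret;

	pages = malloc(8 * sizeof(*pages));
	if (!pages)
		return -ENOMEM;

	ret = inner_mmap(addr, len);
	/* success or failure, the array is freed before returning */
	free(pages);
	pages_freed++;
	return ret;
}
```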
Re: [PATCH openEuler-21.03] MIPS: Fix kernel hang under FUNCTION_GRAPH_TRACER and PREEMPT_TRACER
by chengjian (D) 18 Oct '21

Reviewed-by: Cheng Jian <cj.chengjian(a)huawei.com>

On 2021/10/16 23:06, jack wrote:
> From: Tiezhu Yang <yangtiezhu(a)loongson.cn>
>
> stable inclusion
> from stable-v5.10.44
> commit 7519ece673e300b0362572edbde7e030552705ec
> bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=417
> CVE: NA
>
> -------------------------------------------------
>
> [ Upstream commit 78cf0eb926cb1abeff2106bae67752e032fe5f3e ]
>
> When updating the latest mainline kernel with the following three
> configs, the kernel hangs during startup:
>
> (1) CONFIG_FUNCTION_GRAPH_TRACER=y
> (2) CONFIG_PREEMPT_TRACER=y
> (3) CONFIG_FTRACE_STARTUP_TEST=y
>
> When updating the latest mainline kernel with the above two configs (1)
> and (2), the kernel starts normally, but it still hangs when executing
> the following command:
>
> echo "function_graph" > /sys/kernel/debug/tracing/current_tracer
>
> Without CONFIG_PREEMPT_TRACER=y, both kinds of kernel hang disappear,
> so at first glance CONFIG_PREEMPT_TRACER seems to interact badly with
> the function_graph tracer.
>
> I used ejtag to find that the epc address is related to
> preempt_enable() in arch/mips/lib/mips-atomic.c. Because function
> tracing can trace the preempt_{enable,disable} calls themselves,
> replace them with preempt_{enable,disable}_notrace to prevent function
> tracing from going into an infinite loop; this fixes the kernel hang.
>
> By the way, it seems that this commit is a complement and improvement
> of commit f93a1a00f2bd ("MIPS: Fix crash that occurs when function
> tracing is enabled").
>
> Signed-off-by: Tiezhu Yang <yangtiezhu(a)loongson.cn>
> Cc: Steven Rostedt <rostedt(a)goodmis.org>
> Signed-off-by: Thomas Bogendoerfer <tsbogend(a)alpha.franken.de>
> Signed-off-by: Sasha Levin <sashal(a)kernel.org>
> Signed-off-by: jack <18380124974(a)163.com>
> ---
>  arch/mips/lib/mips-atomic.c | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/arch/mips/lib/mips-atomic.c b/arch/mips/lib/mips-atomic.c
> index de03838b343b..a9b72eacfc0b 100644
> --- a/arch/mips/lib/mips-atomic.c
> +++ b/arch/mips/lib/mips-atomic.c
> @@ -37,7 +37,7 @@
>   */
>  notrace void arch_local_irq_disable(void)
>  {
> -	preempt_disable();
> +	preempt_disable_notrace();
>
>  	__asm__ __volatile__(
>  	"	.set	push		\n"
> @@ -53,7 +53,7 @@ notrace void arch_local_irq_disable(void)
>  	: /* no inputs */
>  	: "memory");
>
> -	preempt_enable();
> +	preempt_enable_notrace();
>  }
>  EXPORT_SYMBOL(arch_local_irq_disable);
>
> @@ -61,7 +61,7 @@ notrace unsigned long arch_local_irq_save(void)
>  {
>  	unsigned long flags;
>
> -	preempt_disable();
> +	preempt_disable_notrace();
>
>  	__asm__ __volatile__(
>  	"	.set	push		\n"
> @@ -78,7 +78,7 @@ notrace unsigned long arch_local_irq_save(void)
>  	: /* no inputs */
>  	: "memory");
>
> -	preempt_enable();
> +	preempt_enable_notrace();
>
>  	return flags;
>  }
> @@ -88,7 +88,7 @@ notrace void arch_local_irq_restore(unsigned long flags)
>  {
>  	unsigned long __tmp1;
>
> -	preempt_disable();
> +	preempt_disable_notrace();
>
>  	__asm__ __volatile__(
>  	"	.set	push		\n"
> @@ -106,7 +106,7 @@ notrace void arch_local_irq_restore(unsigned long flags)
>  	: "0" (flags)
>  	: "memory");
>
> -	preempt_enable();
> +	preempt_enable_notrace();
>  }
>  EXPORT_SYMBOL(arch_local_irq_restore);
>
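The infinite loop described above happens because the trace hook itself calls a function that is traced, so entering that function fires the hook again. A minimal userspace simulation (with a depth cap so the sketch terminates; these names are illustrative stand-ins, not the real kernel symbols):

```c
#include <assert.h>

static int hook_calls;
static int depth, max_depth;

static void traced_preempt_disable(void);

/* Analogue of preempt_disable_notrace(): does the work without
 * firing the trace hook. */
static void preempt_disable_notrace_sim(void)
{
}

/* Entry hook fired for every traced function.  It needs to toggle
 * preemption itself; if it uses the *traced* toggle, entering that
 * toggle fires the hook again and the recursion never ends. */
static void trace_entry(int use_notrace)
{
	hook_calls++;
	depth++;
	if (depth > max_depth)
		max_depth = depth;
	if (depth < 5) {	/* cap so the sketch halts; the real bug hangs */
		if (use_notrace)
			preempt_disable_notrace_sim();	/* no re-entry */
		else
			traced_preempt_disable();	/* re-enters hook */
	}
	depth--;
}

/* Analogue of the pre-patch preempt_disable(): it is itself traced,
 * so calling it re-enters trace_entry(). */
static void traced_preempt_disable(void)
{
	trace_entry(0);
}
```

With the notrace variant the hook fires once; with the traced variant it recurses until the cap, which in the kernel (with no cap) is the hang the patch fixes.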
[PATCH kernel-4.19] arm64/mpam: fix the problem that the ret variable is not initialized
by Yang Yingliang 18 Oct '21

From: wenzhiwei11 <wenzhiwei(a)kylinos.cn>

kylin inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4AHUL
CVE: NA

---------------------------------------------------

Initialize the value of "ret" in "schemata_list_init()".

Signed-off-by: wenzhiwei11 <wenzhiwei(a)kylinos.cn> # openEuler_contributor
Reviewed-by: Wang ShaoBo <bobo.shaobowang(a)huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Signed-off-by: Cheng Jian <cj.chengjian(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
 arch/arm64/kernel/mpam/mpam_ctrlmon.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/mpam/mpam_ctrlmon.c b/arch/arm64/kernel/mpam/mpam_ctrlmon.c
index b1d32d432556c..b7508ad8c5314 100644
--- a/arch/arm64/kernel/mpam/mpam_ctrlmon.c
+++ b/arch/arm64/kernel/mpam/mpam_ctrlmon.c
@@ -127,7 +127,7 @@ static int add_schema(enum resctrl_conf_type t, struct resctrl_resource *r)
 int schemata_list_init(void)
 {
-	int ret;
+	int ret = 0;
 	struct mpam_resctrl_res *res;
 	struct resctrl_resource *r;
-- 
2.25.1
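Why the initializer matters: if `ret` is only assigned inside a loop, the function returns an indeterminate value whenever the loop body never runs. A stripped-down sketch (`add_one` and `list_init` are hypothetical stand-ins for `add_schema()` and `schemata_list_init()`):

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical per-resource step, standing in for add_schema(). */
static int add_one(int id)
{
	return (id < 0) ? -EINVAL : 0;
}

/* Sketch of schemata_list_init(): 'ret' is only assigned inside the
 * loop, so with zero resources it would otherwise be returned
 * uninitialized - hence the patch's 'int ret = 0'. */
static int list_init(const int *ids, int n)
{
	int ret = 0;	/* the fix: a defined value for the empty case */
	int i;

	for (i = 0; i < n; i++) {
		ret = add_one(ids[i]);
		if (ret)
			break;
	}
	return ret;	/* without the initializer, garbage when n == 0 */
}
```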
[PATCH openEuler-21.03] drm: Lock pointer access in drm_master_release()
by holmes 18 Oct '21

From: Desmond Cheong Zhi Xi <desmondcheongzx(a)gmail.com>

stable inclusion
from stable-v5.10.44
commit aa8591a58cbd2986090709e4202881f18e8ae30e
bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=435
CVE: NA

-------------------------------------------------

commit c336a5ee984708db4826ef9e47d184e638e29717 upstream.

This patch eliminates the following smatch warning:
drivers/gpu/drm/drm_auth.c:320 drm_master_release() warn: unlocked access 'master' (line 318) expected lock '&dev->master_mutex'

The 'file_priv->master' field should be protected by the mutex lock
'&dev->master_mutex'. This is because other processes can concurrently
modify this field and free the current 'file_priv->master' pointer.
This could result in a use-after-free error when 'master' is
dereferenced in subsequent function calls to
'drm_legacy_lock_master_cleanup()' or to 'drm_lease_revoke()'.

An example of a scenario that would produce this error can be seen from
a similar bug in 'drm_getunique()' that was reported by Syzbot:
https://syzkaller.appspot.com/bug?id=148d2f1dfac64af52ffd27b661981a540724f8…

In the Syzbot report, another process concurrently acquired the
device's master mutex in 'drm_setmaster_ioctl()', then overwrote
'fpriv->master' in 'drm_new_set_master()'. The old value of
'fpriv->master' was subsequently freed before the mutex was unlocked.

Reported-by: Dan Carpenter <dan.carpenter(a)oracle.com>
Signed-off-by: Desmond Cheong Zhi Xi <desmondcheongzx(a)gmail.com>
Cc: stable(a)vger.kernel.org
Signed-off-by: Daniel Vetter <daniel.vetter(a)ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210609092119.173590-1-desmo…
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: holmes <holmes(a)my.swjtu.edu.cn>
---
 drivers/gpu/drm/drm_auth.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_auth.c b/drivers/gpu/drm/drm_auth.c
index f2d46b7ac6f9..232abbba3686 100644
--- a/drivers/gpu/drm/drm_auth.c
+++ b/drivers/gpu/drm/drm_auth.c
@@ -314,9 +314,10 @@ int drm_master_open(struct drm_file *file_priv)
 void drm_master_release(struct drm_file *file_priv)
 {
 	struct drm_device *dev = file_priv->minor->dev;
-	struct drm_master *master = file_priv->master;
+	struct drm_master *master;
 
 	mutex_lock(&dev->master_mutex);
+	master = file_priv->master;
 	if (file_priv->magic)
 		idr_remove(&file_priv->master->magic_map, file_priv->magic);
-- 
2.23.0
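The essence of the fix is ordering: load the shared pointer only after taking the lock, so a concurrent writer (who must also hold the lock) cannot free it between the load and the use. A sketch with a toy lock, not the real DRM structures; all names here are illustrative:

```c
#include <assert.h>
#include <stddef.h>

struct drm_master_sim {
	int magic;
};

/* Toy lock: a real implementation would use a mutex; the point of
 * the sketch is only where the load sits relative to lock/unlock. */
static int master_mutex_held;
static struct drm_master_sim *shared_master;	/* swapped by others */
static struct drm_master_sim a_master = { 42 };

static void master_lock(void)   { master_mutex_held = 1; }
static void master_unlock(void) { master_mutex_held = 0; }

/* Patched pattern from drm_master_release(): before the patch, the
 * pointer was loaded *before* mutex_lock(), leaving a window where
 * another thread could free it; now the load happens under the lock. */
static int release_master(void)
{
	struct drm_master_sim *master;
	int magic;

	master_lock();
	master = shared_master;		/* load only under the lock */
	magic = master ? master->magic : -1;
	master_unlock();
	return magic;
}
```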
[PATCH kernel-4.19 1/3] NFS: Add a helper nfs_client_for_each_server()
by Yang Yingliang 18 Oct '21

From: Trond Myklebust <trond.myklebust(a)hammerspace.com>

mainline inclusion
from mainline-v5.7-rc1
commit 3c9e502b59fbd243cfac7cc6c875e432d285102a
category: bugfix
bugzilla: 182252
CVE: NA

-----------------------------------------------

Add a helper nfs_client_for_each_server() to iterate through all the
filesystems that are attached to a struct nfs_client, and apply a
function to all the active ones.

Signed-off-by: Trond Myklebust <trond.myklebust(a)hammerspace.com>
Signed-off-by: ChenXiaoSong <chenxiaosong2(a)huawei.com>
Reviewed-by: Zhang Xiaoxu <zhangxiaoxu5(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
 fs/nfs/internal.h |  4 +++-
 fs/nfs/super.c    | 35 +++++++++++++++++++++++++++++++++++
 2 files changed, 38 insertions(+), 1 deletion(-)

diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
index cc07189a501fa..c3c0b40f163f1 100644
--- a/fs/nfs/internal.h
+++ b/fs/nfs/internal.h
@@ -417,7 +417,9 @@ extern int __init register_nfs_fs(void);
 extern void __exit unregister_nfs_fs(void);
 extern bool nfs_sb_active(struct super_block *sb);
 extern void nfs_sb_deactive(struct super_block *sb);
-
+extern int nfs_client_for_each_server(struct nfs_client *clp,
+			int (*fn)(struct nfs_server *, void *),
+			void *data);
 /* io.c */
 extern void nfs_start_io_read(struct inode *inode);
 extern void nfs_end_io_read(struct inode *inode);
diff --git a/fs/nfs/super.c b/fs/nfs/super.c
index fe107348aabe6..48bcdcf4d039e 100644
--- a/fs/nfs/super.c
+++ b/fs/nfs/super.c
@@ -429,6 +429,41 @@ void nfs_sb_deactive(struct super_block *sb)
 }
 EXPORT_SYMBOL_GPL(nfs_sb_deactive);
 
+static int __nfs_list_for_each_server(struct list_head *head,
+		int (*fn)(struct nfs_server *, void *),
+		void *data)
+{
+	struct nfs_server *server, *last = NULL;
+	int ret = 0;
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(server, head, client_link) {
+		if (!nfs_sb_active(server->super))
+			continue;
+		rcu_read_unlock();
+		if (last)
+			nfs_sb_deactive(last->super);
+		last = server;
+		ret = fn(server, data);
+		if (ret)
+			goto out;
+		rcu_read_lock();
+	}
+	rcu_read_unlock();
+out:
+	if (last)
+		nfs_sb_deactive(last->super);
+	return ret;
+}
+
+int nfs_client_for_each_server(struct nfs_client *clp,
+		int (*fn)(struct nfs_server *, void *),
+		void *data)
+{
+	return __nfs_list_for_each_server(&clp->cl_superblocks, fn, data);
+}
+EXPORT_SYMBOL_GPL(nfs_client_for_each_server);
+
 /*
  * Deliver file system statistics to userspace
  */
-- 
2.25.1
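The helper's shape is a classic callback iterator: walk a list, skip inactive entries, apply `fn()` to each active one, and stop on the first non-zero return. It can be sketched in userspace with the RCU locking and superblock refcounting elided; `struct server`, `for_each_server`, and `count_cb` are illustrative names, not the NFS types:

```c
#include <assert.h>

struct server {
	int id;
	int active;
	struct server *next;
};

/* Userspace sketch of nfs_client_for_each_server(): visit every
 * active entry, apply fn(), and stop on the first non-zero return.
 * The real helper additionally pins each superblock with
 * nfs_sb_active() and drops the RCU read lock around fn(). */
static int for_each_server(struct server *head,
			   int (*fn)(struct server *, void *), void *data)
{
	struct server *s;
	int ret = 0;

	for (s = head; s; s = s->next) {
		if (!s->active)
			continue;
		ret = fn(s, data);
		if (ret)
			break;
	}
	return ret;
}

/* Example callback: count the active servers visited. */
static int count_cb(struct server *s, void *data)
{
	(void)s;
	(*(int *)data)++;
	return 0;
}
```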
[PATCH kernel-4.19 1/2] mmap: userswap: fix memory leak in do_mmap
by Yang Yingliang 18 Oct '21

From: Xiongfeng Wang <wangxiongfeng2(a)huawei.com>

hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4AHP2
CVE: NA

-------------------------------------------------

When userswap is enabled, the memory pointed to by 'pages' is not freed
on the abnormal branch in do_mmap(). To fix the issue and keep do_mmap()
mostly unchanged, we rename do_mmap() to __do_mmap() and extract the
memory allocation and free code out of __do_mmap(). When __do_mmap()
returns an error value, we go to the error label to free the memory.

Signed-off-by: Xiongfeng Wang <wangxiongfeng2(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
 mm/mmap.c | 404 +++++++++++++++++++++++++++---------------------------
 1 file changed, 204 insertions(+), 200 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 378e1869ac7a0..69848726063c7 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1384,172 +1384,6 @@ static unsigned long __mmap_region(struct mm_struct *mm,
 			unsigned long len, vm_flags_t vm_flags,
 			unsigned long pgoff, struct list_head *uf);
 
-#ifdef CONFIG_USERSWAP
-/*
- * Check if pages between 'addr ~ addr+len' can be user swapped. If so, get
- * the reference of the pages and return the pages through input parameters
- * 'ppages'.
- */ -int pages_can_be_swapped(struct mm_struct *mm, unsigned long addr, - unsigned long len, struct page ***ppages) -{ - struct vm_area_struct *vma; - struct page *page = NULL; - struct page **pages = NULL; - unsigned long addr_start, addr_end; - unsigned long ret; - int i, page_num = 0; - - pages = kmalloc(sizeof(struct page *) * (len / PAGE_SIZE), GFP_KERNEL); - if (!pages) - return -ENOMEM; - - addr_start = addr; - addr_end = addr + len; - while (addr < addr_end) { - vma = find_vma(mm, addr); - if (!vma || !vma_is_anonymous(vma) || - (vma->vm_flags & VM_LOCKED) || vma->vm_file - || (vma->vm_flags & VM_STACK) || (vma->vm_flags & (VM_IO | VM_PFNMAP))) { - ret = -EINVAL; - goto out; - } - if (!(vma->vm_flags & VM_UFFD_MISSING)) { - ret = -EAGAIN; - goto out; - } -get_again: - /* follow_page will inc page ref, dec the ref after we remap the page */ - page = follow_page(vma, addr, FOLL_GET); - if (IS_ERR_OR_NULL(page)) { - ret = -ENODEV; - goto out; - } - pages[page_num] = page; - page_num++; - if (!PageAnon(page) || !PageSwapBacked(page) || PageHuge(page) || PageSwapCache(page)) { - ret = -EINVAL; - goto out; - } else if (PageTransCompound(page)) { - if (trylock_page(page)) { - if (!split_huge_page(page)) { - put_page(page); - page_num--; - unlock_page(page); - goto get_again; - } else { - unlock_page(page); - ret = -EINVAL; - goto out; - } - } else { - ret = -EINVAL; - goto out; - } - } - if (page_mapcount(page) > 1 || page_mapcount(page) + 1 != page_count(page)) { - ret = -EBUSY; - goto out; - } - addr += PAGE_SIZE; - } - - *ppages = pages; - return 0; - -out: - for (i = 0; i < page_num; i++) - put_page(pages[i]); - if (pages) - kfree(pages); - *ppages = NULL; - return ret; -} - -/* - * In uswap situation, we use the bit 0 of the returned address to indicate - * whether the pages are dirty. 
- */
-#define USWAP_PAGES_DIRTY 1
-
-/* unmap the pages between 'addr ~ addr+len' and remap them to a new address */
-unsigned long do_user_swap(struct mm_struct *mm, unsigned long addr_start,
-		unsigned long len, struct page **pages, unsigned long new_addr)
-{
-	struct vm_area_struct *vma;
-	struct page *page;
-	pmd_t *pmd;
-	pte_t *pte, old_pte;
-	spinlock_t *ptl;
-	unsigned long addr, addr_end;
-	bool pages_dirty = false;
-	int i, err;
-
-	addr_end = addr_start + len;
-	lru_add_drain();
-	mmu_notifier_invalidate_range_start(mm, addr_start, addr_end);
-	addr = addr_start;
-	i = 0;
-	while (addr < addr_end) {
-		page = pages[i];
-		vma = find_vma(mm, addr);
-		if (!vma) {
-			mmu_notifier_invalidate_range_end(mm, addr_start, addr_end);
-			WARN_ON("find_vma failed\n");
-			return -EINVAL;
-		}
-		pmd = mm_find_pmd(mm, addr);
-		if (!pmd) {
-			mmu_notifier_invalidate_range_end(mm, addr_start, addr_end);
-			WARN_ON("mm_find_pmd failed, addr:%llx\n");
-			return -ENXIO;
-		}
-		pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
-		flush_cache_page(vma, addr, pte_pfn(*pte));
-		old_pte = ptep_clear_flush(vma, addr, pte);
-		if (pte_dirty(old_pte) || PageDirty(page))
-			pages_dirty = true;
-		set_pte(pte, swp_entry_to_pte(swp_entry(SWP_USERSWAP_ENTRY, page_to_pfn(page))));
-		dec_mm_counter(mm, MM_ANONPAGES);
-		page_remove_rmap(page, false);
-		put_page(page);
-
-		pte_unmap_unlock(pte, ptl);
-		vma->vm_flags |= VM_USWAP;
-		page->mapping = NULL;
-		addr += PAGE_SIZE;
-		i++;
-	}
-	mmu_notifier_invalidate_range_end(mm, addr_start, addr_end);
-
-	addr_start = new_addr;
-	addr_end = new_addr + len;
-	addr = addr_start;
-	vma = find_vma(mm, addr);
-	i = 0;
-	while (addr < addr_end) {
-		page = pages[i];
-		if (addr > vma->vm_end - 1)
-			vma = find_vma(mm, addr);
-		err = vm_insert_page(vma, addr, page);
-		if (err) {
-			pr_err("vm_insert_page failed:%d\n", err);
-		}
-		i++;
-		addr += PAGE_SIZE;
-	}
-	vma->vm_flags |= VM_USWAP;
-
-	if (pages_dirty)
-		new_addr = new_addr | USWAP_PAGES_DIRTY;
-
-	return new_addr;
-}
-#endif
-
-/*
- * The caller must hold down_write(&current->mm->mmap_sem).
- */
 unsigned long __do_mmap(struct mm_struct *mm, struct file *file,
 			unsigned long addr, unsigned long len,
 			unsigned long prot, unsigned long flags,
@@ -1557,12 +1391,6 @@ unsigned long __do_mmap(struct mm_struct *mm, struct file *file,
 			unsigned long *populate, struct list_head *uf)
 {
 	int pkey = 0;
-#ifdef CONFIG_USERSWAP
-	struct page **pages = NULL;
-	unsigned long addr_start = addr;
-	int i, page_num = 0;
-	unsigned long ret;
-#endif
 
 	*populate = 0;
 
@@ -1579,17 +1407,6 @@ unsigned long __do_mmap(struct mm_struct *mm, struct file *file,
 	if (!(file && path_noexec(&file->f_path)))
 		prot |= PROT_EXEC;
 
-#ifdef CONFIG_USERSWAP
-	if (enable_userswap && (flags & MAP_REPLACE)) {
-		if (offset_in_page(addr) || (len % PAGE_SIZE))
-			return -EINVAL;
-		page_num = len / PAGE_SIZE;
-		ret = pages_can_be_swapped(mm, addr, len, &pages);
-		if (ret)
-			return ret;
-	}
-#endif
-
 	/* force arch specific MAP_FIXED handling in get_unmapped_area */
 	if (flags & MAP_FIXED_NOREPLACE)
 		flags |= MAP_FIXED;
@@ -1766,25 +1583,203 @@ unsigned long __do_mmap(struct mm_struct *mm, struct file *file,
 	if (flags & MAP_CHECKNODE)
 		set_vm_checknode(&vm_flags, flags);
 
-#ifdef CONFIG_USERSWAP
-	/* mark the vma as special to avoid merging with other vmas */
-	if (enable_userswap && (flags & MAP_REPLACE))
-		vm_flags |= VM_SPECIAL;
-#endif
-
 	addr = __mmap_region(mm, file, addr, len, vm_flags, pgoff, uf);
 	if (!IS_ERR_VALUE(addr) &&
 	    ((vm_flags & VM_LOCKED) ||
 	     (flags & (MAP_POPULATE | MAP_NONBLOCK)) == MAP_POPULATE))
 		*populate = len;
-#ifndef CONFIG_USERSWAP
 	return addr;
-#else
-	if (!enable_userswap || !(flags & MAP_REPLACE))
-		return addr;
+}
+#ifdef CONFIG_USERSWAP
+/*
+ * Check if pages between 'addr ~ addr+len' can be user swapped. If so, get
+ * the reference of the pages and return the pages through input parameters
+ * 'ppages'.
+ */
+int pages_can_be_swapped(struct mm_struct *mm, unsigned long addr,
+			 unsigned long len, struct page ***ppages)
+{
+	struct vm_area_struct *vma;
+	struct page *page = NULL;
+	struct page **pages = NULL;
+	unsigned long addr_start, addr_end;
+	unsigned long ret;
+	int i, page_num = 0;
+
+	pages = kmalloc(sizeof(struct page *) * (len / PAGE_SIZE), GFP_KERNEL);
+	if (!pages)
+		return -ENOMEM;
+
+	addr_start = addr;
+	addr_end = addr + len;
+	while (addr < addr_end) {
+		vma = find_vma(mm, addr);
+		if (!vma || !vma_is_anonymous(vma) ||
+		    (vma->vm_flags & VM_LOCKED) || vma->vm_file
+		    || (vma->vm_flags & VM_STACK) || (vma->vm_flags & (VM_IO | VM_PFNMAP))) {
+			ret = -EINVAL;
+			goto out;
+		}
+		if (!(vma->vm_flags & VM_UFFD_MISSING)) {
+			ret = -EAGAIN;
+			goto out;
+		}
+get_again:
+		/* follow_page will inc page ref, dec the ref after we remap the page */
+		page = follow_page(vma, addr, FOLL_GET);
+		if (IS_ERR_OR_NULL(page)) {
+			ret = -ENODEV;
+			goto out;
+		}
+		pages[page_num] = page;
+		page_num++;
+		if (!PageAnon(page) || !PageSwapBacked(page) || PageHuge(page) || PageSwapCache(page)) {
+			ret = -EINVAL;
+			goto out;
+		} else if (PageTransCompound(page)) {
+			if (trylock_page(page)) {
+				if (!split_huge_page(page)) {
+					put_page(page);
+					page_num--;
+					unlock_page(page);
+					goto get_again;
+				} else {
+					unlock_page(page);
+					ret = -EINVAL;
+					goto out;
+				}
+			} else {
+				ret = -EINVAL;
+				goto out;
+			}
+		}
+		if (page_mapcount(page) > 1 || page_mapcount(page) + 1 != page_count(page)) {
+			ret = -EBUSY;
+			goto out;
+		}
+		addr += PAGE_SIZE;
+	}
+
+	*ppages = pages;
+	return 0;
+
+out:
+	for (i = 0; i < page_num; i++)
+		put_page(pages[i]);
+	if (pages)
+		kfree(pages);
+	*ppages = NULL;
+	return ret;
+}
+
+/*
+ * In uswap situation, we use the bit 0 of the returned address to indicate
+ * whether the pages are dirty.
+ */
+#define USWAP_PAGES_DIRTY 1
+
+/* unmap the pages between 'addr ~ addr+len' and remap them to a new address */
+unsigned long do_user_swap(struct mm_struct *mm, unsigned long addr_start,
+			   unsigned long len, struct page **pages, unsigned long new_addr)
+{
+	struct vm_area_struct *vma;
+	struct page *page;
+	pmd_t *pmd;
+	pte_t *pte, old_pte;
+	spinlock_t *ptl;
+	unsigned long addr, addr_end;
+	bool pages_dirty = false;
+	int i, err;
+
+	addr_end = addr_start + len;
+	lru_add_drain();
+	mmu_notifier_invalidate_range_start(mm, addr_start, addr_end);
+	addr = addr_start;
+	i = 0;
+	while (addr < addr_end) {
+		page = pages[i];
+		vma = find_vma(mm, addr);
+		if (!vma) {
+			mmu_notifier_invalidate_range_end(mm, addr_start, addr_end);
+			WARN_ON("find_vma failed\n");
+			return -EINVAL;
+		}
+		pmd = mm_find_pmd(mm, addr);
+		if (!pmd) {
+			mmu_notifier_invalidate_range_end(mm, addr_start, addr_end);
+			WARN_ON("mm_find_pmd failed, addr:%llx\n");
+			return -ENXIO;
+		}
+		pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
+		flush_cache_page(vma, addr, pte_pfn(*pte));
+		old_pte = ptep_clear_flush(vma, addr, pte);
+		if (pte_dirty(old_pte) || PageDirty(page))
+			pages_dirty = true;
+		set_pte(pte, swp_entry_to_pte(swp_entry(SWP_USERSWAP_ENTRY, page_to_pfn(page))));
+		dec_mm_counter(mm, MM_ANONPAGES);
+		page_remove_rmap(page, false);
+		put_page(page);
+
+		pte_unmap_unlock(pte, ptl);
+		vma->vm_flags |= VM_USWAP;
+		page->mapping = NULL;
+		addr += PAGE_SIZE;
+		i++;
+	}
+	mmu_notifier_invalidate_range_end(mm, addr_start, addr_end);
+
+	addr_start = new_addr;
+	addr_end = new_addr + len;
+	addr = addr_start;
+	vma = find_vma(mm, addr);
+	i = 0;
+	while (addr < addr_end) {
+		page = pages[i];
+		if (addr > vma->vm_end - 1)
+			vma = find_vma(mm, addr);
+		err = vm_insert_page(vma, addr, page);
+		if (err) {
+			pr_err("vm_insert_page failed:%d\n", err);
+		}
+		i++;
+		addr += PAGE_SIZE;
+	}
+	vma->vm_flags |= VM_USWAP;
+
+	if (pages_dirty)
+		new_addr = new_addr | USWAP_PAGES_DIRTY;
+
+	return new_addr;
+}
+
+static inline
+unsigned long do_uswap_mmap(struct file *file, unsigned long addr,
+			    unsigned long len, unsigned long prot,
+			    unsigned long flags, vm_flags_t vm_flags,
+			    unsigned long pgoff, unsigned long *populate,
+			    struct list_head *uf)
+{
+	struct mm_struct *mm = current->mm;
+	unsigned long addr_start = addr;
+	struct page **pages = NULL;
+	unsigned long ret;
+	int i, page_num = 0;
+
+	if (!len || offset_in_page(addr) || (len % PAGE_SIZE))
+		return -EINVAL;
+
+	page_num = len / PAGE_SIZE;
+	ret = pages_can_be_swapped(mm, addr, len, &pages);
+	if (ret)
+		return ret;
+
+	/* mark the vma as special to avoid merging with other vmas */
+	vm_flags |= VM_SPECIAL;
+
+	addr = __do_mmap(current->mm, file, addr, len, prot, flags, vm_flags,
+			 pgoff, populate, uf);
 	if (IS_ERR_VALUE(addr)) {
-		pr_info("mmap_region failed, return addr:%lx\n", addr);
 		ret = addr;
 		goto out;
 	}
@@ -1794,23 +1789,32 @@ unsigned long __do_mmap(struct mm_struct *mm, struct file *file,
 	/* follow_page() above increased the reference*/
 	for (i = 0; i < page_num; i++)
 		put_page(pages[i]);
+
 	if (pages)
 		kfree(pages);
+
 	return ret;
-#endif
 }
+#endif
 
 /*
  * The caller must hold down_write(&current->mm->mmap_sem).
  */
-unsigned long do_mmap(struct file *file, unsigned long addr, unsigned long len,
-		unsigned long prot, unsigned long flags, vm_flags_t vm_flags,
-		unsigned long pgoff, unsigned long *populate, struct list_head *uf)
+unsigned long do_mmap(struct file *file, unsigned long addr,
+		      unsigned long len, unsigned long prot,
+		      unsigned long flags, vm_flags_t vm_flags,
+		      unsigned long pgoff, unsigned long *populate,
+		      struct list_head *uf)
 {
-	return __do_mmap(current->mm, file, addr, len, prot, flags, vm_flags, pgoff, populate, uf);
+#ifdef CONFIG_USERSWAP
+	if (enable_userswap && (flags & MAP_REPLACE))
+		return do_uswap_mmap(file, addr, len, prot, flags, vm_flags,
+				     pgoff, populate, uf);
+#endif
+	return __do_mmap(current->mm, file, addr, len, prot, flags, vm_flags,
+			 pgoff, populate, uf);
}
-
 unsigned long ksys_mmap_pgoff(unsigned long addr, unsigned long len,
 			      unsigned long prot, unsigned long flags,
 			      unsigned long fd, unsigned long pgoff)
-- 
2.25.1
[PATCH kernel-4.19] blktrace: Fix uaf in blk_trace access after removing by sysfs
by Yang Yingliang 18 Oct '21
From: Zhihao Cheng <chengzhihao1(a)huawei.com>

mainline inclusion
from mainline-5.15-rc3
commit 5afedf670caf30a2b5a52da96eb7eac7dee6a9c9
category: bugfix
bugzilla: 181454
CVE: NA

---------------------------

There is an use-after-free problem triggered by following process:

      P1(sda)				P2(sdb)
			echo 0 > /sys/block/sdb/trace/enable
			  blk_trace_remove_queue
			    synchronize_rcu
			    blk_trace_free
			      relay_close
rcu_read_lock
__blk_add_trace
  trace_note_tsk
  (Iterate running_trace_list)
			        relay_close_buf
				  relay_destroy_buf
				    kfree(buf)
    trace_note(sdb's bt)
      relay_reserve
        buf->offset <- nullptr deference (use-after-free) !!!
rcu_read_unlock

[  502.714379] BUG: kernel NULL pointer dereference, address: 0000000000000010
[  502.715260] #PF: supervisor read access in kernel mode
[  502.715903] #PF: error_code(0x0000) - not-present page
[  502.716546] PGD 103984067 P4D 103984067 PUD 17592b067 PMD 0
[  502.717252] Oops: 0000 [#1] SMP
[  502.720308] RIP: 0010:trace_note.isra.0+0x86/0x360
[  502.732872] Call Trace:
[  502.733193]  __blk_add_trace.cold+0x137/0x1a3
[  502.733734]  blk_add_trace_rq+0x7b/0xd0
[  502.734207]  blk_add_trace_rq_issue+0x54/0xa0
[  502.734755]  blk_mq_start_request+0xde/0x1b0
[  502.735287]  scsi_queue_rq+0x528/0x1140
...
[  502.742704]  sg_new_write.isra.0+0x16e/0x3e0
[  502.747501]  sg_ioctl+0x466/0x1100

Reproduce method:
  ioctl(/dev/sda, BLKTRACESETUP, blk_user_trace_setup[buf_size=127])
  ioctl(/dev/sda, BLKTRACESTART)
  ioctl(/dev/sdb, BLKTRACESETUP, blk_user_trace_setup[buf_size=127])
  ioctl(/dev/sdb, BLKTRACESTART)

  echo 0 > /sys/block/sdb/trace/enable &
  // Add delay(mdelay/msleep) before kernel enters blk_trace_free()

  ioctl$SG_IO(/dev/sda, SG_IO, ...)
  // Enters trace_note_tsk() after blk_trace_free() returned
  // Use mdelay in rcu region rather than msleep(which may schedule out)

Remove blk_trace from running_list before calling blk_trace_free() by
sysfs if blk_trace is at Blktrace_running state.

Fixes: c71a896154119f ("blktrace: add ftrace plugin")
Signed-off-by: Zhihao Cheng <chengzhihao1(a)huawei.com>
Link: https://lore.kernel.org/r/20210923134921.109194-1-chengzhihao1@huawei.com
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
 kernel/trace/blktrace.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
index da0ee8cc15a72..2df442fb46bb2 100644
--- a/kernel/trace/blktrace.c
+++ b/kernel/trace/blktrace.c
@@ -1662,6 +1662,14 @@ static int blk_trace_remove_queue(struct request_queue *q)
 	if (bt == NULL)
 		return -EINVAL;
 
+	if (bt->trace_state == Blktrace_running) {
+		bt->trace_state = Blktrace_stopped;
+		spin_lock_irq(&running_trace_lock);
+		list_del_init(&bt->running_list);
+		spin_unlock_irq(&running_trace_lock);
+		relay_flush(bt->rchan);
+	}
+
 	put_probe_ref();
 	synchronize_rcu();
 	blk_trace_free(bt);
-- 
2.25.1
[PATCH openEuler-21.03] drm: Lock pointer access in drm_master_release()
by holmes 18 Oct '21
From: Desmond Cheong Zhi Xi <desmondcheongzx(a)gmail.com>

stable inclusion
from stable-v5.10.44
commit aa8591a58cbd2986090709e4202881f18e8ae30e
bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=435
CVE: NA

-------------------------------------------------

commit c336a5ee984708db4826ef9e47d184e638e29717 upstream.

This patch eliminates the following smatch warning:
drivers/gpu/drm/drm_auth.c:320 drm_master_release() warn: unlocked access 'master' (line 318) expected lock '&dev->master_mutex'

The 'file_priv->master' field should be protected by the mutex lock to
'&dev->master_mutex'. This is because other processes can concurrently
modify this field and free the current 'file_priv->master' pointer. This
could result in a use-after-free error when 'master' is dereferenced in
subsequent function calls to 'drm_legacy_lock_master_cleanup()' or to
'drm_lease_revoke()'.

An example of a scenario that would produce this error can be seen from
a similar bug in 'drm_getunique()' that was reported by Syzbot:
https://syzkaller.appspot.com/bug?id=148d2f1dfac64af52ffd27b661981a540724f8…

In the Syzbot report, another process concurrently acquired the device's
master mutex in 'drm_setmaster_ioctl()', then overwrote 'fpriv->master'
in 'drm_new_set_master()'. The old value of 'fpriv->master' was
subsequently freed before the mutex was unlocked.

Reported-by: Dan Carpenter <dan.carpenter(a)oracle.com>
Signed-off-by: Desmond Cheong Zhi Xi <desmondcheongzx(a)gmail.com>
Cc: stable(a)vger.kernel.org
Signed-off-by: Daniel Vetter <daniel.vetter(a)ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210609092119.173590-1-desmo…
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: holmes <holmes(a)my.swjtu.edu.cn>
---
 drivers/gpu/drm/drm_auth.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_auth.c b/drivers/gpu/drm/drm_auth.c
index f2d46b7ac6f9..232abbba3686 100644
--- a/drivers/gpu/drm/drm_auth.c
+++ b/drivers/gpu/drm/drm_auth.c
@@ -314,9 +314,10 @@ int drm_master_open(struct drm_file *file_priv)
 void drm_master_release(struct drm_file *file_priv)
 {
 	struct drm_device *dev = file_priv->minor->dev;
-	struct drm_master *master = file_priv->master;
+	struct drm_master *master;
 
 	mutex_lock(&dev->master_mutex);
+	master = file_priv->master;
 	if (file_priv->magic)
 		idr_remove(&file_priv->master->magic_map,
 			   file_priv->magic);
-- 
2.23.0
[PATCH OLK-5.10 v2 0/1] scsi: spfc: initial commit the spfc module
by Yanling Song 18 Oct '21
Ramaxel inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4DBD7
CVE: NA

Initial commit the spfc module for ramaxel Super FC adapter

Changes since v1:
- Add UNF_PORT_CFG_SET_PORT_STATE state

Yanling Song (1):
  scsi: spfc: initial commit the spfc module

 arch/arm64/configs/openeuler_defconfig       |    1 +
 arch/x86/configs/openeuler_defconfig         |    1 +
 drivers/scsi/Kconfig                         |    1 +
 drivers/scsi/Makefile                        |    1 +
 drivers/scsi/spfc/Kconfig                    |   17 +
 drivers/scsi/spfc/Makefile                   |   47 +
 drivers/scsi/spfc/common/unf_common.h        | 1755 +++++++
 drivers/scsi/spfc/common/unf_disc.c          | 1276 +++++
 drivers/scsi/spfc/common/unf_disc.h          |   51 +
 drivers/scsi/spfc/common/unf_event.c         |  517 ++
 drivers/scsi/spfc/common/unf_event.h         |   84 +
 drivers/scsi/spfc/common/unf_exchg.c         | 2317 +++++++++
 drivers/scsi/spfc/common/unf_exchg.h         |  436 ++
 drivers/scsi/spfc/common/unf_exchg_abort.c   |  825 +++
 drivers/scsi/spfc/common/unf_exchg_abort.h   |   23 +
 drivers/scsi/spfc/common/unf_fcstruct.h      |  459 ++
 drivers/scsi/spfc/common/unf_gs.c            | 2521 +++++++++
 drivers/scsi/spfc/common/unf_gs.h            |   58 +
 drivers/scsi/spfc/common/unf_init.c          |  353 ++
 drivers/scsi/spfc/common/unf_io.c            | 1220 +++++
 drivers/scsi/spfc/common/unf_io.h            |   96 +
 drivers/scsi/spfc/common/unf_io_abnormal.c   |  986 ++++
 drivers/scsi/spfc/common/unf_io_abnormal.h   |   19 +
 drivers/scsi/spfc/common/unf_log.h           |  178 +
 drivers/scsi/spfc/common/unf_lport.c         | 1008 ++++
 drivers/scsi/spfc/common/unf_lport.h         |  519 ++
 drivers/scsi/spfc/common/unf_ls.c            | 4884 ++++++++++++++++++
 drivers/scsi/spfc/common/unf_ls.h            |   61 +
 drivers/scsi/spfc/common/unf_npiv.c          | 1005 ++++
 drivers/scsi/spfc/common/unf_npiv.h          |   47 +
 drivers/scsi/spfc/common/unf_npiv_portman.c  |  360 ++
 drivers/scsi/spfc/common/unf_npiv_portman.h  |   17 +
 drivers/scsi/spfc/common/unf_portman.c       | 2431 +++++++++
 drivers/scsi/spfc/common/unf_portman.h       |   96 +
 drivers/scsi/spfc/common/unf_rport.c         | 2286 ++++++++
 drivers/scsi/spfc/common/unf_rport.h         |  301 ++
 drivers/scsi/spfc/common/unf_scsi.c          | 1463 ++++++
 drivers/scsi/spfc/common/unf_scsi_common.h   |  570 ++
 drivers/scsi/spfc/common/unf_service.c       | 1439 ++++++
 drivers/scsi/spfc/common/unf_service.h       |   66 +
 drivers/scsi/spfc/common/unf_type.h          |  216 +
 drivers/scsi/spfc/hw/spfc_chipitf.c          | 1105 ++++
 drivers/scsi/spfc/hw/spfc_chipitf.h          |  797 +++
 drivers/scsi/spfc/hw/spfc_cqm_bat_cla.c      | 1646 ++++++
 drivers/scsi/spfc/hw/spfc_cqm_bat_cla.h      |  215 +
 drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.c |  891 ++++
 drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.h |   65 +
 drivers/scsi/spfc/hw/spfc_cqm_main.c         | 1257 +++++
 drivers/scsi/spfc/hw/spfc_cqm_main.h         |  414 ++
 drivers/scsi/spfc/hw/spfc_cqm_object.c       |  959 ++++
 drivers/scsi/spfc/hw/spfc_cqm_object.h       |  279 +
 drivers/scsi/spfc/hw/spfc_hba.c              | 1751 +++++++
 drivers/scsi/spfc/hw/spfc_hba.h              |  341 ++
 drivers/scsi/spfc/hw/spfc_hw_wqe.h           | 1645 ++++++
 drivers/scsi/spfc/hw/spfc_io.c               | 1193 +++++
 drivers/scsi/spfc/hw/spfc_io.h               |  138 +
 drivers/scsi/spfc/hw/spfc_lld.c              |  998 ++++
 drivers/scsi/spfc/hw/spfc_lld.h              |   76 +
 drivers/scsi/spfc/hw/spfc_module.h           |  297 ++
 drivers/scsi/spfc/hw/spfc_parent_context.h   |  269 +
 drivers/scsi/spfc/hw/spfc_queue.c            | 4857 +++++++++++++++++
 drivers/scsi/spfc/hw/spfc_queue.h            |  711 +++
 drivers/scsi/spfc/hw/spfc_service.c          | 2169 ++++++++
 drivers/scsi/spfc/hw/spfc_service.h          |  282 +
 drivers/scsi/spfc/hw/spfc_utils.c            |  102 +
 drivers/scsi/spfc/hw/spfc_utils.h            |  202 +
 drivers/scsi/spfc/hw/spfc_wqe.c              |  646 +++
 drivers/scsi/spfc/hw/spfc_wqe.h              |  239 +
 68 files changed, 53555 insertions(+)
 create mode 100644 drivers/scsi/spfc/Kconfig
 create mode 100644 drivers/scsi/spfc/Makefile
 create mode 100644 drivers/scsi/spfc/common/unf_common.h
 create mode 100644 drivers/scsi/spfc/common/unf_disc.c
 create mode 100644 drivers/scsi/spfc/common/unf_disc.h
 create mode 100644 drivers/scsi/spfc/common/unf_event.c
 create mode 100644 drivers/scsi/spfc/common/unf_event.h
 create mode 100644 drivers/scsi/spfc/common/unf_exchg.c
 create mode 100644 drivers/scsi/spfc/common/unf_exchg.h
 create mode 100644 drivers/scsi/spfc/common/unf_exchg_abort.c
 create mode 100644 drivers/scsi/spfc/common/unf_exchg_abort.h
 create mode 100644 drivers/scsi/spfc/common/unf_fcstruct.h
 create mode 100644 drivers/scsi/spfc/common/unf_gs.c
 create mode 100644 drivers/scsi/spfc/common/unf_gs.h
 create mode 100644 drivers/scsi/spfc/common/unf_init.c
 create mode 100644 drivers/scsi/spfc/common/unf_io.c
 create mode 100644 drivers/scsi/spfc/common/unf_io.h
 create mode 100644 drivers/scsi/spfc/common/unf_io_abnormal.c
 create mode 100644 drivers/scsi/spfc/common/unf_io_abnormal.h
 create mode 100644 drivers/scsi/spfc/common/unf_log.h
 create mode 100644 drivers/scsi/spfc/common/unf_lport.c
 create mode 100644 drivers/scsi/spfc/common/unf_lport.h
 create mode 100644 drivers/scsi/spfc/common/unf_ls.c
 create mode 100644 drivers/scsi/spfc/common/unf_ls.h
 create mode 100644 drivers/scsi/spfc/common/unf_npiv.c
 create mode 100644 drivers/scsi/spfc/common/unf_npiv.h
 create mode 100644 drivers/scsi/spfc/common/unf_npiv_portman.c
 create mode 100644 drivers/scsi/spfc/common/unf_npiv_portman.h
 create mode 100644 drivers/scsi/spfc/common/unf_portman.c
 create mode 100644 drivers/scsi/spfc/common/unf_portman.h
 create mode 100644 drivers/scsi/spfc/common/unf_rport.c
 create mode 100644 drivers/scsi/spfc/common/unf_rport.h
 create mode 100644 drivers/scsi/spfc/common/unf_scsi.c
 create mode 100644 drivers/scsi/spfc/common/unf_scsi_common.h
 create mode 100644 drivers/scsi/spfc/common/unf_service.c
 create mode 100644 drivers/scsi/spfc/common/unf_service.h
 create mode 100644 drivers/scsi/spfc/common/unf_type.h
 create mode 100644 drivers/scsi/spfc/hw/spfc_chipitf.c
 create mode 100644 drivers/scsi/spfc/hw/spfc_chipitf.h
 create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_bat_cla.c
 create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_bat_cla.h
 create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.c
 create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.h
 create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_main.c
 create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_main.h
 create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_object.c
 create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_object.h
 create mode 100644 drivers/scsi/spfc/hw/spfc_hba.c
 create mode 100644 drivers/scsi/spfc/hw/spfc_hba.h
 create mode 100644 drivers/scsi/spfc/hw/spfc_hw_wqe.h
 create mode 100644 drivers/scsi/spfc/hw/spfc_io.c
 create mode 100644 drivers/scsi/spfc/hw/spfc_io.h
 create mode 100644 drivers/scsi/spfc/hw/spfc_lld.c
 create mode 100644 drivers/scsi/spfc/hw/spfc_lld.h
 create mode 100644 drivers/scsi/spfc/hw/spfc_module.h
 create mode 100644 drivers/scsi/spfc/hw/spfc_parent_context.h
 create mode 100644 drivers/scsi/spfc/hw/spfc_queue.c
 create mode 100644 drivers/scsi/spfc/hw/spfc_queue.h
 create mode 100644 drivers/scsi/spfc/hw/spfc_service.c
 create mode 100644 drivers/scsi/spfc/hw/spfc_service.h
 create mode 100644 drivers/scsi/spfc/hw/spfc_utils.c
 create mode 100644 drivers/scsi/spfc/hw/spfc_utils.h
 create mode 100644 drivers/scsi/spfc/hw/spfc_wqe.c
 create mode 100644 drivers/scsi/spfc/hw/spfc_wqe.h
-- 
2.30.0
