CVE-2023-54157
Severity (CVSS v4.0): Pending analysis
Type: Unavailable / Other
Publication date: 24/12/2025
Last modified: 24/12/2025
Description
In the Linux kernel, the following vulnerability has been resolved:

binder: fix UAF of alloc->vma in race with munmap()

[ cmllamas: clean forward port from commit 015ac18be7de ("binder: fix
UAF of alloc->vma in race with munmap()") in 5.10 stable. It is needed
in mainline after the revert of commit a43cfc87caaf ("android: binder:
stop saving a pointer to the VMA") as pointed out by Liam. The commit
log and tags have been tweaked to reflect this. ]

In commit 720c24192404 ("ANDROID: binder: change down_write to
down_read"), binder assumed the mmap read lock is sufficient to protect
alloc->vma inside binder_update_page_range(). This was accurate
until commit dd2283f2605e ("mm: mmap: zap pages with read mmap_sem in
munmap"), which downgrades the mmap_lock after detaching the vma
from the rbtree in munmap(), then proceeds to tear down and free the
vma with only the read lock held.

This means that accesses to alloc->vma in binder_update_page_range()
will now race with vm_area_free() in munmap() and can cause a UAF, as shown
in the following KASAN trace:

==================================================================
BUG: KASAN: use-after-free in vm_insert_page+0x7c/0x1f0
Read of size 8 at addr ffff16204ad00600 by task server/558

CPU: 3 PID: 558 Comm: server Not tainted 5.10.150-00001-gdc8dcf942daa #1
Hardware name: linux,dummy-virt (DT)
Call trace:
 dump_backtrace+0x0/0x2a0
 show_stack+0x18/0x2c
 dump_stack+0xf8/0x164
 print_address_description.constprop.0+0x9c/0x538
 kasan_report+0x120/0x200
 __asan_load8+0xa0/0xc4
 vm_insert_page+0x7c/0x1f0
 binder_update_page_range+0x278/0x50c
 binder_alloc_new_buf+0x3f0/0xba0
 binder_transaction+0x64c/0x3040
 binder_thread_write+0x924/0x2020
 binder_ioctl+0x1610/0x2e5c
 __arm64_sys_ioctl+0xd4/0x120
 el0_svc_common.constprop.0+0xac/0x270
 do_el0_svc+0x38/0xa0
 el0_svc+0x1c/0x2c
 el0_sync_handler+0xe8/0x114
 el0_sync+0x180/0x1c0

Allocated by task 559:
 kasan_save_stack+0x38/0x6c
 __kasan_kmalloc.constprop.0+0xe4/0xf0
 kasan_slab_alloc+0x18/0x2c
 kmem_cache_alloc+0x1b0/0x2d0
 vm_area_alloc+0x28/0x94
 mmap_region+0x378/0x920
 do_mmap+0x3f0/0x600
 vm_mmap_pgoff+0x150/0x17c
 ksys_mmap_pgoff+0x284/0x2dc
 __arm64_sys_mmap+0x84/0xa4
 el0_svc_common.constprop.0+0xac/0x270
 do_el0_svc+0x38/0xa0
 el0_svc+0x1c/0x2c
 el0_sync_handler+0xe8/0x114
 el0_sync+0x180/0x1c0

Freed by task 560:
 kasan_save_stack+0x38/0x6c
 kasan_set_track+0x28/0x40
 kasan_set_free_info+0x24/0x4c
 __kasan_slab_free+0x100/0x164
 kasan_slab_free+0x14/0x20
 kmem_cache_free+0xc4/0x34c
 vm_area_free+0x1c/0x2c
 remove_vma+0x7c/0x94
 __do_munmap+0x358/0x710
 __vm_munmap+0xbc/0x130
 __arm64_sys_munmap+0x4c/0x64
 el0_svc_common.constprop.0+0xac/0x270
 do_el0_svc+0x38/0xa0
 el0_svc+0x1c/0x2c
 el0_sync_handler+0xe8/0x114
 el0_sync+0x180/0x1c0

[...]
==================================================================

To prevent the race above, revert to taking the mmap write lock
inside binder_update_page_range(). One might expect an increase in mmap
lock contention; however, binder already serializes these calls via the
top-level alloc->mutex, and no performance impact was observed when
running the binder benchmark tests.
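In patch form, the change described above amounts to roughly the following inside binder_update_page_range() (illustrative fragment with context abbreviated; see the mainline commit for the exact hunks):

```
-	mmap_read_lock(mm);
+	mmap_write_lock(mm);
 	vma = alloc->vma;
 	...
-	mmap_read_unlock(mm);
+	mmap_write_unlock(mm);
```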