CVE-2022-48689
Severity CVSS v4.0: Pending analysis
Type: CWE-362 (Concurrent Execution using Shared Resource with Improper Synchronization, 'Race Condition')
Publication date: 03/05/2024
Last modified: 30/10/2024
Description
In the Linux kernel, the following vulnerability has been resolved:

tcp: TX zerocopy should not sense pfmemalloc status

We got a recent syzbot report [1] showing a possible misuse
of pfmemalloc page status in TCP zerocopy paths.

Indeed, for pages coming from user space or other layers,
using page_is_pfmemalloc() is moot, and could possibly give
false positives.

There have been attempts to make page_is_pfmemalloc() more robust,
but not using it in the first place in this context is probably better,
saving CPU cycles.

Note to stable teams:

You need to backport 84ce071e38a6 ("net: introduce
__skb_fill_page_desc_noacc") as a prereq.

The race is more probable after commit c07aea3ef4d4
("mm: add a signature in struct page"), because page_is_pfmemalloc()
now uses a low-order bit from page->lru.next, which can change
more often than page->index.

The low-order bit should never be set for lru.next (when used as an anchor
in an LRU list), so the KCSAN report is mostly a false positive.

Backporting to older kernel versions seems not necessary.

[1]
BUG: KCSAN: data-race in lru_add_fn / tcp_build_frag

write to 0xffffea0004a1d2c8 of 8 bytes by task 18600 on cpu 0:
__list_add include/linux/list.h:73 [inline]
list_add include/linux/list.h:88 [inline]
lruvec_add_folio include/linux/mm_inline.h:105 [inline]
lru_add_fn+0x440/0x520 mm/swap.c:228
folio_batch_move_lru+0x1e1/0x2a0 mm/swap.c:246
folio_batch_add_and_move mm/swap.c:263 [inline]
folio_add_lru+0xf1/0x140 mm/swap.c:490
filemap_add_folio+0xf8/0x150 mm/filemap.c:948
__filemap_get_folio+0x510/0x6d0 mm/filemap.c:1981
pagecache_get_page+0x26/0x190 mm/folio-compat.c:104
grab_cache_page_write_begin+0x2a/0x30 mm/folio-compat.c:116
ext4_da_write_begin+0x2dd/0x5f0 fs/ext4/inode.c:2988
generic_perform_write+0x1d4/0x3f0 mm/filemap.c:3738
ext4_buffered_write_iter+0x235/0x3e0 fs/ext4/file.c:270
ext4_file_write_iter+0x2e3/0x1210
call_write_iter include/linux/fs.h:2187 [inline]
new_sync_write fs/read_write.c:491 [inline]
vfs_write+0x468/0x760 fs/read_write.c:578
ksys_write+0xe8/0x1a0 fs/read_write.c:631
__do_sys_write fs/read_write.c:643 [inline]
__se_sys_write fs/read_write.c:640 [inline]
__x64_sys_write+0x3e/0x50 fs/read_write.c:640
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x2b/0x70 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd

read to 0xffffea0004a1d2c8 of 8 bytes by task 18611 on cpu 1:
page_is_pfmemalloc include/linux/mm.h:1740 [inline]
__skb_fill_page_desc include/linux/skbuff.h:2422 [inline]
skb_fill_page_desc include/linux/skbuff.h:2443 [inline]
tcp_build_frag+0x613/0xb20 net/ipv4/tcp.c:1018
do_tcp_sendpages+0x3e8/0xaf0 net/ipv4/tcp.c:1075
tcp_sendpage_locked net/ipv4/tcp.c:1140 [inline]
tcp_sendpage+0x89/0xb0 net/ipv4/tcp.c:1150
inet_sendpage+0x7f/0xc0 net/ipv4/af_inet.c:833
kernel_sendpage+0x184/0x300 net/socket.c:3561
sock_sendpage+0x5a/0x70 net/socket.c:1054
pipe_to_sendpage+0x128/0x160 fs/splice.c:361
splice_from_pipe_feed fs/splice.c:415 [inline]
__splice_from_pipe+0x222/0x4d0 fs/splice.c:559
splice_from_pipe fs/splice.c:594 [inline]
generic_splice_sendpage+0x89/0xc0 fs/splice.c:743
do_splice_from fs/splice.c:764 [inline]
direct_splice_actor+0x80/0xa0 fs/splice.c:931
splice_direct_to_actor+0x305/0x620 fs/splice.c:886
do_splice_direct+0xfb/0x180 fs/splice.c:974
do_sendfile+0x3bf/0x910 fs/read_write.c:1249
__do_sys_sendfile64 fs/read_write.c:1317 [inline]
__se_sys_sendfile64 fs/read_write.c:1303 [inline]
__x64_sys_sendfile64+0x10c/0x150 fs/read_write.c:1303
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x2b/0x70 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd

value changed: 0x0000000000000000 -> 0xffffea0004a1d288

Reported by Kernel Concurrency Sanitizer on:
CPU: 1 PID: 18611 Comm: syz-executor.4 Not tainted 6.0.0-rc2-syzkaller-00248-ge022620b5d05-dirty #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/22/2022
Impact
Base Score 3.x: 7.0
Severity 3.x: HIGH
Vulnerable products and versions
| CPE | From | Up to |
|---|---|---|
| cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:* | 5.14 (including) | 5.15.68 (excluding) |
| cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:* | 5.16 (including) | 5.19.9 (excluding) |