CVE-2026-23113
Severity CVSS v4.0:
Pending analysis
Type:
Unavailable / Other
Publication date:
14/02/2026
Last modified:
14/02/2026
Description
In the Linux kernel, the following vulnerability has been resolved:

io_uring/io-wq: check IO_WQ_BIT_EXIT inside work run loop

Currently this is checked before running the pending work. Normally this
is quite fine, as work items either end up blocking (which will create a
new worker for other items), or they complete fairly quickly. But syzbot
reports an issue where io-wq takes seemingly forever to exit, and with a
bit of debugging, this turns out to be because it queues a bunch of big
(2GB - 4096b) reads with a /dev/msr* file. Since this file type doesn't
support ->read_iter(), loop_rw_iter() ends up handling them. Each read
returns 16MB of data, which takes 20 (!!) seconds. With a bunch of these
pending, processing the whole chain can take a long time, easily longer
than the syzbot uninterruptible sleep timeout of 140 seconds.
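For context, here is a condensed, userspace-only analogue of the fallback
pattern the commit message describes; it is a sketch, not the kernel's
loop_rw_iter() source, and slow_read() is a hypothetical stand-in for a
slow byte-based ->read() hook. The point it illustrates is that a request
on a file without ->read_iter() is serviced by looping the legacy hook
until the byte count is consumed, so one huge request on a slow device
becomes a single long-running work item.

#include <stdio.h>
#include <string.h>

/* hypothetical stand-in for a slow, byte-based ->read() hook that
 * returns data in small chunks */
static long slow_read(char *dst, unsigned long len, unsigned long long *pos)
{
	unsigned long nr = len > 8 ? 8 : len;

	memset(dst, 0, nr);
	*pos += nr;
	return (long)nr;
}

/* the fallback pattern: keep calling the legacy hook until the request
 * is drained, an error occurs, or the device returns short/EOF */
static long loop_read(char *buf, unsigned long total)
{
	unsigned long long pos = 0;
	unsigned long done = 0;

	while (done < total) {
		long nr = slow_read(buf + done, total - done, &pos);

		if (nr <= 0)
			break;          /* error or EOF: stop early */
		done += (unsigned long)nr;
	}
	return (long)done;
}

int main(void)
{
	char buf[64];

	printf("read %ld bytes via the fallback loop\n",
	       loop_read(buf, sizeof(buf)));
	return 0;
}
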
This then triggers a complaint off the io-wq exit path:

INFO: task syz.4.135:6326 blocked for more than 143 seconds.
Not tainted syzkaller #0
Blocked by coredump.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.4.135 state:D stack:26824 pid:6326 tgid:6324 ppid:5957 task_flags:0x400548 flags:0x00080000
Call Trace:

context_switch kernel/sched/core.c:5256 [inline]
__schedule+0x1139/0x6150 kernel/sched/core.c:6863
__schedule_loop kernel/sched/core.c:6945 [inline]
schedule+0xe7/0x3a0 kernel/sched/core.c:6960
schedule_timeout+0x257/0x290 kernel/time/sleep_timeout.c:75
do_wait_for_common kernel/sched/completion.c:100 [inline]
__wait_for_common+0x2fc/0x4e0 kernel/sched/completion.c:121
io_wq_exit_workers io_uring/io-wq.c:1328 [inline]
io_wq_put_and_exit+0x271/0x8a0 io_uring/io-wq.c:1356
io_uring_clean_tctx+0x10d/0x190 io_uring/tctx.c:203
io_uring_cancel_generic+0x69c/0x9a0 io_uring/cancel.c:651
io_uring_files_cancel include/linux/io_uring.h:19 [inline]
do_exit+0x2ce/0x2bd0 kernel/exit.c:911
do_group_exit+0xd3/0x2a0 kernel/exit.c:1112
get_signal+0x2671/0x26d0 kernel/signal.c:3034
arch_do_signal_or_restart+0x8f/0x7e0 arch/x86/kernel/signal.c:337
__exit_to_user_mode_loop kernel/entry/common.c:41 [inline]
exit_to_user_mode_loop+0x8c/0x540 kernel/entry/common.c:75
__exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
syscall_exit_to_user_mode_prepare include/linux/irq-entry-common.h:256 [inline]
syscall_exit_to_user_mode_work include/linux/entry-common.h:159 [inline]
syscall_exit_to_user_mode include/linux/entry-common.h:194 [inline]
do_syscall_64+0x4ee/0xf80 arch/x86/entry/syscall_64.c:100
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fa02738f749
RSP: 002b:00007fa0281ae0e8 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
RAX: fffffffffffffe00 RBX: 00007fa0275e6098 RCX: 00007fa02738f749
RDX: 0000000000000000 RSI: 0000000000000080 RDI: 00007fa0275e6098
RBP: 00007fa0275e6090 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fa0275e6128 R14: 00007fff14e4fcb0 R15: 00007fff14e4fd98

There's really nothing wrong here, other than the fact that processing
these reads takes a LONG time. However, we can speed up the exit by
checking IO_WQ_BIT_EXIT inside the io_worker_handle_work() loop, as
syzbot will exit the ring after queueing up all of these reads. Then
once the first item is processed, io-wq will simply cancel the rest.
That should avoid syzbot running into this complaint again.
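
The behavioral change can be sketched with a small userspace analogue.
This is illustrative only, not the kernel patch: the name IO_WQ_BIT_EXIT
mirrors the kernel flag, but the queue, the worker loop, and the
run/cancel helpers are condensed stand-ins. The key point is that the
exit bit is re-tested on every loop iteration instead of being sampled
once before the queue is drained, so an exit requested mid-chain cancels
the remaining items instead of waiting for them all to run.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define IO_WQ_BIT_EXIT 0   /* mirrors the kernel flag name; value illustrative */

static atomic_ulong wq_state;

struct work { int id; };

static struct work queue[] = { {1}, {2}, {3}, {4} };
static const int nr_work = 4;

static void run_work(struct work *w)    { printf("ran item %d\n", w->id); }
static void cancel_work(struct work *w) { printf("cancelled item %d\n", w->id); }

static void worker_handle_work(void)
{
	for (int i = 0; i < nr_work; i++) {
		/*
		 * The fix: test the exit bit inside the run loop. Before
		 * the change it was sampled once up front, so an exit
		 * requested mid-chain went unnoticed until every pending
		 * item had run to completion.
		 */
		bool do_kill = atomic_load(&wq_state) & (1UL << IO_WQ_BIT_EXIT);

		if (do_kill)
			cancel_work(&queue[i]);
		else
			run_work(&queue[i]);

		if (i == 0)  /* simulate the ring exiting after the first item */
			atomic_fetch_or(&wq_state, 1UL << IO_WQ_BIT_EXIT);
	}
}

int main(void)
{
	worker_handle_work();
	return 0;
}

Run as written, this executes item 1, observes the exit flag on the next
iteration, and cancels items 2 through 4, matching the commit's intent
that teardown no longer waits behind a chain of slow work items.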