CVE-2026-43121
Severity CVSS v4.0:
Pending analysis
Type:
Unavailable / Other
Publication date:
06/05/2026
Last modified:
06/05/2026
Description
In the Linux kernel, the following vulnerability has been resolved:

io_uring/zcrx: fix user_ref race between scrub and refill paths
The io_zcrx_put_niov_uref() function uses a non-atomic
check-then-decrement pattern (atomic_read followed by a separate
atomic_dec) to manipulate user_refs. This is serialized against other
callers by rq_lock, but io_zcrx_scrub() modifies the same counter with
atomic_xchg() WITHOUT holding rq_lock.

On SMP systems, the following race exists:
CPU0 (refill, holds rq_lock)          CPU1 (scrub, no rq_lock)
----------------------------          ------------------------
put_niov_uref:
  atomic_read(uref) -> 1
  // window opens
                                      atomic_xchg(uref, 0) -> 1
                                      return_niov_freelist(niov) [PUSH #1]
  // window closes
  atomic_dec(uref) // wraps to -1
  returns true
return_niov(niov)
  return_niov_freelist(niov) [PUSH #2: DOUBLE-FREE]
The same niov is pushed to the freelist twice, causing free_count to
exceed nr_iovs. Subsequent freelist pushes then perform an out-of-bounds
write (a u32 value) past the kvmalloc'd freelist array into the adjacent
slab object.
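The interleaving above can be replayed deterministically in userspace with C11 atomics. This is an illustrative sketch, not the kernel code: the two CPUs' steps are executed by hand in the order the race diagram shows, and `replay_race` is a hypothetical name.

```c
#include <stdatomic.h>

/* Deterministic replay of the race: user_refs starts at 1, and the
 * scrub path's xchg lands inside the refill path's check/dec window. */
static int replay_race(void)
{
    atomic_int uref = 1;

    /* CPU0 (refill): non-atomic check... */
    int seen = atomic_load(&uref);      /* reads 1, decides to decrement */
    (void)seen;

    /* CPU1 (scrub): xchg lands in the window, takes the ref [PUSH #1] */
    atomic_exchange(&uref, 0);

    /* CPU0: ...then the separate decrement runs on the zeroed counter */
    atomic_fetch_sub(&uref, 1);         /* 0 - 1: wraps to -1 */

    return atomic_load(&uref);          /* -1: both paths "own" the ref */
}
```

Because both paths observe a successful put, both push the same niov to the freelist, which is the double-free in PUSH #2.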
Fix this by replacing the non-atomic read-then-dec in
io_zcrx_put_niov_uref() with an atomic_try_cmpxchg loop that atomically
tests and decrements user_refs. This makes the operation safe against
concurrent atomic_xchg from scrub without requiring scrub to acquire
rq_lock.
[pavel: removed a warning and a comment]
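The shape of the fix can be sketched in userspace with C11's `atomic_compare_exchange_weak`, which plays the role of the kernel's atomic_try_cmpxchg. This is an approximation under assumed semantics, not the actual patch, and `put_uref` is a hypothetical stand-in for io_zcrx_put_niov_uref():

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Atomically decrement the ref count only if it is non-zero.
 * Returns true when this caller performed the decrement.  A concurrent
 * atomic_exchange(uref, 0) from the scrub path can no longer slip
 * between the check and the decrement, so the counter cannot wrap. */
static bool put_uref(atomic_int *uref)
{
    int old = atomic_load(uref);
    do {
        if (old == 0)
            return false;   /* scrub already consumed the ref */
        /* on failure, old is reloaded with the current value */
    } while (!atomic_compare_exchange_weak(uref, &old, old - 1));
    return true;
}
```

With this loop, exactly one of the two racing paths wins: either the scrub's xchg sees the ref first, or the refill path's cmpxchg does, so the niov is pushed to the freelist only once.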