CVE-2026-23161
Severity CVSS v4.0:
Pending analysis
Type:
Unavailable / Other
Publication date:
14/02/2026
Last modified:
14/02/2026
Description
In the Linux kernel, the following vulnerability has been resolved:

mm/shmem, swap: fix race of truncate and swap entry split

The helper for shmem swap freeing does not handle the order of swap
entries correctly. It uses xa_cmpxchg_irq to erase the swap entry, but
it retrieves the entry order beforehand with xa_get_order without lock
protection, so it may observe an outdated order if the entry is split
or otherwise changed between the xa_get_order and the xa_cmpxchg_irq.

Besides, the order could also grow larger than expected and cause
truncation to erase data beyond the end border. For example, if the
target entry and the following entries are swapped in or freed, and
then a large folio is added in their place and swapped out reusing the
same entry, the xa_cmpxchg_irq will still succeed. This is very
unlikely to happen, though.

To fix that, open code the XArray cmpxchg and put the order retrieval
and value check in the same critical section. Also ensure the order
won't exceed the end border, skipping the entry if it crosses the
border.

Skipping large swap entries that cross the end border is safe here.
Shmem truncation iterates the range twice: in the first iteration,
find_lock_entries has already filtered out such entries, and shmem
swaps in the entries that cross the end border and partially truncates
the folio (splitting it or at least zeroing part of it). So if the
second loop here sees a swap entry that crosses the end border, its
content must already have been erased.

I observed random swapoff hangs and kernel panics when stress testing<br />
ZSWAP with shmem. After applying this patch, all problems are gone.



