CVE-2026-43404
Severity:
Pending analysis
Type:
Not Available / Other
Publication date:
08/05/2026
Last modified:
12/05/2026
Description
In the Linux kernel, the following vulnerability has been resolved:

mm: Fix a hmm_range_fault() livelock / starvation problem

If hmm_range_fault() fails a folio_trylock() in do_swap_page() while
trying to acquire the lock of a device-private folio for migration
to RAM, the function spins until it succeeds in grabbing the lock.

However, if the process holding the lock depends on the completion
of a work item that is scheduled on the same CPU as the spinning
hmm_range_fault(), that work item may be starved, and we end up in
a livelock / starvation situation that is never resolved.

This can happen, for example, if the process holding the
device-private folio lock is stuck in
migrate_device_unmap()->lru_add_drain_all(),
since lru_add_drain_all() requires a short work item
to run on all online CPUs before it completes.
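The circular dependency can be sketched as follows (a simplified,
hypothetical trace of the scenario described above, not an actual
kernel log; both tasks run on the same CPU):

```
Task A: holds the device-private folio lock
        -> migrate_device_unmap()
        -> lru_add_drain_all()
           /* queues a short drain work item on every online CPU,
              including this one, and waits for all of them */

Task B: hmm_range_fault() -> do_swap_page()
        -> folio_trylock() fails (Task A holds the lock)
        -> returns, and hmm_range_fault() retries immediately
```

With no (or only voluntary) preemption, Task B never yields the CPU,
so the drain work item never runs, Task A never finishes
lru_add_drain_all(), and the folio is never unlocked.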

A prerequisite for this to happen is:
a) Both zone device and system memory folios are considered in
   migrate_device_unmap(), so that there is a reason to call
   lru_add_drain_all() for a system memory folio while a
   folio lock is held on a zone device folio.
b) The zone device folio has an initial mapcount > 1, which causes
   at least one migration PTE entry insertion to be deferred to
   try_to_migrate(), which can happen after the call to
   lru_add_drain_all().
c) No preemption, or voluntary preemption only.

This all seems pretty unlikely to happen, but it is indeed hit by
the "xe_exec_system_allocator" igt test.

Resolve this by waiting for the folio to be unlocked if the
folio_trylock() fails in do_swap_page().

Rename migration_entry_wait_on_locked() to
softleaf_entry_wait_unlock() and update its documentation to
indicate the new use-case.

Future code improvements might consider moving the
lru_add_drain_all() call in migrate_device_unmap() so that it is
called *after* all pages have migration entries inserted.
That would also eliminate prerequisite b) above.
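A minimal sketch of the fix in do_swap_page() might look like the
following. This paraphrases the change described above rather than
quoting the actual patch; the argument list of the wait helper is an
illustrative assumption:

```c
if (!folio_trylock(folio)) {
        /* Before: fail the fault and let the caller retry.
         * hmm_range_fault() retries immediately, busy-waiting and
         * starving the work item the lock holder depends on. */

        /* After: sleep until the holder drops the folio lock, then
         * let the fault be retried. While this task sleeps, the
         * holder's drain work item can run and make progress. */
        softleaf_entry_wait_unlock(entry /*, ... hypothetical args */);
        goto out;
}
```

The key design point is replacing a busy retry loop with a blocking
wait, which removes the CPU-time dependency between the faulting task
and the lock holder's deferred work.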
v2:
- Instead of a cond_resched() in hmm_range_fault(),
  eliminate the problem by waiting for the folio to be unlocked
  in do_swap_page() (Alistair Popple, Andrew Morton)
v3:
- Add a stub migration_entry_wait_on_locked() for the
  !CONFIG_MIGRATION case. (Kernel Test Robot)
v4:
- Rename migrate_entry_wait_on_locked() to
  softleaf_entry_wait_on_locked() and update docs (Alistair Popple)
v5:
- Add a WARN_ON_ONCE() for the !CONFIG_MIGRATION
  version of softleaf_entry_wait_on_locked().
- Modify wording around function names in the commit message
  (Andrew Morton)

(cherry picked from commit a69d1ab971a624c6f112cea61536569d579c3215)



