CVE-2025-39961
Severity:
Pending analysis
Type:
Not available / Other
Publication date:
09/10/2025
Last modified:
09/10/2025
Description
In the Linux kernel, the following vulnerability has been resolved:

iommu/amd/pgtbl: Fix possible race while increasing page table level

The AMD IOMMU host page table implementation supports dynamic page table levels
(up to 6 levels), starting with a 3-level configuration that expands based on the
IOVA address. The kernel maintains a root pointer and the current page table level
to enable proper page table walks in alloc_pte()/fetch_pte() operations.

The IOMMU IOVA allocator initially starts with 32-bit addresses and, once those
are exhausted, switches to 64-bit addresses (the maximum address is determined by
the IOMMU and device DMA capability). To support the larger IOVA range, the AMD
IOMMU driver increases the page table level.

But in the unmap path (iommu_v1_unmap_pages()), fetch_pte() reads
pgtable->[root/mode] without the lock. So, in an extreme corner case, while
increase_address_space() is updating pgtable->[root/mode], fetch_pte() may read
the wrong page table level (pgtable->mode). It compares that value against the
level encoded in the page table and returns NULL. This causes the iommu_unmap
op to fail, and the upper layer may retry or log a WARN_ON.

CPU 0                                        CPU 1
------                                       ------
map pages                                    unmap pages
alloc_pte() -> increase_address_space()      iommu_v1_unmap_pages() -> fetch_pte()
pgtable->root = pte (new root value)
                                             READ pgtable->[mode/root]
                                             Reads new root, old mode
Updates mode (pgtable->mode += 1)
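
As a rough illustration of the window shown above, here is a minimal kernel-style sketch of the unsynchronized pattern: the writer publishes the new root and the new level as two separate stores, and the lockless reader can observe them in a torn state. The names demo_pgtable, demo_grow_level and demo_fetch_pte are hypothetical placeholders used only for this sketch; they are not the AMD IOMMU driver's actual symbols.

#include <linux/compiler.h>
#include <linux/spinlock.h>
#include <linux/types.h>

/* Hypothetical stand-in for the driver's page table descriptor. */
struct demo_pgtable {
        u64             *root;  /* top-level page table page           */
        int             mode;   /* current number of page table levels */
        spinlock_t      lock;   /* serializes writers only             */
};

/* Map path: grow the table by one level, in the spirit of
 * increase_address_space(). The spinlock orders writers against each
 * other, but readers never take it. */
static void demo_grow_level(struct demo_pgtable *pgt, u64 *new_root)
{
        unsigned long flags;

        spin_lock_irqsave(&pgt->lock, flags);
        WRITE_ONCE(pgt->root, new_root);        /* store #1: new root  */
        WRITE_ONCE(pgt->mode, pgt->mode + 1);   /* store #2: new level */
        spin_unlock_irqrestore(&pgt->lock, flags);
}

/* Unmap path: lockless read, in the spirit of fetch_pte(). A reader
 * running between the two stores above sees the new root paired with
 * the stale mode. */
static u64 *demo_fetch_pte(struct demo_pgtable *pgt)
{
        u64 *root = READ_ONCE(pgt->root);
        int mode  = READ_ONCE(pgt->mode);

        /* Placeholder for the real walk, which descends 'mode' levels
         * from 'root' and checks 'mode' against the level encoded in
         * the PTEs; a torn root/mode pair makes that check fail. */
        return mode >= 3 ? root : NULL;
}
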
Since page table level updates are infrequent and already synchronized with a
spinlock, implement a seqcount to enable lock-free operation on the read path.
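
Below is a minimal sketch, under the same hypothetical demo_* names, of the seqcount approach the fix describes: the writer, already serialized by the spinlock, brackets its updates in a write section, and the lock-free reader retries if it raced with a writer, so root and mode always come from the same update. This is only an illustration of the technique; the upstream patch is the authoritative reference for the actual field names and call sites.

#include <linux/compiler.h>
#include <linux/seqlock.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct demo_pgtable {
        u64             *root;
        int             mode;
        spinlock_t      lock;           /* still serializes writers         */
        seqcount_t      seqcount;       /* lets readers detect torn updates */
};

/* seqcount_init() and spin_lock_init() happen at pgtable setup (omitted). */

/* Writer: level increases are rare and already run under the spinlock,
 * so the extra seqcount bump on this slow path is cheap. */
static void demo_grow_level(struct demo_pgtable *pgt, u64 *new_root)
{
        unsigned long flags;

        spin_lock_irqsave(&pgt->lock, flags);
        write_seqcount_begin(&pgt->seqcount);
        WRITE_ONCE(pgt->root, new_root);
        WRITE_ONCE(pgt->mode, pgt->mode + 1);
        write_seqcount_end(&pgt->seqcount);
        spin_unlock_irqrestore(&pgt->lock, flags);
}

/* Reader: still lock-free; it re-reads root and mode whenever a writer
 * ran concurrently, so it never walks with a new root and a stale level. */
static u64 *demo_fetch_pte(struct demo_pgtable *pgt)
{
        unsigned int seq;
        u64 *root;
        int mode;

        do {
                seq  = read_seqcount_begin(&pgt->seqcount);
                root = READ_ONCE(pgt->root);
                mode = READ_ONCE(pgt->mode);
        } while (read_seqcount_retry(&pgt->seqcount, seq));

        /* Placeholder for the real walk over 'mode' levels from 'root'. */
        return mode >= 3 ? root : NULL;
}

A seqcount fits here because the updates are infrequent and writers are already serialized; in the common, uncontended case the reader pays only two extra reads of the sequence counter.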



