CVE-2026-31397
Severity:
Pending analysis
Type:
Not available / Other
Publication date:
03/04/2026
Last modified:
03/04/2026
Description
In the Linux kernel, the following vulnerability has been resolved:

mm/huge_memory: fix use of NULL folio in move_pages_huge_pmd()

move_pages_huge_pmd() handles UFFDIO_MOVE for both normal THPs and huge
zero pages. For the huge zero page path, src_folio is explicitly set to
NULL and is used as a sentinel to skip folio operations such as locking
and rmap.

In the huge zero page branch, src_folio is NULL, so folio_mk_pmd(NULL,
pgprot) passes NULL through folio_pfn() and page_to_pfn(). With
SPARSEMEM_VMEMMAP this silently produces a bogus PFN, installing a PMD
that points to non-existent physical memory. On other memory models it
is a NULL dereference.

Use page_folio(src_page) to obtain the valid huge zero folio from the
page, which was obtained from pmd_page() and remains valid throughout.

After commit d82d09e48219 ("mm/huge_memory: mark PMD mappings of the huge
zero folio special"), moved huge zero PMDs must remain special so that
vm_normal_page_pmd() continues to treat them as special mappings.

move_pages_huge_pmd() currently reconstructs the destination PMD in the
huge zero page branch, which drops PMD state such as pmd_special() on
architectures with CONFIG_ARCH_HAS_PTE_SPECIAL. As a result,
vm_normal_page_pmd() can treat the moved huge zero PMD as a normal page
and corrupt its refcount.

Instead of reconstructing the PMD from the folio, derive the destination
entry from src_pmdval after pmdp_huge_clear_flush(), then handle the PMD
metadata the same way move_huge_pmd() does for moved entries: mark it
soft-dirty and clear uffd-wp.



