CVE-2026-43348
Severity:
Pending analysis
Type:
Not Available / Other
Publication date:
08/05/2026
Last modified:
12/05/2026
Description
In the Linux kernel, the following vulnerability has been resolved:

mshv_vtl: Fix vmemmap_shift exceeding MAX_FOLIO_ORDER

When registering VTL0 memory via MSHV_ADD_VTL0_MEMORY, the kernel
computes pgmap->vmemmap_shift as the number of trailing zeros in the
OR of start_pfn and last_pfn, intending to use the largest compound
page order to which both endpoints are aligned.

However, this value is not clamped to MAX_FOLIO_ORDER, so a
sufficiently aligned range (e.g. the physical range
[0x800000000000, 0x800080000000), corresponding to start_pfn=0x800000000
with 35 trailing zeros) can produce a shift larger than what
memremap_pages() accepts, triggering a WARN and returning -EINVAL:

WARNING: ... memremap_pages+0x512/0x650
requested folio size unsupported

The MAX_FOLIO_ORDER check was added by commit 646b67d57589
("mm/memremap: reject unreasonable folio/compound page sizes in
memremap_pages()").

Fix this by clamping vmemmap_shift to MAX_FOLIO_ORDER, so that in such
cases we request the largest order the kernel supports rather than an
out-of-range value.

Also fix the error path to propagate the actual error code from
devm_memremap_pages() instead of hard-coding -EFAULT, which was
masking the real -EINVAL return.



