Instituto Nacional de Ciberseguridad. INCIBE-CERT Section

CVE-2025-62164

CVSS v3.1 severity:
HIGH
Type:
CWE-20 Improper Input Validation
Publication date:
21/11/2025
Last modified:
04/12/2025

Description

vLLM is an inference and serving engine for large language models (LLMs). In versions 0.10.2 up to but not including 0.11.1, a memory corruption vulnerability exists in the Completions API endpoint that can lead to a crash (denial of service) and potentially remote code execution (RCE). When processing user-supplied prompt embeddings, the endpoint loads serialized tensors using torch.load() without sufficient validation. Due to a change introduced in PyTorch 2.8.0, sparse tensor integrity checks are disabled by default. As a result, maliciously crafted tensors can bypass internal bounds checks and trigger an out-of-bounds memory write during the call to to_dense(). This memory corruption can crash vLLM and potentially lead to code execution on the server hosting it. This issue has been patched in version 0.11.1.
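The core of the flaw is that a COO-format sparse tensor carries explicit (index, value) pairs, and densifying it writes each value at the position its index names; if those indices are not validated against the tensor's declared shape before densification, an attacker-controlled index writes outside the allocated buffer. The following is a hypothetical, simplified sketch (not vLLM's or PyTorch's actual code) of the bounds check that the disabled integrity verification would normally perform; in pure Python an out-of-range index would merely raise an IndexError, but in native code the same missing check becomes an out-of-bounds memory write.

```python
# Hypothetical illustration of validating COO sparse-tensor indices before
# densification. Names and structure are assumptions for this sketch, not
# the real PyTorch/vLLM implementation.

def validate_coo_indices(indices, shape):
    """Return True only if every COO coordinate fits within `shape`."""
    for coord in indices:
        if len(coord) != len(shape):
            return False
        if any(i < 0 or i >= dim for i, dim in zip(coord, shape)):
            return False
    return True

def to_dense_checked(indices, values, shape):
    """Densify a 2-D COO tensor, rejecting out-of-bounds indices up front."""
    if not validate_coo_indices(indices, shape):
        raise ValueError("sparse tensor index out of bounds")
    rows, cols = shape
    dense = [[0.0] * cols for _ in range(rows)]
    for (r, c), v in zip(indices, values):
        dense[r][c] = v  # safe: every (r, c) was verified above
    return dense

# A well-formed tensor densifies normally...
ok = to_dense_checked([(0, 1), (1, 0)], [2.0, 3.0], (2, 2))

# ...while a maliciously crafted index is rejected before any write occurs.
try:
    to_dense_checked([(0, 5)], [9.0], (2, 2))
    rejected = False
except ValueError:
    rejected = True
```

The patched vLLM release restores this kind of validation for user-supplied prompt-embedding tensors; until then, the crafted index reached the native densification path unchecked.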

Vulnerable products and versions

CPE                                              From                To
cpe:2.3:a:vllm:vllm:*:*:*:*:*:*:*:*              0.10.2 (including)  0.11.1 (excluding)
cpe:2.3:a:vllm:vllm:0.11.1:rc0:*:*:*:*:*:*
cpe:2.3:a:vllm:vllm:0.11.1:rc1:*:*:*:*:*:*