CVE-2025-46560
Severity CVSS v4.0:
Pending analysis
Type:
Unavailable / Other
Publication date:
30/04/2025
Last modified:
28/05/2025
Description
vLLM is a high-throughput, memory-efficient inference and serving engine for LLMs. Versions from 0.8.0 up to, but not including, 0.8.5 are affected by a critical performance vulnerability in the input preprocessing logic of the multimodal tokenizer. The code dynamically replaces multimodal placeholder tokens with repeated tokens based on precomputed lengths. Because the replacement relies on inefficient list concatenation, the algorithm exhibits quadratic time complexity (O(n²)), allowing malicious actors to trigger resource exhaustion via specially crafted inputs. This issue has been patched in version 0.8.5.
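The quadratic blow-up described above can be illustrated with a minimal sketch. This is not the actual vLLM code; it only demonstrates why building a token list via repeated concatenation is O(n²), while an in-place extend stays O(n). The function names and the zero-filled placeholder expansions are illustrative assumptions.

```python
def build_quadratic(placeholder_lens):
    # Each `tokens + chunk` allocates a new list and copies the entire
    # accumulated prefix, so total work grows as O(n^2) in the output size.
    tokens = []
    for n in placeholder_lens:
        tokens = tokens + [0] * n  # full copy on every iteration
    return tokens

def build_linear(placeholder_lens):
    # extend() appends in place (amortized O(1) per token), so total
    # work stays O(n) in the output size.
    tokens = []
    for n in placeholder_lens:
        tokens.extend([0] * n)
    return tokens
```

Both functions produce the same output; the difference is only in cost growth. With many small placeholder expansions in one crafted input, the quadratic variant can consume far more CPU time, which is the resource-exhaustion vector the advisory describes.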
Impact
Base Score 3.x:
6.50
Severity 3.x:
MEDIUM
Vulnerable products and versions
| CPE | From | Up to |
|---|---|---|
| cpe:2.3:a:vllm:vllm:*:*:*:*:*:*:*:* | 0.8.0 (including) | 0.8.5 (excluding) |